Archive for September 2010

A New Approach to Intermediate Macroeconomics

September 30, 2010

This post is aimed primarily at Economics professors and students.

Prof. Greg Mankiw, in co-authorship with Prof. Larry Ball, will be releasing a new intermediate macroeconomics textbook in early December of this year. It is aimed at those interested in more material on the financial system than is covered in traditional macro courses. For more information, just follow the link here, and to read a sample chapter of the new book, just click here!

In any case, I would recommend watching the video below, in which the authors discuss aspects of the new book!

Government IT management is unsatisfactory, says TCU

September 29, 2010

A story picked up for the blog!


Government IT management is unsatisfactory, says TCU

Tuesday, September 28, 2010, 12:44 p.m.

IT governance and management in the federal public administration still show unsatisfactory levels. That is the conclusion of a report by the Tribunal de Contas da União (TCU), which finds that 57% of federal institutions are still at an initial stage of IT governance, while 38% are at an intermediate stage and only 5% at an advanced stage.

The study, which examined 265 federal administration bodies, took into account each institution's level of technology use and the knowledge and preparedness of its IT staff to support the needs of the organization itself and of the government. The TCU surveyed ministries, universities, autonomous agencies, foundations, federal courts, secretariats, and state-owned enterprises.

For Minister Aroldo Cedraz, rapporteur of the case, "the government's IT panorama is bleak." He defines IT governance as "the structured set of policies, standards, methods, and procedures intended to enable senior management and executives to plan, direct, and control the current and future use of information technology." That, he notes, is not what happens in government bodies.

The Firm as a Governance Structure: From Choice to Contracts

September 28, 2010

If you want to get a sense of how the firm is understood by the new economic theory, just think of it as a governance structure.

This is the subject of the new Creativante newsletter, entitled "A Empresa como Estrutura de Governança: Da Escolha para os Contratos" ("The Firm as a Governance Structure: From Choice to Contracts"), which can be accessed here!

Implications of the Financial Crisis for Economics

September 25, 2010

Here is an excellent speech delivered yesterday by the Chairman of the Federal Reserve (the U.S. central bank)!

There is nothing like a central bank chief who is also an academic!

Chairman Ben S. Bernanke
At the Conference Co-sponsored by the Center for Economic Policy Studies and the Bendheim Center for Finance, Princeton University, Princeton, New Jersey, September 24, 2010

Implications of the Financial Crisis for Economics

Thank you for giving me this opportunity to return to Princeton. It is good to be able to catch up with old friends and colleagues and to see both the changes and the continuities on campus. I am particularly pleased to see that the Bendheim Center for Finance is thriving. When my colleagues and I founded the center a decade ago, we intended it to be a place where students would learn about not only the technicalities of modern financial theory and practice but also about the broader economic context of financial activities. Recent events have made clear that understanding the role of financial markets and institutions in the economy, and of the effects of economic developments on finance, is more important than ever.

The financial crisis that began more than three years ago has indeed proved to be among the most difficult challenges for economic policymakers since the Great Depression. The policy response to this challenge has included important successes, most notably the concerted international effort to stabilize the global financial system after the crisis reached its worst point in the fall of 2008. For its part, the Federal Reserve worked closely with other policymakers, both domestically and internationally, to help develop the collective response to the crisis, and it played a key role in that response by providing backstop liquidity to a range of financial institutions as needed to stem the panic. The Fed also developed special lending facilities that helped to restore normal functioning to critical financial markets, including the commercial paper market and the market for asset-backed securities; led the bank stress tests in the spring of 2009 that significantly improved confidence in the U.S. banking system; and, in the area of monetary policy, took aggressive and innovative actions that helped to stabilize the economy and lay the groundwork for recovery.

Despite these and other policy successes, the episode as a whole has not been kind to the reputation of economics and economists, and understandably so. Almost universally, economists failed to predict the nature, timing, or severity of the crisis; and those few who issued early warnings generally identified only isolated weaknesses in the system, not anything approaching the full set of complex linkages and mechanisms that amplified the initial shocks and ultimately resulted in a devastating global crisis and recession. Moreover, although financial markets are for the most part functioning normally now, a concerted policy effort has so far not produced an economic recovery of sufficient vigor to significantly reduce the high level of unemployment. As a result of these developments, some observers have suggested the need for an overhaul of economics as a discipline, arguing that much of the research in macroeconomics and finance in recent decades has been of little value or even counterproductive.

Although economists have much to learn from this crisis, as I will discuss, I think that calls for a radical reworking of the field go too far. In particular, it seems to me that current critiques of economics sometimes conflate three overlapping yet separate enterprises, which, for the purposes of my remarks today, I will call economic science, economic engineering, and economic management. Economic science concerns itself primarily with theoretical and empirical generalizations about the behavior of individuals, institutions, markets, and national economies. Most academic research falls in this category. Economic engineering is about the design and analysis of frameworks for achieving specific economic objectives. Examples of such frameworks are the risk-management systems of financial institutions and the financial regulatory systems of the United States and other countries. Economic management involves the operation of economic frameworks in real time–for example, in the private sector, the management of complex financial institutions or, in the public sector, the day-to-day supervision of those institutions.

As you may have already guessed, my terminology is intended to invoke a loose analogy with science and engineering. Underpinning any practical scientific or engineering endeavor, such as a moon shot, a heart transplant, or the construction of a skyscraper, are: first, fundamental scientific knowledge; second, principles of design and engineering, derived from experience and the application of fundamental knowledge; and third, the management of the particular endeavor, often including the coordination of the efforts of many people in a complex enterprise while dealing with myriad uncertainties. Success in any practical undertaking requires all three components. For example, the fight to control AIDS requires scientific knowledge about the causes and mechanisms of the disease (the scientific component), the development of medical technologies and public health strategies (the engineering applications), and the implementation of those technologies and strategies in specific communities and for individual patients (the management aspect). Twenty years ago, AIDS mortality rates mostly reflected gaps in scientific understanding and in the design of drugs and treatment technologies; today, the problem is more likely to be a lack of funding or trained personnel to carry out programs or to apply treatments.

With that taxonomy in hand, I would argue that the recent financial crisis was more a failure of economic engineering and economic management than of what I have called economic science. The economic engineering problems were reflected in a number of structural weaknesses in our financial system. In the private sector, these weaknesses included inadequate risk-measurement and risk-management systems at many financial firms as well as shortcomings in some firms’ business models, such as overreliance on unstable short-term funding and excessive leverage. In the public sector, gaps and blind spots in the financial regulatory structures of the United States and most other countries proved particularly damaging. These regulatory structures were designed for earlier eras and did not adequately adapt to rapid change and innovation in the financial sector, such as the increasing financial intermediation taking place outside of regulated depository institutions through the so-called shadow banking system. In the realm of economic management, the leaders of financial firms, market participants, and government policymakers either did not recognize important structural problems and emerging risks or, when they identified them, did not respond sufficiently quickly or forcefully to address them. Shortcomings of what I have called economic science, in contrast, were for the most part less central to the crisis; indeed, although the great majority of economists did not foresee the near-collapse of the financial system, economic analysis has proven and will continue to prove critical in understanding the crisis, in developing policies to contain it, and in designing longer-term solutions to prevent its recurrence.

I don’t want to push this analogy too far. Economics as a discipline differs in important ways from science and engineering; the latter, dealing as they do with inanimate objects rather than willful human beings, can often be far more precise in their predictions. Also, the distinction between economic science and economic engineering can be less sharp than my analogy may suggest, as much economic research has direct policy implications. And although I don’t think the crisis by any means requires us to rethink economics and finance from the ground up, it did reveal important shortcomings in our understanding of certain aspects of the interaction of financial markets, institutions, and the economy as a whole, as I will discuss. Certainly, the crisis should lead–indeed, it is already leading–to a greater focus on research related to financial instability and its implications for the broader economy.

In the remainder of my remarks, I will focus on the implications of the crisis for what I have been calling economic science, that is, basic economic research and analysis. I will first provide a few examples of how economic principles and economic research, rather than having misled us, have significantly enhanced our understanding of the crisis and are informing the regulatory response. However, the crisis did reveal some gaps in economists’ knowledge that should be remedied. I will discuss some of these gaps and suggest possible directions for future research that could ultimately help us achieve greater financial and macroeconomic stability.

How Economics Helped Us Understand and Respond to the Crisis
The financial crisis represented an enormously complex set of interactions–indeed, a discussion of the triggers that touched off the crisis and the vulnerabilities in the financial system and in financial regulation that allowed the crisis to have such devastating effects could more than fill my time this afternoon.1 The complexity of our financial system, and the resulting difficulty of predicting how developments in one financial market or institution may affect the system as a whole, presented formidable challenges. But, at least in retrospect, economic principles and research were quite useful for understanding key aspects of the crisis and for designing appropriate policy responses.

For example, the excessive dependence of some financial firms on unstable short-term funding led to runs on key institutions, with highly adverse implications for the functioning of the system as a whole. The fact that dependence on unstable short-term funding could lead to runs is hardly news to economists; it has been a central issue in monetary economics since Henry Thornton and Walter Bagehot wrote about the question in the 19th century.2 Indeed, the recent crisis bore a striking resemblance to the bank runs that figured so prominently in Thornton’s and Bagehot’s eras; but in this case, the run occurred outside the traditional banking system, in the shadow banking system–consisting of financial institutions other than regulated depository institutions, such as securitization vehicles, money market funds, and investment banks. Prior to the crisis, these institutions had become increasingly dependent on various forms of short-term wholesale funding, as had some globally active commercial banks. Examples of such funding include commercial paper, repurchase agreements (repos), and securities lending. In the years immediately before the crisis, some of these forms of funding grew especially rapidly; for example, repo liabilities of U.S. broker-dealers increased by a factor of 2-1/2 in the four years before the crisis, and a good deal of this expansion reportedly funded holdings of relatively less liquid securities.

In the historically familiar bank run during the era before deposit insurance, retail depositors who heard rumors about the health of their bank–whether true or untrue–would line up to withdraw their funds. If the run continued, then, absent intervention by the central bank or some other provider of liquidity, the bank would run out of the cash necessary to pay off depositors and then fail as a result. Often, the panic would spread as other banks with similar characteristics to, or having a financial relationship with, the one that had failed came under suspicion. In the recent crisis, money market mutual funds and their investors, as well as other providers of short-term funding, were the economic equivalent of early-1930s retail depositors. Shadow banks relied on these providers to fund longer-term credit instruments, including securities backed by subprime mortgages. After house prices began to decline, concerns began to build about the quality of subprime mortgage loans and, consequently, about the quality of the securities into which these and other forms of credit had been packaged. Although many shadow banks had limited exposure to subprime loans and other questionable credits, the complexity of the securities involved and the opaqueness of many of the financial arrangements made it difficult for investors to distinguish relative risks. In an environment of heightened uncertainty, many investors concluded that simply withdrawing funds was the easier and more prudent alternative. In turn, financial institutions, knowing the risks posed by a run, began to hoard cash, which dried up liquidity and significantly limited their willingness to extend new credit.3 

Because the runs on the shadow banking system occurred in a historically unfamiliar context, outside the commercial banking system, both the private sector and the regulators insufficiently anticipated the risk that such runs might occur. However, once the threat became apparent, two centuries of economic thinking on runs and panics were available to inform the diagnosis and the policy response. In particular, in the recent episode, central banks around the world followed the dictum set forth by Bagehot in 1873: To avert or contain panics, central banks should lend freely to solvent institutions, against good collateral.4 The Federal Reserve indeed acted quickly to provide liquidity to the banking system, for example, by easing lending terms at the discount window and establishing regular auctions in which banks could bid for term central bank credit. Invoking emergency powers not used since the 1930s, the Federal Reserve also found ways to provide liquidity to critical parts of the shadow banking system, including securities dealers, the commercial paper market, money market mutual funds, and the asset-backed securities market. For today’s purposes, my point is not to review this history but instead to point out that, in its policy response, the Fed was relying on well-developed economic ideas that have deep historical roots.5 The problem in this case was not a lack of professional understanding of how runs come about or how central banks and other authorities should respond to them. Rather, the problem was the failure of both private- and public-sector actors to recognize the potential for runs in an institutional context quite different from the circumstances that had given rise to such events in the past. These failures in turn were partly the result of a regulatory structure that had not adapted adequately to the rise of shadow banking and that placed insufficient emphasis on the detection of systemic risks, as opposed to risks to individual institutions and markets.

Economic research and analysis have proved useful in understanding many other aspects of the crisis as well. For example, one of the most important developments in economics over recent decades has been the flowering of information economics, which studies how incomplete information or differences in information among economic agents affect market outcomes.6 An important branch of information economics, principal-agent theory, considers the implications of differences in information between the principals in a relationship (say, the shareholders of a firm) and the agents who work for the principals (say, the firm’s managers). Because the agent typically has more information than the principal–managers tend to know more about the firm’s opportunities and problems than do the shareholders, for example–and because the financial interests of the principal and the agent are not perfectly aligned, much depends on the contract (whether explicit or implicit) between the principal and the agent, and, in particular, on the incentives that the contract provides the agent.

Poorly structured incentives were pervasive in the crisis. For example, compensation practices at financial institutions, which often tied bonuses to short-term results and made insufficient adjustments for risk, contributed to an environment in which both top managers and lower-level employees, such as traders and loan officers, took excessive risks. Serious problems with the structure of incentives also emerged in the application of the so-called originate-to-distribute model to subprime mortgages. To satisfy the strong demand for securitized products, both mortgage lenders and those who packaged the loans for sale to investors were compensated primarily on the quantity of “product” they moved through the system. As a result, they paid less attention to credit quality, and many loans were made without sufficient documentation or care in underwriting. Conflicts of interest at credit rating agencies, which were supposed to serve investors but had incentives to help issuers of securities obtain high credit ratings, are another example.

Consistent with key aspects of research in information economics, the public policy responses to these problems have focused on improving market participants’ incentives. For example, to address problems with compensation practices, the Federal Reserve, in conjunction with other supervisory agencies, has subjected compensation practices of banking institutions to supervisory review. The interagency supervisory guidance supports compensation practices that induce employees to take a longer-term perspective, such as paying part of employees’ compensation in stock that vests based on sustained strong performance. To ameliorate the problems with the originate-to-distribute model, recent legislation requires regulatory agencies, including the Federal Reserve, to develop new standards applicable to securitization activities that would better align the incentives faced by market participants involved in the various stages of the securitization process.7 And the Securities and Exchange Commission has been charged with developing new rules to reduce conflicts of interest at credit rating agencies.

Information economics and principal-agent theory are also essential to understanding the problems created by so-called too-big-to-fail financial institutions. Prior to the crisis, market participants believed that large, complex, and interconnected financial firms would not be allowed to fail during a financial crisis. And, as you know, authorities both in the United States and abroad did in fact intervene on a number of occasions to prevent the failure of such firms–not out of any special consideration for the owners, managers, or creditors of these firms, but because of legitimate concerns about potential damage to the financial system and the broader economy. However, although the instability caused by the failure or near-failure of some large firms did indeed prove very costly, in some sense the real damage was done before the crisis. If creditors in good times believe that certain firms will not be allowed to fail, they will demand relatively little compensation for risk, thus weakening market discipline; in addition, creditors will not have much incentive to monitor the firms’ risk-taking. As a result, as predicted by principal-agent theory, firms thought to be too big to fail tended to take on more risk, as they faced little pressure from investors and expected to receive assistance if their bets went bad. This problem is an example of what economists refer to as moral hazard. The resulting buildup of risk in too-big-to-fail firms increased the likelihood that a financial crisis would occur and worsened the crisis when it did occur.
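To make the pricing channel concrete, here is a deliberately stylized sketch (my own notation, not a model from the speech) of how a perceived bailout changes the yield a risk-neutral creditor requires on one unit of debt:

```latex
% p     = probability the firm fails
% \pi_b = perceived probability of a government rescue given failure
% \rho  = recovery per unit of debt in an unassisted failure (\rho < 1)
% r_f   = risk-free rate; r = promised yield
(1-p)(1+r) \;+\; p\left[\pi_b (1+r) + (1-\pi_b)\,\rho\right] \;=\; 1 + r_f
```

Solving for the spread shows that as $\pi_b \to 1$ the required spread $r - r_f \to 0$ for any failure probability $p$: a creditor who expects to be made whole charges essentially nothing for risk and has no payoff from monitoring it, which is exactly the weakening of market discipline described above.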

One response to excessive risk-taking is stronger oversight by regulators, and the recent legislation and the rules and procedures being developed by the Federal Reserve and other agencies will subject systemically critical firms to tougher regulatory requirements and stricter supervision. The Federal Reserve has also been involved in international negotiations to raise the capital and liquidity that banks are required to hold. However, the problem of too-big-to-fail can only be eliminated when market participants believe authorities’ statements that they will not intervene to prevent failures. If creditors believe that the government will not rescue firms when their bets go bad, then creditors will have more-appropriate incentives to price, monitor, and limit the risk-taking of the firms to which they lend. The best way to achieve such credibility is to create institutional arrangements under which a failure can be allowed to occur without widespread collateral damage; if failures can take place more safely, the authorities will no longer have an incentive to try to avoid them. The financial reform legislation took an important step in this direction by creating a resolution regime under which large, complex financial firms can be placed in receivership, but which also gives the government the flexibility to take the actions needed to safeguard systemic stability. This new regime should help restore market discipline by putting a greater burden on creditors and counterparties to monitor the risk-taking of large financial firms.

The insights of economists proved valuable to policymakers in many other contexts as well: in the setting and oversight of bank capital standards, in the decision to provide the market with extensive information gleaned during the bank stress tests in the spring of 2009, in the design of the Fed’s liquidity facilities for nondepository institutions, in the analysis of the collapse of the securitization market, and in the measures taken to protect consumers from deceptive or inappropriate lending, to name a few. Many of the key ideas, like those of Thornton and Bagehot, were quite old, but some reflected relatively recent research. For example, recent work on monetary policy helped the Federal Reserve provide further policy accommodation despite the constraints imposed by the zero lower bound on interest rates.8 

Economics and Economic Research in the Wake of the Crisis
Economic principles and research have been central to understanding and reacting to the crisis. That said, the crisis and its lead-up also challenged some important economic principles and research agendas. I will briefly indicate some areas that, I believe, would benefit from more attention from the economics profession.

Most fundamentally, and perhaps most challenging for researchers, the crisis should motivate economists to think further about their modeling of human behavior. Most economic researchers continue to work within the classical paradigm that assumes rational, self-interested behavior and the maximization of “expected utility”–a framework based on a formal description of risky situations and a theory of individual choice that has been very useful through its integration of economics, statistics, and decision theory.9 An important assumption of that framework is that, in making decisions under uncertainty, economic agents can assign meaningful probabilities to alternative outcomes. However, during the worst phase of the financial crisis, many economic actors–including investors, employers, and consumers–metaphorically threw up their hands and admitted that, given the extreme and, in some ways, unprecedented nature of the crisis, they did not know what they did not know. Or, as Donald Rumsfeld might have put it, there were too many “unknown unknowns.” The profound uncertainty associated with the “unknown unknowns” during the crisis resulted in panicky selling by investors, sharp cuts in payrolls by employers, and significant increases in households’ precautionary saving.

The idea that, at certain times, decisionmakers simply cannot assign meaningful probabilities to alternative outcomes–indeed, cannot even think of all the possible outcomes–is known as Knightian uncertainty, after the economist Frank Knight who discussed the idea in the 1920s. Although economists and psychologists have long recognized the challenges such ambiguity presents and have analyzed the distinction between risk aversion and ambiguity aversion, much of this work has been abstract and relatively little progress has been made in describing and predicting the behavior of human beings under circumstances in which their knowledge and experience provide little useful information.10 Research in this area could aid our understanding of crises and other extreme situations. I suspect that progress will require careful empirical research with attention to psychological as well as economic factors.
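For readers who want the contrast in symbols, a standard textbook formulation (my notation, not the speech's): expected-utility choice presumes a single known probability distribution over outcomes, while one common formalization of ambiguity-averse choice, the maxmin model of Gilboa and Schmeidler, replaces it with a worst case over a set of candidate priors:

```latex
% Expected utility: probabilities p_i over outcomes x_i(a) are known
\max_a \; \sum_i p_i \, u\big(x_i(a)\big)

% Knightian / ambiguity-averse (maxmin) choice: only a set of
% candidate distributions \mathcal{P} is known, and the agent
% evaluates each action a under its worst-case member
\max_a \; \min_{q \in \mathcal{P}} \; \sum_i q_i \, u\big(x_i(a)\big)
```

When the set $\mathcal{P}$ becomes very large, as arguably happened at the height of the crisis, the worst-case criterion can rationalize the panicky selling and precautionary behavior the speech describes, even among otherwise rational agents.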

Another issue that clearly needs more attention is the formation and propagation of asset price bubbles. Scholars did a great deal of work on bubbles after the collapse of the dot-com bubble a decade ago, much of it quite interesting, but the profession seems still quite far from consensus and from being able to provide useful advice to policymakers. Much of the literature at this point addresses how bubbles persist and expand in circumstances where we would generally think they should not, such as when all agents know of the existence of a bubble or when sophisticated arbitrageurs operate in a market. As it was put by my former colleague, Markus Brunnermeier, a scholar affiliated with the Bendheim Center who has done important research on bubbles, “We do not have many convincing models that explain when and why bubbles start.”11 I would add that we also don’t know very much about how bubbles stop either, and better understanding this process–and its implications for the household, business, and financial sectors–would be very helpful in the design of monetary and regulatory policies.

Another issue brought to the fore by the crisis is the need to better understand the determinants of liquidity in financial markets. The notion that financial assets can always be sold at prices close to their fundamental values is built into most economic analysis, and before the crisis, the liquidity of major markets was often taken for granted by financial market participants and regulators alike. The crisis showed, however, that risk aversion, imperfect information, and market dynamics can scare away buyers and badly impair price discovery. Market illiquidity also interacted with financial panic in dangerous ways. Notably, a vicious circle sometimes developed in which investor concerns about the solvency of financial firms led to runs: To obtain critically needed liquidity, firms were forced to sell assets quickly, but these “fire sales” drove down asset prices and reinforced investor concerns about the solvency of the firms. Importantly, this dynamic contributed to the profound blurring of the distinction between illiquidity and insolvency during the crisis. Studying liquidity and illiquidity is difficult because it requires going beyond standard models of market clearing to examine the motivations and interactions of buyers and sellers over time.12 However, with regulators prepared to impose new liquidity requirements on financial institutions and to require changes in the operations of key markets to ensure normal functioning in times of stress, new policy-relevant research in this area would be most welcome.
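The vicious circle described above can be caricatured in two equations (purely illustrative notation, not a model from the speech): prices fall with forced selling, and forced selling rises as lower prices erode mark-to-market equity:

```latex
% p_t: asset price; S_t: forced sales; a: asset holdings; d: debt
% v: fundamental value; \lambda, \kappa > 0: price-impact and
% deleveraging sensitivities
p_t \;=\; v - \lambda S_t, \qquad
S_{t+1} \;=\; \kappa \,\max\!\big(0,\; d - a\,p_t\big)
```

Substituting one equation into the other shows the feedback: a fall in $p$ raises $S$, which lowers $p$ further; when $a\lambda\kappa > 1$ the spiral is self-reinforcing, and a firm that is fundamentally solvent ($av > d$) can be driven to apparent insolvency, which is one way to read the blurring of illiquidity and insolvency noted above.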

I have been discussing needed research in microeconomics and financial economics but have not yet touched on macroeconomics. Standard macroeconomic models, such as the workhorse new-Keynesian model, did not predict the crisis, nor did they incorporate very easily the effects of financial instability. Do these failures of standard macroeconomic models mean that they are irrelevant or at least significantly flawed?  I think the answer is a qualified no. Economic models are useful only in the context for which they are designed. Most of the time, including during recessions, serious financial instability is not an issue. The standard models were designed for these non-crisis periods, and they have proven quite useful in that context. Notably, they were part of the intellectual framework that helped deliver low inflation and macroeconomic stability in most industrial countries during the two decades that began in the mid-1980s.

That said, understanding the relationship between financial and economic stability in a macroeconomic context is a critical unfinished task for researchers. Earlier work that attempted to incorporate credit and financial intermediation into the study of economic fluctuations and the transmission of monetary policy represents one possible starting point. To give an example that I know particularly well, much of my own research as an academic (with coauthors such as Mark Gertler and Simon Gilchrist) focused on the role of financial factors in propagating and amplifying business cycles. Gertler and Nobuhiro Kiyotaki have further developed that basic framework to look at the macroeconomic effects of financial crises.13 More generally, I am encouraged to see the large number of recent studies that have incorporated banking and credit creation in standard macroeconomic models, though most of this work is still some distance from capturing the complex interactions of risk-taking, liquidity, and capital in our financial system and the implications of these factors for economic growth and stability.14 

It would also be fruitful, I think, if “closed-economy” macroeconomists would look more carefully at the work of international economists on financial crises. Drawing on the substantial experience in emerging market economies, international economists have examined the origins and economic effects of banking and currency crises in some detail. They have also devoted considerable research to the international contagion of financial crises, a related topic that is of obvious relevance to our recent experience.

Finally, macroeconomic modeling must accommodate the possibility of unconventional monetary policies, a number of which have been used during the crisis. Earlier work on this topic relied primarily on the example of Japan; now, a number of data points can be used. For example, the experience of the United States and the United Kingdom with large-scale asset purchases could be explored to improve our understanding of the effects of such transactions on longer-term yields and how such effects can be incorporated into modern models of the term structure of interest rates.15 
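As an illustration of the modeling task mentioned here (a standard textbook decomposition, not one given in the speech), the effect of large-scale asset purchases on longer-term yields is often framed through the term premium:

```latex
% n-period yield = average expected short rate + term premium (illustrative)
y_t^{(n)} \;=\; \frac{1}{n}\sum_{k=0}^{n-1} \mathbb{E}_t\!\left[ i_{t+k} \right] \;+\; TP_t^{(n)}
```

where \(i_{t+k}\) is the short-term policy rate and \(TP_t^{(n)}\) the term premium on an \(n\)-period bond. In this framing, asset purchases are typically modeled as compressing \(TP_t^{(n)}\), while forward guidance operates on the expected path of \(i_{t+k}\).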

I began my remarks by drawing the distinction between the scientific, engineering, and management aspects of economics. For the most part, in my view, the financial crisis reflected problems in what I referred to as economic engineering and economic management. Both private-sector arrangements (for example, for risk management and funding) and the financial regulatory framework were flawed in design and execution, and these weaknesses were the primary reasons that the financial crisis and its economic effects were so severe.

Disasters require urgent action to prevent repetition. Engineers seek to enhance the reliability of a complex machine through improvements in basic design; more-rigorous testing and quality assurance; and increases in the resilience of the machine through means such as stronger materials, greater redundancy, and better backup systems. Economic policymakers’ efforts to avoid, or at least mitigate, future financial crises are proceeding along analogous lines. First, the recent reform legislation has improved the design of the regulatory framework, closing important gaps such as the lack of oversight of the shadow banking system. Likewise, in the private sector, firms have taken significant steps to improve their systems for managing risk and liquidity. Second, to reduce the probability and severity of future crises, policymakers will monitor the system more intensively. For example, the recent legislation creates a Financial Stability Oversight Council, made up of the heads of the financial regulatory agencies, which will assess potential risks to the financial system, identify regulatory gaps, and coordinate the efforts of the various agencies. Enhanced market discipline, the result of a new resolution regime for systemically critical firms and a number of measures to increase transparency, will complement regulatory oversight. Finally, numerous steps, both prescribed in the legislation and taken independently by regulators, will work to make our financial system more resilient to shocks. Examples include rules that will strengthen key financial utilities, toughen bank capital and liquidity standards, and require that more derivatives instruments be standardized and traded on exchanges rather than over the counter.

Economic engineering is effective only in combination with good economic management. For its part, the Federal Reserve has revamped its supervisory operations to provide more effective and comprehensive oversight. In particular, we are taking an approach that is both more multi-disciplinary–making greater use of the Federal Reserve’s wide expertise in macroeconomics, finance, and other fields to complement the work of bank supervisors; and more macroprudential–that is, focused on risks to the system as a whole as well as those to individual institutions. Together, better design of private- and public-sector frameworks for managing risk, better monitoring and supervision, and a more resilient financial system do not by any means guarantee that financial crises will not recur, but they should both reduce the risk of crises and mitigate the effects of any that do happen.

In short, the financial crisis did not discredit the usefulness of economic research and analysis by any means; indeed, both older and more recent ideas drawn from economic research have proved invaluable to policymakers attempting to diagnose and respond to the financial crisis. However, the crisis has raised some important questions that are already occupying researchers and should continue to do so. As I have discussed today, more work is needed on the behavior of economic agents in times of profound uncertainty; on asset price bubbles and the determinants of market liquidity; and on the implications of financial factors, including financial instability, for macroeconomics and monetary policy. Much of that work is already under way at the Bendheim Center and in the Department of Economics here at Princeton.

1. For a more comprehensive discussion of vulnerabilities and triggers during the financial crisis, see Ben S. Bernanke (2010), “Causes of the Recent Financial and Economic Crisis,” testimony before the Financial Crisis Inquiry Commission, September 2.

2. See Henry Thornton ([1802] 1962), An Enquiry into the Nature and Effects of the Paper Credit of Great Britain (New York: A. M. Kelley); and Walter Bagehot ([1873] 1897), Lombard Street: A Description of the Money Market (New York: Charles Scribner’s Sons). A discussion relating the Federal Reserve’s policy actions during the financial crisis to the ideas of Bagehot is contained in Brian F. Madigan (2009), “Bagehot’s Dictum in Practice: Formulating and Implementing Policies to Combat the Financial Crisis,” speech delivered at “Financial Stability and Macroeconomic Policy,” a symposium sponsored by the Federal Reserve Bank of Kansas City, held in Jackson Hole, Wyo., August 20-22. See also Ben S. Bernanke (2008), “Liquidity Provision by the Federal Reserve,” speech delivered at the Federal Reserve Bank of Atlanta Financial Markets Conference (via satellite), held in Sea Island, Ga., May 13.

3. See Gary B. Gorton (2008), “The Panic of 2007,” paper presented at “Maintaining Stability in a Changing Financial System,” a symposium sponsored by the Federal Reserve Bank of Kansas City, held in Jackson Hole, Wyo., August 21-23. Also see Markus K. Brunnermeier (2009), “Deciphering the Liquidity and Credit Crunch 2007-2008,” Journal of Economic Perspectives, vol. 23 (Winter), pp. 77-100.

4. Bagehot also suggested that “these loans should only be made at a very high rate of interest” (Lombard Street, p. 99; see note 2). Some modern commentators have rationalized Bagehot’s dictum to lend at a high or “penalty” rate as a way to mitigate moral hazard–that is, to help maintain incentives for private-sector banks to provide for adequate liquidity in advance of any crisis. However, the risk of moral hazard did not appear to be Bagehot’s principal motivation for recommending a high rate; rather, he saw it as a tool to dissuade unnecessary borrowing and thus help protect the Bank of England’s own finite store of liquid assets. See Bernanke, “Liquidity Provision,” in note 2 for further documentation. Today, potential limitations on the central bank’s lending capacity are not nearly so pressing an issue as in Bagehot’s time, when the central bank’s ability to provide liquidity was far more tenuous. Generally, the Federal Reserve lent at rates above the “normal” rate for the market but lower than the rate prevailing in distressed and illiquid markets. This strategy provided needed liquidity while encouraging borrowers to return to private markets when conditions normalized.

5. A substantial modern literature has updated and formalized many of the insights of Bagehot and Thornton. A classic example is Douglas W. Diamond and Philip H. Dybvig (1983), “Bank Runs, Deposit Insurance, and Liquidity,” Journal of Political Economy, vol. 91 (3), pp. 401-19.

6. George Akerlof, A. Michael Spence, and Joseph Stiglitz shared the 2001 Nobel Prize in Economics for their leadership in the development of information economics.

7. The requirements related to credit risk were contained in section 941 of the Dodd-Frank Wall Street Reform and Consumer Protection Act, Pub. L. No. 111-203 (July 2010); with regard to compensation practices, see Board of Governors of the Federal Reserve System (2009), “Federal Reserve Issues Proposed Guidance on Incentive Compensation,” press release, October 22; also see Board of Governors of the Federal Reserve System (2010), “Federal Reserve, OCC, OTS, FDIC Issue Final Guidance on Incentive Compensation,” joint press release, June 21.

Why might government intervention be needed to improve private-sector incentives, when incentives presumably exist for the private-sector principals and agents to work out the best incentive structure for themselves? The possibility of problems arising regarding collective action when a firm has many shareholders is one rationale. The standard reason for intervening in banks’ risk-taking practices is the existence of deposit insurance, which itself distorts private risk-taking incentives by eliminating any incentive of depositors to monitor the activities of their bank; for an early discussion, see John Kareken and Neil Wallace (1978), “Deposit Insurance and Bank Regulation: A Partial-Equilibrium Exposition,” Journal of Business, vol. 51 (July), pp. 413-38. Indeed, the Federal Reserve invoked a “safety and soundness” rationale for its guidance on incentive compensation practices. More generally, as the crisis revealed, bad incentives can lead to problems that affect not just the individuals involved but the broader financial system as well; such spillovers suggest that regulation can help improve outcomes.

8. The Federal Reserve did so by, for example, (1) acting rapidly when confronted with the zero lower bound, as discussed in David Reifschneider and John C. Williams (2000), “Three Lessons for Monetary Policy in a Low-Inflation Era,” Journal of Money, Credit and Banking, vol. 32 (November), pp. 936-66; (2) providing forward guidance regarding short-term interest rates, as discussed in Gauti Eggertsson and Michael Woodford (2003), “The Zero Bound on Interest-Rates and Optimal Monetary Policy,” Brookings Papers on Economic Activity, vol. 2003 (1), pp. 139-211; and (3) expanding the Federal Reserve’s balance sheet through purchases of longer-term securities, as discussed in Ben S. Bernanke, Vincent R. Reinhart, and Brian P. Sack (2004), “Monetary Policy Alternatives at the Zero Bound: An Empirical Assessment,” Brookings Papers on Economic Activity, vol. 2004 (2), pp. 1-78.

9. Herein I use the extension of Von Neumann-Morgenstern expected utility, which focused on objective probabilities over risks, to situations in which individuals assign subjective probabilities over risks. For a review of some classic contributions in this area, see Jacques H. Drèze (1974), “Axiomatic Theories of Choice, Cardinal Utility and Subjective Probability: A Review,” in Jacques H. Drèze, ed., Allocation under Uncertainty: Equilibrium and Optimality (London: Macmillan), pp. 3-23. Some authors have used risk to refer to a situation of objective probabilities and uncertainty to refer to a situation of subjective probabilities (see, for example, David M. Kreps (1990), A Course in Microeconomic Theory (Princeton, N.J.: Princeton University Press)). As highlighted below, others refer to uncertainty as a situation in which subjective probabilities cannot be assessed. As this discussion makes clear, it is probably best to focus on the context in which the terms risk and uncertainty are used.

10. The classic reference on ambiguity aversion is due to Daniel Ellsberg (1961), “Risk, Ambiguity, and the Savage Axioms,” The Quarterly Journal of Economics, vol. 75 (4), pp. 643-69; for a more recent, and abstract, theoretical treatment, see Larry G. Epstein (1999), “A Definition of Uncertainty Aversion,” Review of Economic Studies, vol. 66 (July), pp. 579-608.

11. See Markus K. Brunnermeier (2008), “Bubbles,” in Steven N. Durlauf and Lawrence E. Blume, eds., The New Palgrave Dictionary of Economics, 2nd ed. (New York: Palgrave Macmillan).

12. Good work has been done in this area; see, for example, Franklin Allen, Elena Carletti, Jan P. Krahnen, and Marcel Tyrell, eds. (forthcoming), Liquidity and Crises (New York: Oxford University Press).

13. See Mark Gertler and Nobuhiro Kiyotaki (forthcoming), “Financial Intermediation and Credit Policy in Business Cycle Analysis,” in Handbook of Monetary Economics.

14. See, for example, Marvin Goodfriend and Bennett T. McCallum (2007), “Banking and Interest Rates in Monetary Policy Analysis: A Quantitative Exploration,” Journal of Monetary Economics, vol. 54 (5), pp. 1480-1507; and Lawrence J. Christiano, Roberto Motto, and Massimo Rostagno (2009), “Financial Factors in Economic Fluctuations,” paper presented at “Financial Markets and Monetary Policy,” a conference sponsored by the Federal Reserve Board and the Journal of Money, Credit and Banking, held in Washington, June 4-5. For examples of studies that emphasize bank capital as a constraint on financial intermediation, see Skander J. Van den Heuvel (2008), “The Welfare Cost of Bank Capital Requirements,” Journal of Monetary Economics, vol. 55 (March), pp. 298-320; Césaire A. Meh and Kevin Moran (2008), “The Role of Bank Capital in the Propagation of Shocks,” Bank of Canada Working Paper 2008-36 (Ottawa, Ontario, Canada: Bank of Canada, October); and Mark Gertler and Peter Karadi (2009), “A Model of Unconventional Monetary Policy,” manuscript, New York University, June.

15. An example of recent research on this subject is Joseph Gagnon, Matthew Raskin, Julie Remache, and Brian Sack (2010), “Large-Scale Asset Purchases by the Federal Reserve: Did They Work?” Staff Report no. 441 (New York: Federal Reserve Bank of New York, March). See also Gertler and Karadi (2009), note 14.


Empowered

setembro 21, 2010

A very interesting book that Forrester has just released: Empowered! The book's website is

The pitch for the book came by email via Forrester's September CIO First Look. It reads:

“Whether you like it or not, your customers and employees have access to simple and free technologies that connect them with thousands — if not millions — of like-minded individuals. Their topic of conversation? You: Your brand, your product, your company. As these technologies become more accessible through mediums beyond IT’s control, CIOs have but one choice: See the benefits by empowering employees, or swim against the current. Forrester’s latest book, Empowered, provides the method for reaping the benefits. Complete with case studies, tools, methodologies, and step-by-step advice, this book provides the blueprints of success.
Learning IT’s role in this is a key step toward IT’s transformation to a business technology (BT) organization. Without IT’s help, your business users will launch their own internal collaboration spaces and interact with external customers through unsanctioned channels. Many of these covert innovation efforts will fizzle and die, creating embarrassing customer-facing failures or hurting your chances of harnessing internal collaboration. Or, they could explode beyond anyone’s control. Forrester finds that BT leaders are reacting not by blocking access but by lending their expertise to increase the chances of technology success and empowering the users to solve customer and business problems.
 CIOs who want to change IT’s reputation from roadblock to enabler must lead their organization’s empowered strategy. But this is unfamiliar territory to most IT leaders. In many ways, CIOs must change direction in order to ensure that employees and the brand are kept safe. If you want to know what direction that is and how to get there, be sure to attend our teleconference on the empowered strategy, “Empowered IT Strategy: How Can You Empower Employees To Solve Customer And Business Problems?” Deciding whether or not to go ahead with individual projects within this campaign will be among the most difficult challenges. Luckily, Forrester has designed an interactive tool to guide leaders through the evaluation and help them make a decision. Much of our recent and upcoming research builds on the themes outlined in Empowered, helping you get ahead of the trend. Recent reports include:

“Insights For CIOs: Make Mobility Standard Business Practice” by Tim Sheedy
“IT Governance In A BT World” by Craig Symons
“Put Your Emerging-Technology Strategy Into A Business Context” by Bobby Cameron

As well as some upcoming research:
“Build An Empowered IT Strategy” by Ted Schadler
“Support HEROes With Social Innovation Networks” by Nigel Fenwick

Empowered Table Of Contents

part one. HEROes

Chapter 1: Why your business needs HEROes

part two. What HEROes Do

Chapter 2: Employee HEROes and their projects

Chapter 3: Peer Influence Analysis

Chapter 4: Delivering groundswell customer service

Chapter 5: Empowering customers with mobile

Chapter 6: Amplifying your fans

part three. The HERO-powered business

Chapter 7: Do-it-yourself technology fuels the HERO compact

Chapter 8: Is your company ready for HEROes?

Chapter 9: Leading and managing HEROes

Chapter 10: Helping HEROes innovate

Chapter 11: Helping HEROes collaborate

Chapter 12: Keeping HEROes safe

Chapter 13: Supporting HEROes with technology innovation

Chapter 14: Becoming HERO-powered

Is Google a Monopolist? A Debate

setembro 18, 2010

An interesting debate that appeared yesterday in The Wall Street Journal!


Is Google a Monopolist? A Debate

by Amit Singhal and Charles Rule
Friday, September 17, 2010


Amit Singhal of Google argues the competition is one click away. Charles Rule, an attorney whose firm represents corporations suing Google, counters that the company commands a share of search advertising in excess of 70%—the threshold for monopoly under the Sherman Act.

• Amit Singhal: Competition in an Instant

• Charles Rule: ‘Trust Us’ Isn’t An Answer

Competition in an Instant
By Amit Singhal

Last week, “Googling something” took on a whole new meaning. Instead of typing your question into the search box and hitting Enter, our newest invention—Google Instant—shows constantly evolving results based on the individual letters you type.

Instant is just the latest in a long line of search improvements. Five years ago, search results were just “ten blue links”—simple web pages with some text. Today search engines provide answers in the form of images, books, news, music, maps and even “real time” results from sites such as Twitter.

The reason for all these improvements is simple: It’s what you want. When you type in “weather” (or just “w” in the case of Google Instant), you want the weather forecast right away—not a collection of links about meteorology. Type in “flights to San Francisco,” and you most likely want flight options and prices, not more links asking you to enter the same query again.

We know these things with a fair degree of certainty. We hire lots of great computer scientists, psychologists, and linguists, who all contribute to the quality of our results. We carefully analyze how people use Google, and what they want. And what they want is quite obvious: the most useful, relevant results, as quickly as possible.

Sounds pretty simple. But as Google has become a bigger part of people’s lives, a handful of critics and competitors have raised questions about the “fairness” of our search engine—why do some websites get higher rankings than others?

It’s important to remember that we built Google to delight our users—not necessarily website owners. Given that not every website can be at the top of the results, or even appear on the first page of our results, it’s not surprising that some less relevant, lower-quality websites will be unhappy with their rankings. Some might say that an alphabetical listing or a perfectly randomized list would be most “fair”—but that would clearly be pretty useless for users.

People often ask how we rank our “own” content, like maps, news or images. In the case of images or news, it’s not actually Google’s content, but rather snippets and links to content offered by publishers. We’re merely grouping particular types of content together to make things easier for users.

In other cases, we might show you a Google Map when you search for an address. But our users expect that, and we make a point of including competing map services in our search results (go ahead, search for “maps” in Google). And sometimes users just want quick answers. If you type “100 US dollars in British pounds,” for example, you probably want to know that it’s “£63.9p”—not just see links to currency conversion websites.

Google’s search algorithm is actually one of the world’s worst kept secrets. PageRank, one of our allegedly “secret ingredients,” is a formula that can be found in its entirety everywhere from academic journals to Wikipedia. We provide more information about our ranking signals than any other search engine. We operate a webmaster forum, provide tips and YouTube videos, and offer diagnostic tools that help websites identify problems.

Making our systems 100% transparent would not help users, but it would help the bad guys and spammers who try to game the system. When you type “Nigeria” you probably want to learn about the country. You probably don’t want to see a bunch of sites from folks offering to send you money … if you would only give them your bank account number!

We may be the world’s most popular search engine, but at the end of the day our competition is literally just one click away. If we messed with results in a way that didn’t serve our users’ interests, they would and should simply go elsewhere—not just to other search engines like Bing, but to specialized sites like Amazon, eBay or Zillow. People are increasingly experiencing the Web through social networks like Facebook. And mobile and tablet apps are a newer alternative for accessing information. Search engines aren’t the “gatekeepers” that critics claim. For example, according to the research firm Compete, Google is responsible for only 19% of traffic to

Investment and innovation are considered strong indicators of a competitive marketplace. Last week’s launch of Google Instant was a big bet for us—both in terms of the complexity of the computer science and the huge demands it puts on our systems. Competition for eyeballs on the Web helps drive that risk-taking and innovation because consumers really do have the freedom to vote with their clicks and choose another search engine or website. In an industry focused on tough questions, that’s clearly the right answer.

Mr. Singhal is a Google fellow who has worked in the field of search for over 15 years, first as an academic researcher and now as an engineer.

‘Trust Us’ Isn’t An Answer
By Charles Rule

“What goes around, comes around.” That pretty much sums up the predicament in which Google currently finds itself. If you listen carefully to Google’s complaint that antitrust regulators have no business poking around in its business, you’ll hear the echoes—if not wholesale appropriation—of the arguments once propounded by Microsoft.

In case you might have missed it, a decade ago Microsoft was a pioneer of sorts in establishing the relationship of antitrust to high-tech. Its Windows operating system was labeled a monopoly, and the company was accused of employing a litany of “bad acts” to prevent rivals like Novell, Netscape and Sun’s Java from threatening Windows’ dominance. I should know. I represented Microsoft then and still represent the company today.

Microsoft countered that, far from being a monopoly, it was under intense competitive pressure and that the allegations of bad acts were actually the self-interested complaints of rivals unable to keep pace with Microsoft’s innovations. Taking up the cause of the victims, state and federal antitrust regulators (and counterparts around the world) challenged Microsoft, and after an epic battle, they won.

Google now finds itself in those same antitrust cross-hairs, accused of being today’s monopoly gatekeeper to the Internet. There are a growing number of complaints in the U.S. and Europe that Google has used its search monopoly to exclude actual and potential rivals, big and small. How exactly? Rigging clicks by lowering competitors’ rankings in Google searches is one way. Another is locking up critical content, like video and books, so that rival search engines are frustrated in trying to provide their users with access to that content. The result has been Google’s overwhelming dominance.

Ironically, many of the most ardent defenders of Google are the same individuals—such as Eric Schmidt, Google’s CEO who was an executive at Sun and later Novell—who devoted so much time, money and effort to pushing the frontiers of the law and government regulation against Microsoft a decade ago.

Much like Microsoft’s arguments about a general software market, Google likes to claim its business is only a drop in the bucket that is the general advertising market. But after lengthy investigations, the Justice Department and Federal Trade Commission have concluded that search advertising is unique and constitutes a separate market. In the U.S., Google commands a share of search advertising well in excess of 70%—the consensus threshold for monopoly under the Sherman Act. Google’s share in most places around the world is even higher.

Like Microsoft, Google claims “competition is just a click away.” But for an advertiser hoping to reach consumers when they type in a query about the products the advertiser sells, Google is where the queries are and more than 70% of all ad-supported queries flow through Google’s search engine. Yahoo once provided a choice, and Bing is still hanging on. But there’s reason to believe that Google’s strategy has been to deprive any rival—big or small—of the queries and advertisers necessary to create real alternatives for users.

Again like Microsoft, Google claims its antitrust problems are the result of a cabal of disgruntled competitors. And it is true that Microsoft’s rivals such as Mr. Schmidt’s Sun and Novell provided much of the evidence, and at least some of the impetus, against Microsoft. But in monopolization cases, which are about exclusion of rivals from the marketplace, it is almost always the excluded victims who blow the whistle on monopolists.

Unlike Microsoft, however, Google so far has offered little more than cursory justifications for its actions. Microsoft at least believed what it was doing reflected its innovation, which, though perhaps rough on rivals, benefited consumers.

Google smugly brushes aside allegations against it, expressing indignation that anyone would deign to question such a hip, warm and fuzzy company. Google’s defense seems to be: Trust us, whatever we do will be good for the rest of you. And, we’re way smarter than you, so you’d never be able to comprehend what we’re doing anyway.

Whether Google likes it or not, the Microsoft case resolved antitrust’s role in high-tech. And the last 10 years have shown that reasonable antitrust rules can be applied to prevent exclusionary conduct by dominant tech firms without destroying market forces. Complaints by leading Googlers, who were once strong proponents of those rules, that the same rules should not apply to Google are disingenuous at best.

The application of antitrust must be consistent. Failing to apply antitrust rules evenhandedly—particularly to politically well-connected monopolists like Google—would neither be just nor promote the cause of free-market capitalism.

Mr. Rule, head of the Justice Department’s Antitrust Division in the Reagan administration, is an attorney with Cadwalader, Wickersham & Taft LLP. His firm represents Microsoft and is counsel of record for two companies currently engaged in antitrust litigation against Google—myTriggers and TradeComet.


Cloud Enterprise Architecture

setembro 17, 2010

Post from the blog!


Cloud Enterprise Architecture


[Image: Enterprise Architecture Process, via Wikipedia]

Consultancy firm Deloitte has asked ‘does Cloud make Enterprise Architecture irrelevant?’

This prompted a compelling discussion on the topic in a LinkedIn group, where I suggested that Cloud actually is Enterprise Architecture.

Yes, “the Cloud” is a place, which people point to in a vague hand-waving motion implying it’s really far away and quite ephemeral, but Cloud Computing is now also a practice.

Cloud EA – Maturity model

Due to the convergence of enterprise IT and Internet 2.0 standards, and the expansion via Private Cloud, the field now represents a design approach to IT systems in general, as well as to hosted applications and infrastructure.

Cloud is actually becoming an excellent source of EA best practices. Standards work like Cloud Management from the DMTF now provides a fairly generalized set of blueprints for enterprise IT architecture that an organization could use as design assets independent of using any Cloud providers.

Cloud Identity Management

Of course it’s highly likely they will use Cloud providers, and so the reason why Cloud EA will be so valuable and powerful is that it can cope with this new world as well as leverage it for better internal practices too.

For example one key area tackled in the DMTF architecture is ‘Cloud Identity’, stating that Cloud providers should utilize existing Identity Management standards to streamline their own apps, and should ideally integrate with corporate identity systems like Active Directory. 

Catering for these types of needs is a great context for driving new business start-ups too. For example Cloud Identity meets these needs, and helps quantify the activities in this section of the model. 

Their software caters for the workflow automation of staff and temporary workers using a myriad of Cloud apps, integrating with in-house systems like Active Directory to manage the processes of provisioning and de-provisioning accounts for new staff members, and providing them a convenient single sign-on facility.
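As a rough sketch of what this kind of provisioning automation involves, the snippet below keeps cloud-app accounts in sync with a corporate directory. The class and method names here are hypothetical illustrations, not the vendor's actual API:

```python
# Illustrative sketch only: `CloudApp`, `Provisioner`, and their methods are
# hypothetical, standing in for real connectors to SaaS apps and a directory
# such as Active Directory.

class CloudApp:
    """A cloud application that can hold user accounts."""
    def __init__(self, name):
        self.name = name
        self.accounts = set()

    def provision(self, user):
        self.accounts.add(user)

    def deprovision(self, user):
        self.accounts.discard(user)


class Provisioner:
    """Keeps each cloud app's accounts in sync with the corporate directory."""
    def __init__(self, apps):
        self.apps = apps

    def sync(self, directory_users):
        for app in self.apps:
            # Create accounts for staff present in the directory but not the app...
            for user in directory_users - app.accounts:
                app.provision(user)
            # ...and remove accounts for staff who have left.
            for user in app.accounts - directory_users:
                app.deprovision(user)


crm, wiki = CloudApp("crm"), CloudApp("wiki")
provisioner = Provisioner([crm, wiki])
provisioner.sync({"alice", "bob"})   # both users provisioned in both apps
provisioner.sync({"alice"})          # bob has left: removed from both apps
print(crm.accounts, wiki.accounts)
```

The point of the sketch is the de-provisioning half: when a worker disappears from the directory, every connected app drops the account automatically, which is exactly the risk area (orphaned accounts) that such products address.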

Key features are that it can enable a corporation to leverage OpenID, providing employees their own corporate, secured version, so that not only can their web experience be streamlined, but the context of this activity is recognized too, i.e. that it is on behalf of their corporate employer.

This means their employer can then play a role in the overall “Identity metasystem” that OpenID is intended to create, and it provides tools for auditing and alerting that, critically, can be applied across federated environments: a “Community Cloud” of organizations partnering together, like hospitals, healthcare and social authorities.

It’s this same scenario that the Federated PIA is intended to address, so these Cloud EA practices can be directly aligned with their key counterparts in areas like government privacy assessment. 

Where Are the Telcos?

setembro 16, 2010

Yesterday's post from the blog

Where Are the Telcos?

By Ellen Rubin

Sep 15, 2010 at 2:45 PM

This week’s Verizon announcement about their new CaaS offering for SMBs highlights a strange situation in the cloud computing market. While Amazon has been growing explosively and MSP/colo providers like Rackspace, Terremark and Savvis have rushed to embrace cloud in their business models, the telcos have been slow to enter the fray.

Telcos in many ways seem like the most likely players to lead and ultimately win in the land-grab of cloud computing. They’ve got the huge scale, geographic coverage, existing enterprise relationships and experience in service delivery that would appear to give them unfair advantage. As noted in the Verizon announcement and some recent blogs, telcos have a “unique opportunity to position cloud computing as an extension of their managed networking solutions (such as MPLS-based VPNs), by offering ‘on-net’ cloud computing capabilities backed up by end-to-end service-level agreements (SLAs).” In fact, the networking infrastructure and ability to offer dedicated and secure access is one of the telcos’ greatest strengths since it addresses some of the key concerns about cloud security and bandwidth.

So it’s worth considering why the telcos aren’t yet a dominant force in the industry. To a certain extent, it’s taken a couple of years for them to perceive the threat of Amazon et al to their core businesses. The response has been primarily a defensive one, as noted by IDC’s Melanie Posey: “Right now they’re concerned with, ‘If our existing customers want cloud in addition to the traditional hosting we’re offering them, we have to have something too or they’ll take that incremental business to somebody else.’” Marketing announcements and pricing model changes have so far been the fastest and lowest-cost response to this threat. For example, some telcos are now offering per-month pricing instead of the traditional annual or multi-year structures.

In parallel, the telcos are doing the heavy lifting required to build new cloud services. A lot of the real spending so far in the cloud market is being done by these players: buying new gear from the server, storage and networking vendors; installing new software and management tools from the hypervisor and service management players; designing new architectures with the help of consulting firms; leveraging existing infrastructures from Terremark, OpSource and others, etc. This all takes significant time and money.

While this investment is taking place, there’s relatively little to see in terms of live customer deployments. But in the meantime, the first-mover cloud providers and customer early adopters are moving full-speed to test and improve their offerings and cloud footprints. They’re shaping and defining cloud requirements and best practices based on real-life customer engagements. The risk for the telcos in being late to the party is that they’re not getting the customer insights first-hand and are missing the direct experience needed for successful scale-out and service delivery. Without this, they could end up delivering too little, too late. Still, given the size and projected growth of the cloud market opportunity, there’s no doubt it would be a mistake to count the telcos out.

Crowd Accelerated Innovation

September 16, 2010

Here is a new TED Talk by Chris Anderson, this time about a new concept (Crowd Accelerated Innovation) and how web video empowers global innovation!

The video can be seen at:

Cloud Computing Is and Is Not about the technology

September 12, 2010

Post from Scott Stewart’s blog:


Cloud Computing Is and Is Not about the technology

by Scott Stewart on September 12, 2010

My previous comments about ‘cloud is not about technology’ caused some reaction, and I found myself exchanging tweets on my BlackBerry (gotta love social media) with Mike Fratto, Site Editor, Network Computing, who argued in response that Cloud IS about technology.

So Mike says it is all about technology, and I say that it is not all about technology.

But this difference in our perceptions and viewpoints of Cloud Computing is exactly representative of the change that is occurring; it is the way things are becoming, and it is the way it should be. Let me explain…

Mike is a network guy, a technologist who knows his stuff, one of the best, a well-respected expert, and when he talks of Cloud he talks about networks, TRILL and 802.1aq and all the technology that makes Cloud Computing work. If he thinks about how he would deliver Cloud Computing to the business, he will think about deployment models, virtualization, networking products and protocols. When Mike reads the NIST definition it makes sense; it’s technically accurate, and he thinks we should accept it and move on. Mike knows IT will continue to buy servers, racks and data centres, that capex spending on IT will continue, and that is why he shakes his head when someone says “IT will stop buying servers”. Mike says that technology matters, and he says ‘IT cares about how they will deliver cloud services to the organization’. I am so glad that we have experts like Mike to take care of all that IT stuff for us. You see, Mike is an industry expert and very switched on, and we need experts like him working on our networks and taking care of things for us, out there in the cloud and IT vendor world.

There was a time, not that long ago, when I would have thought of hiring Mike, but those days are gone, because there has been a fundamental change in what I call ‘organizational IT’ within our organization. It is happening elsewhere too, and without any doubt it will happen in many other organisations.

Now take the CIO: when I talk about Cloud Computing I talk about the business transformation, the business change behind the technology change. My perspective is that of organizational IT, and every day I see the change that has been occurring in the attitude of the business about how it wants to buy IT. The business only wants to pay for the IT that it uses, and it wants to pay lower amounts for users who have lower usage.

After years of investing in a sizeable, expensive infrastructure that served everyone across the one organization (the same cost per user for all users), we now see IT infrastructure and the people who run it as a utility, a cost of doing business: it doesn’t add any competitive value, and it can now be procured far more cheaply and efficiently as a service, paid for only as you use it. We are moving all our infrastructure to a “Trusted” cloud platform: multi-tenanted, shared service, one to many, with much lower costs and far more scalability; and there was a host of other compelling reasons to do it.

This is why we hope there are lots of guys like Mike around from whom we can buy those services.

We don’t buy IT using capex anymore; we don’t buy servers, racks or data centres. When a software vendor comes to us pitching a big capex investment in their software, we take them to our cloud provider and jointly negotiate a price on a subscription basis; then, if they want, they can also sell it to the other tenants. We still pay only one bill: a per-user, per-month cost based on usage, and only for what we use.
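The consolidated “per user per month, based on usage” bill described above could be sketched as follows. The tier boundary and rates are invented for illustration; the author gives no actual figures.

```python
def monthly_bill(usage_hours_by_user: dict[str, float],
                 full_rate: float = 40.0,
                 light_rate: float = 15.0,
                 light_threshold: float = 20.0) -> float:
    """One consolidated monthly bill: users below the usage
    threshold pay a lower per-month rate than heavy users."""
    total = 0.0
    for user, hours in usage_hours_by_user.items():
        total += light_rate if hours < light_threshold else full_rate
    return total

usage = {"alice": 160.0, "bob": 5.0, "carol": 80.0}
print(monthly_bill(usage))  # 40 + 15 + 40 = 95.0
```

The point of the model is in the structure, not the numbers: cost tracks usage per user, instead of one flat infrastructure cost spread evenly across everyone.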

I don’t have any infrastructure and no sysadmins, and because our cloud provider takes care of our IT I don’t need to know about TRILL or 802.1aq, but I am glad that experts like Mike do.

So when I use the term ‘Cloud Computing’ I don’t think about the technology; I use it as a generic term that describes the disruptive transformation occurring in the industry as we move from a product-based industry to a services industry.

Now, Mike says: “I think the transformation is a side effect stemming from how IT delivers services to the organization via cloud, regardless of the definition of ‘cloud’ or where it is located”. I understand what Mike means, and this is mostly right, but I don’t fully agree that the transformation is actually a side effect of the technology.

This is not a change that has come about because of the technology but a change that was waiting for the technology to arrive.

Ever since businesses stopped using their own steam infrastructure to generate power, switching instead to reticulated electricity, business has understood utility, commodity and ubiquity. After investing in steam infrastructure that in some cases generated power for over 30 years, businesses switched to electricity and decommissioned their investment in that steam infrastructure.

To the business, utility makes sense: we don’t build our own steam power plants anymore, and increasingly businesses will not want to run their own IT infrastructure either. Especially if their competitors aren’t running their own, they will need to find a way to close that competitive gap. That’s where Mike helps.

The concept of cloud computing is not new or revolutionary; it was first mooted by Professor John McCarthy in 1961, when he spoke of utility computing, and it has taken this long for the technology to be able to deliver on it.

The concept has been around for a while, and so has the technology (virtualisation dates back to 1974); now changing business attitudes will start to manifest, and more businesses will want to use Cloud Computing to exploit the shift to a services-based economy: a shift that business knows and understands and will gain maximum benefit from.

We may not be able to define cloud computing well, but it certainly makes sense to do it.

Perhaps we should have stuck with McCarthy’s term of “utility computing” but then the marketing machines would not have had as much fun as  they have had with the term Cloud.

Mike, on one other of your comments:

“I don’t believe for one second that there are many organizations that are even contemplating moving all, or even a majority of, their IT applications and services to an external, cloud-based service anytime soon. There may be some small start-ups doing so, but I bet they are the exception and not the rule”.

Just so you know…

This year the largest bank in Australia, with over 7 million customers, announced an aggressive cloud computing strategy that will see it move all its infrastructure to a cloud platform in collaboration with at least two other banks. I know the architects working on that project, and the scope of that transformation is mind-blowing.

This year my own organisation, a prestigious stockbroking and investment management house, announced to the stock exchange a strategy for moving all its IT infrastructure to a Trusted cloud computing platform, and this transformation is now well advanced.

My message to CIOs is that with cloud computing you do not get a choice, so you had better be prepared to adapt.

My message to you, Mike, is: keep caring about how you will deliver Cloud Computing to organizations, because many will need your help.

As you said it is really a great time to be in IT.
