Archive for March 2010

Accenture Launches Smart Grid Data Management Solution to Reduce Risks and Costs of Smart Grid Deployments

March 28, 2010

Recent news from Accenture, one of the largest consulting firms in the world!


Accenture Launches Smart Grid Data Management Solution to Reduce Risks and Costs of Smart Grid Deployments

March 18, 2010
Accenture Intelligent Network Data Enterprise will accelerate smart grid implementation
NEW YORK; March 18, 2010 – Accenture (NYSE: ACN) has launched the Accenture Intelligent Network Data Enterprise, a data management platform to help utilities design, deploy and manage smart grids.
The Accenture Intelligent Network Data Enterprise (INDE) enables utilities to manage, integrate and analyze real-time data generated by millions of disparate sources throughout a utility’s smart grid network.  Transforming the data into actionable and predictive insights allows a utility to take actions to increase operational performance. Designed to help utilities accelerate smart grid implementations while reducing deployment risks, INDE comprises a robust and configurable suite of software applications, management tools, databases, analytics tools and processes that perform several critical functions. The standards-based solution architecture complements existing systems and processes to resolve the complex challenges inherent in smart grid deployments. With a reference architecture at its core, INDE is software-agnostic and repeatable, and can be configured to meet the specific requirements of a utility.
The INDE solution architecture plays three critical roles:
  • The software layer between “raw” data from the grid and the utility’s existing operations and enterprise IT systems. By aggregating this data, INDE acts like a central nervous system, allowing utilities to manage complex operations and improve business performance and customer operations.
  • The integration platform unifying a diverse array of smart grid ecosystem products such as communications, smart meters, intelligent network components and sensors. 
  • The visualization capabilities to better observe and manage intelligent device components. INDE provides a platform to use analytics to derive insight from raw data that has been transformed into events and other meta-data. 
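The analytics role described above, turning raw grid data into events, can be illustrated with a minimal sketch. Everything here is hypothetical: the class names, thresholds and event labels are illustrative assumptions, not part of the actual INDE product.

```python
# Hypothetical sketch of an event-processing layer: raw smart-meter voltage
# readings are transformed into higher-level events that operations systems
# can act on. Names and thresholds are illustrative only.
from dataclasses import dataclass


@dataclass
class Reading:
    meter_id: str
    voltage: float  # volts


def detect_events(readings, nominal=240.0, tolerance=0.05):
    """Turn raw voltage readings into over-/under-voltage events."""
    events = []
    for r in readings:
        deviation = (r.voltage - nominal) / nominal
        if deviation > tolerance:
            events.append((r.meter_id, "over-voltage"))
        elif deviation < -tolerance:
            events.append((r.meter_id, "under-voltage"))
    return events


readings = [
    Reading("m1", 241.0),  # within tolerance -> no event
    Reading("m2", 260.0),  # more than +5% above nominal -> over-voltage
    Reading("m3", 210.0),  # more than -5% below nominal -> under-voltage
]
print(detect_events(readings))  # [('m2', 'over-voltage'), ('m3', 'under-voltage')]
```

A real deployment would do this over streaming data at utility scale, but the shape of the transformation, raw readings in, actionable events out, is the same.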
“A smart grid is expected to generate up to eight orders of magnitude more data than today’s traditional power network. Transforming and analyzing this data into useful business and operational intelligence is one of the biggest challenges facing our industry today,” said David M Rouls, managing director, Accenture Smart Grid Services. “Our experience from working with utilities in implementing smart grids tells us fundamental change is required in many areas.  One key area is the need for utilities to address the challenges of turning data into valuable insights to drive enhanced business value, performance and customer service, to protect existing investments and produce lasting competitive advantage.”
INDE provides utilities with key capabilities such as:
  • A comprehensive, field-tested smart grid blueprint methodology;
  • A sensor network architecture that enables utilities to monitor network performance and thereby anticipate faults, optimize asset management and improve reliability; 
  • An open, standards-based data acquisition, transport, event processing and storage architecture;
  • A powerful analytics capability that transforms low-level grid data into higher-level information, enabling business actions based on insight; and
  • A visualization platform that integrates data from multiple sources and creates graphic representations of analytical results that can be readily understood.
The INDE solution architecture was first applied at Xcel Energy’s SmartGridCity™, the world’s first fully functional smart grid city, in Boulder, Colorado. The INDE solution has been successfully deployed and proven to operate across several smart grid enabling technologies.
INDE’s functionality can be enabled by an array of third party technologies.  In addition, Accenture plans to offer utilities the option of implementing the INDE solution based on a pre-configured suite of Oracle technologies. The Oracle-based version of INDE will accelerate the design of smart grids and help reduce the costs and risks associated with smart grid implementation.
Stephan Scholl, Senior Vice President and General Manager of Oracle Utilities said, “Oracle and Accenture share a common vision of how the smart grid will enable more efficient energy choices for utilities and their customers.  Our combined expertise in delivering mission-critical smart grid applications, security, data management and systems integration can help accelerate utilities toward a more intelligent network now and as future needs arise.”
David M Rouls, said, “Data management and real-time analytics is one of the crucial components to a smart grid strategy and deployment. Ultimately the success and costs benefits of responding to reliability, security, asset utilization or customer care issues lie in the immediacy of insights and information.”
About Accenture
Accenture is a global management consulting, technology services and outsourcing company, with more than 176,000 people serving clients in more than 120 countries. Combining unparalleled experience, comprehensive capabilities across all industries and business functions, and extensive research on the world’s most successful companies, Accenture collaborates with clients to help them become high-performance businesses and governments.  The company generated net revenues of US$21.58 billion for the fiscal year ended Aug. 31, 2009.  Its home page is
Matthew McGuinness
+44 20 7844 9683
+44 77 400 38921
Christine Fields
+1 216 535 5092         
+1 330 234 6406                  

A conversation with Guy Kawasaki

March 22, 2010

An interesting conversation with venture capitalist Guy Kawasaki, which recently appeared on the blog!



A conversation with Guy Kawasaki

Lori Hawkins, Plugged In

Guy Kawasaki, venture capitalist, former Apple fellow, author and entrepreneur, is a product of Silicon Valley, but he sees the possibility for innovation everywhere.

Kawasaki, a founder and director of Garage Technology Ventures, a Palo Alto, Calif., firm that invests in startups, was in Austin last week for South by Southwest Interactive, where he judged the Microsoft BizSpark Accelerator business competition.

“It took Silicon Valley eight decades and a lot of luck to get where it is today. No one place has a monopoly on smart people,” he said. “What matters is providing a good service that people want to use. If you can’t do that, it doesn’t matter where you are.”

Kawasaki, whose latest startup is Alltop, which describes itself as an online magazine rack, says it’s easier than ever to launch a technology company but still as challenging to build a successful one.

“One thing hasn’t changed: Nine out of 10 entrepreneurs will fail,” he said. “Does that mean you shouldn’t try? Of course not.”

We asked Kawasaki to share some thoughts on today’s startup climate.

Has it become almost too easy to do a tech startup these days, especially in social media, attracting too many people who think they have a cool idea but are more or less oblivious to the hard business realities?

It may be too easy in terms of costs, but it’s still very hard to achieve critical mass and significant revenue.

This is like saying it’s too easy to publish a book or magazine. You still have to write a great book and sell it. The more companies that start, the more likely there will be some great ones in the mix. It’s the law of infinite monkeys.

Are they more oblivious? Maybe, but obliviousness is a good thing. If people knew how hard it really is to start a successful company, very few would do it.

Venture funding for true startups has been in somewhat of a retreat. That’s been perhaps more acute in Austin than in some other places. As the economy recovers, do you think that’s likely to turn around, or has the venture model changed so substantially that getting money to launch a startup will remain a serious challenge for the foreseeable future?

The venture model hasn’t changed because the venture capitalists wanted it to change. They’d love to continue the 2.5 percent baseline (in management fees) and $500,000 salaries whether you do good or bad.

The costs of starting a company have changed — free, open source tools; cheap talent; free or cheap marketing via Facebook, Twitter, and blogs; and cheap infrastructure in the clouds.

Entrepreneurs need venture capitalists less, and this is a good thing. The venture capitalists that stick it out can pick from more mature companies, and companies can pick from more stable venture capitalists as well as determine more of the terms upon which they take investments.

What’s the toughest tough-love advice you’d give entrepreneurs these days?

Add one year to your shipping schedule and divide your revenue by 100 when you do your forecasting.

I’ve never seen an entrepreneur achieve even her most conservative sales forecasts. Most entrepreneurs conservatively forecast achieving the fastest revenue ramp in the history of mankind.
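Kawasaki’s rule of thumb above can be applied mechanically to any forecast. A toy sketch, with made-up figures, just to make the arithmetic concrete:

```python
def kawasaki_adjust(ship_months, revenue_forecast):
    """Apply the rule of thumb: add one year to the shipping
    schedule and divide forecast revenue by 100."""
    return ship_months + 12, revenue_forecast / 100


# Hypothetical startup plan: ship in 6 months, forecast $5M revenue.
months, revenue = kawasaki_adjust(ship_months=6, revenue_forecast=5_000_000)
print(months, revenue)  # 18 50000.0
```

In other words, a plan to ship in six months with $5 million in revenue becomes, under this rule, eighteen months and $50,000.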

Does technical expertise even count in a startup these days? If so, what kind?

Good engineering counts as much as ever. Companies still have to create great-looking products with great functionality and great reliability.

Nothing has changed here. Indeed, because the barriers to prototyping are so much lower, there’s greater competition to create great stuff.

Any guesses as to what the next breakout technologies will be?

By definition, someone in my position is unlikely to know, much less create, the next breakout technologies. The richest vein for this is two guys, two gals, or a guy and a gal in a dorm room.

What would you have asked Twitter co-founder Evan Williams at his South by Southwest Interactive keynote interview? (Editor’s note: Hundreds of attendees walked out early, saying the interviewer was boring and that Williams didn’t have anything interesting to say. There had been wide speculation he would unveil Twitter’s ad platform, its first real source of revenue.)

How have you convinced your investors that revenue doesn’t matter?

Where do you see Twitter headed?

Twitter is a platform, and as a platform, people will define what it does for them. For some, it’s warm and fuzzy social interaction. For others, like me, it’s a marketing mechanism.

There is no wrong or right — it’s whatever works for you. In many ways, Twitter is like “the Internet” in general.

As you were leaving Austin, what things stuck in your mind about the event?

As I was leaving Austin, all I could think about was how much I loved the Lucchese black ostrich boots that I had just bought at Allens Boots.

Lori Hawkins covers venture capital and startups for the Statesman; 912-5955.

An analysis of the National Broadband Plan sent by the FCC to the U.S. Congress

March 18, 2010

I have just learned from the Telebrasil newsletter that the FCC (the U.S. counterpart of ANATEL) has sent that country’s National Broadband Plan to the U.S. Congress!

An analysis of the National Broadband Plan sent by the FCC to the U.S. Congress
FCC sends its National Broadband Plan for the 21st century to the U.S. Congress

The FCC – the Federal Communications Commission, which regulates telecommunications in the U.S. – sent to the U.S. Congress, on March 16, what it called a “road map” for the 21st century. The plan aims to reach every corner of the country with a “robust” Internet that everyone can afford. It calls for connecting 100 million homes, within the decade, with 100 Mbit/s service.
According to the FCC, the plan – whose name (“Connecting America: The National Broadband Plan”) says almost everything – is intended to transform the American economy and provide U.S. society with the “telecommunications network of the future.”
In a kind of “mea culpa” – one that could well serve as an example to other countries – the FCC admits that the nation has failed to use the power of broadband to deliver government services in health, education, public safety, energy conservation, economic development and other national priorities. Even in the U.S., 100 million Americans still do not have broadband at home, and 14 million have no access to it even if they wanted it.
The world’s largest broadband market
The mega-plan aims to create, by 2020, the world’s largest broadband market, generating new jobs and businesses. It calls for connecting no fewer than 100 million homes with 100 Mbit/s service. U.S. communities will get the very high speed of 1 Gbit/s, connecting their main institutions such as schools, hospitals and military facilities.
Every American child should be digitally literate by the time they leave high school – meaning able to use a computer or equivalent device and navigate the Internet. Schools, libraries and vulnerable populations in rural communities were not forgotten. The plan provides for broadband at affordable cost and for redirecting the Universal Service Fund – the U.S. counterpart of Brazil’s Fust – from yesterday’s analog technology to tomorrow’s digital infrastructure.
The FCC also recommends competition in the “broadband ecosystem,” giving it greater transparency and removing barriers to new entrants. Market analyses should take into account broadband price, speed and availability.
Security was not overlooked either. It is to be improved by providing immediate access to a nationwide public wireless network dedicated to that purpose.
Seventy-five thousand pages of public comments
The plan, a product of the American Recovery and Reinvestment Act of 2009, was produced with rigor, openness and transparency – the interests involved are many and varied – by the FCC’s working group, the commission itself reports.
The FCC did its homework exhaustively and made a point of publishing the statistics of its work: 36 workshops, nine hearings and 31 notices, all open to the public, plus 75,000 pages of public comments. The debate continued online, with 131 blog posts drawing 1,500 comments. Ideas were debated and voted on over the Internet, attracting 6,100 participations. Social networks – YouTube and Twitter – contributed strongly to the volume of information, in a demonstration of electronic democracy. In addition, the FCC working group also conducted independent research and data gathering.
Half of the plan’s recommendations – a vast document that can be accessed at – are aimed at the FCC itself; the rest are directed at Congress, at President Barack Obama’s executive branch, and at state and local governments. The plan also takes the private and nonprofit sectors into account. More information can be obtained at (JCF)

Building a Systemic Innovation Capability

March 18, 2010

Below is an interview that Braden Kelley, editor and founder of the Blogging Innovation blog, conducted this week (all three parts of the interview are included below)!


Monday, March 15, 2010

Part 1 of 3 – Building a Systemic Innovation Capability
Interview – Rowan Gibson of “Innovation to the Core”

I had the opportunity to interview Rowan Gibson, co-author of “Innovation to the Core” about the book and about creating a systemic innovation capability inside organizations.

Rowan Gibson is a global business strategist, a bestselling author and an expert on radical innovation. In addition to “Innovation to the Core”, Rowan is author of “Rethinking The Future”, and a keynote speaker at large international conferences, corporate management events and executive summits.

We have split this in-depth interview into three parts. Here is the first part of the interview:

1. When it comes to innovation, what is the biggest challenge that you see organizations facing?

The biggest challenge is not generating new ideas and opportunities. It’s how to make innovation a deeply embedded capability. What usually happens is that companies focus most of their efforts on the front end of innovation – so they launch some kind of ideation initiative with a lot of hoopla and they get a whole bunch of ideas. But then they hit a wall because there is no back end – there is no organizational system for effectively screening ideas, aligning them with the business strategy, allocating seed funding and management resources, and guiding a mixed portfolio of opportunities through the pipeline toward commercialization. So, invariably, what we find is that the whole innovation effort eventually withers. And all those enthusiastic innovators inside and outside the company become cynical and discouraged as they watch their ideas go nowhere.

The real challenge, therefore, is to turn innovation from a buzzword into a systemic and widely distributed capability. It has to be woven into the everyday fabric of the company just like any other organizational capability, such as quality, or supply chain management, or customer service. In other words, for innovation to really work, and to be sustainable, it has to become a way of life for the organization. Yet how many companies have actually achieved that? The sad truth is this: most organizations today still have absolutely no model, no practical notion, of what the back end of innovation actually looks like. If you asked them to build a corporate innovation system that seamlessly integrates leadership commitment, infrastructure, processes, tools, talent development, cultural mechanisms and values, they wouldn’t even know where to start. That’s the challenge Innovation to the Core was meant to address.

2. Why is it so important that organizations build a foundation of insights before generating ideas?

OK, let’s go back to the front end of innovation. If you’re going to do this properly, what you’re really looking for is not just a lot of ideas. Senior managers often complain that most of the ideas they get from their employees and customers are not very good ones. So after they open up the innovation process to everyone, everywhere, they find themselves wasting valuable management time sorting through a heap of garbage to find a few interesting submissions. That’s because, frankly, they don’t really understand how the innovation process actually works.

Try to look at it this way: before you start building a house, you have to gather the right materials and lay a solid foundation, right? Remember the story of the three little pigs? If you build a house from the wrong materials it can easily be blown down, so it’s useless. Then there’s the Bible story of the house built on sand rather than rock. It makes a similar point: if you don’t have the right foundation – regardless of the quality of the building materials – the house is equally useless. So it is with new ideas and opportunities. In a sense, they need to be built from the right “materials” and they need a solid “foundation”, otherwise they won’t be very good. What I’m getting at here is that there is actually a front end to the front end of innovation. Before you start ideating, you need a set of really novel strategic insights. These are like the raw material out of which exciting innovation breakthroughs are built. If you ask people to innovate in a game-changing way without first building a foundation of novel strategic insights, you find that it’s mostly a waste of time. You get a lot of ideas that are either not new at all, or so crazy that they’re way out in space.

So how do you develop those all-important insights? I teach companies a methodology for doing that in a systematic way – it’s called “The Four Lenses of Innovation”. The fact is that in order to discover new ideas and opportunities of any real value, people need to stretch their thinking beyond the conventional. They need to develop fresh perspectives. So the “Four Lenses” represent four specific types of perspectives, or ways of looking at the world, that innovators typically use to come to their breakthrough discoveries. They are (1) Challenging orthodoxies, (2) Harnessing trends, (3) Leveraging resources in new ways, and (4) Understanding unmet needs. By using these lenses, or these particular angles of view, it’s possible to systematically look through the familiar and spot the unseen. That’s how you discover those deep insights that others have overlooked or ignored.

Once you have gathered a collection of really inspiring insights, you can then do your ideation work. You start thinking about what kinds of ideas and opportunities could be built on these unexamined dogmas, unexploited trends, underutilized resources, and unvoiced customer needs. And what that gives you is not just a high quantity of ideas but also a high quality. Rather than just pulling ideas out of the air, you generate opportunities that are grounded. They are based on real industry orthodoxies that deserve to be challenged, real discontinuities that could potentially reshape the business landscape, real competencies and assets that could be leveraged to create opportunities beyond the boundaries of the existing business, and real customer needs that have not yet been addressed. So you inspire ideas that are connected to the real world; they are not in some crazy, unbounded creative space. They are founded on realities – things you can test and validate.

Now imagine that instead of merely inviting everyone, everywhere to “go forth and innovate”, you actually gave them access to these powerful strategic insights via a web-based tool, and you taught them how to ideate effectively. Can you see how that would dramatically enhance their innovation performance? That’s what I’m currently doing with all kinds of organizations around the world.

3. Innovation demand in an organization is equal in importance to innovation supply, but why don’t most companies see that?

Again, it’s because they are usually too focused on the front end. So they work hard to push up the supply of ideas, launching initiatives like online suggestion boxes, open innovation programs, creative competitions, and perhaps some kind of reward and recognition system. But they fail to create the necessary demand for innovation, meaning the natural, reflexive pull for new ideas within and across the businesses. Failure to drive and manage the demand side of innovation is often where the whole initiative falls flat.

I often ask companies a simple set of questions to gauge the level of innovation demand inside their organizations. For example, Do the executives who run your company’s core businesses demonstrate genuine interest in radical, new ideas by redeploying adequate resources behind them? Are they held personally responsible for the performance of their unit’s innovation pipeline? Do they spend a significant percentage of their time mentoring innovation projects?

One of the dilemmas today is that, in most organizations, the pressure to innovate is not very real and tangible to senior managers. If you are running one of the company’s business units, for example, what is real and tangible to you is the pressure to meet the numbers – to improve operational performance – and you know you are being monitored on a monthly, weekly, daily, or even minute-by-minute basis. But there is usually no similar pressure that is holding you directly accountable for innovation performance. So your natural bias is to worry a lot more about efficiency, and short-term earnings, and monthly variances from budget, than about the innovation performance of your business unit.

The challenge, therefore, is to create a whole set of pressure points on the demand side that will make leaders as sensitive and responsive to the need to innovate as they are to the need to make the numbers. For example, companies can give senior managers unreasonably high growth targets that call for innovative ways to dramatically outperform the average; they can force their senior managers to allocate a portion of their budget to fostering innovation projects; and they can link a sizeable part of executive compensation directly to innovation performance. These are the kinds of measures GE has taken to drive innovation demand across the organization, and it has produced impressive results because GE is a company where managers are fanatic about achieving their goals.

4. Since the book was published, have you come across other organizations that you think are doing systemic innovation really well?

When we were writing the book, we devoted quite a lot of ink to companies like Whirlpool, P&G, IBM, GE, Shell, W.L. Gore, and Cemex. These are all excellent examples. Since then, of course, I’ve been working with all sorts of other organizations to embed innovation as a systemic capability. They include pharmaceutical giants like Bayer and Roche; tech champions like Microsoft and Nokia; financial services leaders like Generali Group; massive consulting firms like Accenture; manufacturing companies like Rexam; top automobile brands like Volkswagen; trend-setting retailers like Ahold and Metro; home appliance makers like Philips and Haier; even heavy engineering firms like Debswana diamond mining. For the last half-year, I’ve also been very busy with Mars – the global manufacturer of chocolate, pet food and other food products – and I’ve seen a lot of progress there, too.

What really satisfies me is to see companies like these gradually institutionalizing and managing innovation as a discipline. So some of them are setting up innovation directors, innovation boards, business unit innovation officers, and innovation ambassadors. They are introducing comprehensive new metrics to measure their innovation performance. They are building new processes to produce and nurture a continuous stream of innovation opportunities from inside and outside the organization, as well as robust innovation pipelines for taking ideas from mind to market. They are giving their people new tools – including the “Four Lenses” methodology and web-based innovation platforms – that open up the innovation process to everyone. They are training literally thousands of their employees to use these skills and tools, and setting up incentive schemes and reward ceremonies to encourage them to innovate every day. They are hardwiring all their HR systems – pay, spot awards, the long-term incentive plan, the balanced score card objectives – into the company’s innovation strategy. They are creating new cultural mechanisms, such as a discretionary time allowance, to foster and support innovators throughout their ranks. They are building dedicated innovation spaces where their people can ideate together. And they are working hard to make innovation a tangible corporate value, rather than just an aspiration.

There are a few other companies, too – ones that I have not yet personally worked with – that I would point to as good examples of systemic innovation. They include Best Buy and McDonald’s in the USA, and Tata in India. In fact, there’s a rapidly growing list of organizations around the world that seem to be gaining traction with the innovation management challenge, and what they demonstrate is that large companies really can tackle innovation successfully in a broad-based and highly systemic way.

5. People often talk about not having time to innovate. How can people find the time for themselves or their employees?

This is such an important issue. When we asked more than five hundred senior and midlevel managers in large U.S. companies to identify the biggest barriers to innovation in their respective organizations, one of the most common responses was “lack of time.” Most of us are struggling simply to get through the day, and it’s almost impossible to think creatively, reflect on new strategic insights and innovate in a focused manner when you’re running from one meeting to the next, making loads of phone calls, writing a thousand emails and frantically trying to work through all the other tasks on your to-do list. So companies need to think seriously about freeing up more time, energy, and brainpower across the organization to devote to innovation and growth.

The fact is, none of us are going to “find” time for innovation. We are going to have to “make” time for it by driving a wedge into our agendas and turning innovation into one of our strategic priorities. In the book we say that carving out time for employees to imagine and experiment and develop their own ideas is the “first commandment of innovation”. For some companies, a discretionary time allowance seems to work quite successfully. Well-known examples would be 3M, Gore and Google, where employees can spend a percentage of their time on pet projects. Other organizations take a number of people out of their day jobs for a certain period – say, a few weeks or months – and let them concentrate on generating new insights and ideas as members of dedicated innovation teams. Here, I’m thinking of companies like Whirlpool and Cemex. In addition, Whirlpool also has a formal training program where people are given time to learn the principles, skills, and tools of innovation in the same way as they learnt Six Sigma. Then there’s the example of Shell, where the time allowance actually comes after a person or team has submitted an idea, and these people are given one or two months, rising to perhaps a whole year, to design some small-scale, low-cost experiments to test the validity of their new business concepts.

The other thing to remember, as I have been emphasizing all along, is that creating bandwidth for innovation is not just about the front end. It also has to do with freeing up top management time for the back end of innovation – time to devote to steering innovation activities, reviewing ongoing innovation projects, setting priorities, allocating resources, mentoring innovators and embedding innovation as a core competence. And making innovation stick requires a significant number of people – outside of R&D and new product development – who officially work on a full- or part-time basis on innovation activities. One global company has already appointed 1,200 part-time innovation mentors along with 50 full-time innovation consultants, who coach and support would-be innovators throughout the organization, helping them push their ideas forward.

6. What are some of the biggest barriers to innovation that you’ve seen in organizations?

The biggest barriers to innovation tend to be deep and systemic. They are embedded in a company’s leadership priorities, political structures, management processes, cultural values and everyday behavior. For example, senior managers might actually be working against innovation, because the company’s metrics system doesn’t measure them on it, and the compensation system doesn’t reward them for it, so why should they be worried about it? Or a company might be biased heavily toward its legacy business and against revolutionary new ideas that might potentially cannibalize that business. Or allocational rigidities in the budgeting process are making it difficult to get resources behind new opportunities. Or the criteria used in the company’s product development stage gate process may tend to kill great ideas too early. Or the organization might simply lack people who have had any significant training in the skills and tools of innovation. These are quite familiar problems, but in every organization the barriers to innovation are subtly different, depending on factors like corporate culture, business model, organizational structure, and so forth.

To make the transition from innovation initiative to enterprise capability, an organization needs to identify, objectively, the practices, policies, and processes inside its core managerial DNA that are toxic to innovation – like traditional management processes that systematically favor perpetuation and incrementalism over new thinking and innovation. Senior executives need to realize that building a truly innovative company is not a matter of simply asking people to be more innovative; it’s a matter of positively changing those things that today diminish or stunt the organization’s innovation potential. As Clayton Christensen puts it, “Systemic problems require systemic solutions”.

7. What skills do you believe that managers need to acquire to succeed in an innovation-led organization?

Managers are going to have to develop a “dual focus” – both on short-term operational performance and on long-term growth opportunities and innovation. The problem here is that these two sides of the business require very different skills. It takes a certain style of management to cut costs, restructure, reengineer, and downsize, and quite another style of management to create growth through new ideas and initiatives.

Today’s senior executives will have to learn to do both – they will have to be relentlessly focused on meeting the numbers within their legacy businesses, yet equally focused on generating and successfully commercializing new growth opportunities for the future. This is very easy to talk about but very difficult to do, because efficiency and innovation don’t usually “cohabit” too well – they’re uncomfortable bedfellows because there’s just an inherent tension between these two forces. So what this adds up to is an extremely difficult balancing act for today’s managers.

I know of only one way to realign the focus and commitment of top management so that it's equally focused on innovation, and that is to design a new set of metrics for evaluating management performance – one that puts innovation at least on a par with other performance objectives. If you think about the Balanced Scorecard inside most organizations, it's actually not very balanced, in the sense that it tends to be weighted heavily toward optimization rather than innovation. So the first step is to make sure innovation is fully represented in a company's metrics. It has to be recognized as equal in importance to operational excellence.

Companies might preach the need for risk taking and rule breaking, but these are not the metrics they typically use for measuring their managers’ performance. Management compensation is typically tied to other measures like cost, efficiency, speed, and customer satisfaction – and executives are paid for making progress against those metrics. Organizations that are serious about making innovation a core competence need a new set of metrics to offset this tendency and encourage managers to put as much energy into innovation as they are currently putting into optimization.

The other thing managers need to learn is that, in an innovation-led organization, strategy-making is no longer going to be a top-down, executive-only exercise. It’s something that will involve everyone in the company – and even people on the outside – and it will increasingly emerge from the bottom up. Managers must come to believe, deep down, in “innovation democracy” – the notion that ideas with billion-dollar potential can come from anyone and anywhere.

Instead of fearing that this will put them out of a job, managers should recognize their pivotal role as champions of the new innovation process. Rather than going away in a little group and trying to come up with all the new growth opportunities on their own, executives should be encouraging all of their people to think like entrepreneurs and submit new ideas. Then they need to regularly sift through the wide variety of opportunities bubbling up from below, and look for ways to invest incrementally in these opportunities. This has worked very well for Best Buy, for example, where some of the most valuable ideas in recent years have come not from top management but from line-level employees who interact with customers each and every day.

This is no doubt going to require a degree of humility on the part of top management, because it essentially means that senior executives will have to give up the old, elitist view about who is responsible for the destiny and direction of the organization, and start involving many new and different voices in the process of charting the company’s future.

8. If you were to change one thing about our educational system to better prepare students to contribute in the innovation workforce of tomorrow, what would it be?

Well, if we agree that tomorrow’s economy will be an ideas economy, or a creative economy, or – as I like to call it – an innovation economy, then what does this tell us about the kind of skills tomorrow’s workers are going to need? Look at the typical school curriculum. What are the kids learning? Most of it has to do with filling their heads with old information – with facts and figures – as opposed to enhancing their imagination, their ability to create or envisage new solutions.

The education system is set up to teach conformity. It punishes people who fail to stand in line, or who question authority. Yet when we look at successful innovators – like Steve Jobs or Richard Branson – we find that they tend to be rebels. They are contrarian in their thinking. They ‘zig’ where others ‘zag’. They have somehow developed an almost reflexive ability to question the status quo, to look at things from a completely different angle of view, to imagine revolutionary new ways of doing things, to spot opportunities that others can’t see, to understand the revolutionary portent in trends and discontinuities, to empathize with unmet needs, to take risks and follow dreams. Is that what schools and colleges are teaching their students to do? I don’t think so. In fact, I would argue that they are systematically robbing people of their ability to think creatively.

Today's MBAs are also woefully unprepared for the new era. They may have learned how to read a balance sheet correctly, but when did they learn how to systematically discover new strategic insights, or how to come up with radical new growth opportunities, or how to recognize a really big idea when they see one, or how to rapidly reallocate resources to push ideas forward? And when did they learn how to foster the cultural and constitutional conditions inside an organization that serve as catalysts for breakthrough innovation? Let's face it, what business schools produce en masse are business administrators, not business innovators. They reward people with MBAs, not MBIs. And, again, I believe that part of the answer is to bring more balance into students' priorities, so that they learn how to embrace the paradox between the relentless pursuit of efficiency and the restless search for radical, value-creating innovations.

Anyway, I’m glad to see that “Innovation to the Core” is increasingly being included on business school curriculums. I wanted it to be kind of a business education for the 21st century and it’s gratifying to see it being used that way. The book is also playing quite a role in corporate training programs inside some of the companies that have truly recognized the innovation imperative. So if that means I’m making a meaningful contribution to advancing the field of business innovation, I’ll be a happy man.

Prof. Hal Varian on computer mediated transactions

March 13, 2010

Below is another of the magnificent lectures by Prof. Hal Varian, of the University of California, Berkeley, and Chief Economist at Google. It was delivered on March 18, 2009.

Enjoy it, for he is simply rewriting economic science based on the work he has been doing at Google!


March 11, 2010

Post from March 9 on Prof. Mark Perry's blog!

Sweden is a powerful example of the importance of public policy. The Nordic nation became rich between 1870 and 1970 when government was very small, but then began to stagnate as welfare state policies were implemented in the 1970s and 1980s. The Center for Freedom and Prosperity Foundation video explains that Sweden is now shifting back to economic freedom in hopes of undoing the damage caused by an excessive welfare state.

Highlight: The “peeing in your pants” analogy as an economic model about half way through is priceless.

Related: Income distribution in the U.S. vs. Sweden (HT: Nick Schulz)

Balanced scorecard founder on the business value of IT

March 10, 2010

The creator of the Balanced Scorecard (BSC) methodology talks about the value of IT to the business. The post (in two parts) was published on the blog!


Balanced scorecard founder on the business value of IT
By Christina Torode, News Director
09 Mar 2010 |

Robert Kaplan, Harvard Business School professor and co-creator of the balanced scorecard, recently spoke about strategy execution. His most recent book, The Execution Premium, focuses on linking strategy to operations and developing an overarching management system to sustain strategy execution. In this first part of a two-part interview, he explains how CIOs can align IT strategy with corporate objectives and demonstrate the business value of IT.

When developing an IT strategy, what can a CIO do to improve the value of IT to the business?
Kaplan: A certain amount of what IT does is provide basic infrastructure, a platform for handling transactions, whether with suppliers or customers, or for financial data. But that doesn’t provide a basis for competitive advantage. It’s like doing the wiring or plumbing in the house. You need to have it, but it doesn’t make the house distinctive. If [IT] wants to become more strategically relevant, they have to think, “Which applications are going to be the most significant for helping the business improve their strategy?”

There you can think of two applications: One is information provisioning, whether information is about customers, suppliers or products, so business units are kept up to date, on a robust platform. But the more powerful application is IT embedded within the product or service itself. I think of FedEx, where the IT function is part of what makes FedEx distinctive, in that the shipper and recipient both track a package and know where it is. In fact, they do the work, rather than call up the company. That’s a case where there is an intimate partnership between IT function and the business strategy. That’s the most powerful situation.

How can CIOs help the business measure the value IT is contributing to the corporate strategy?
Kaplan: You can’t really justify the IT investment just by doing an analysis within IT. I mean, you can do it on cost savings. If you’re using IT to replace people, you can measure the cash savings and calculate net present values. But if you want to evaluate the impact of IT on business value, then you have to go to a methodology — a strategy map — and balanced scorecard, because often IT doesn’t directly produce revenue. What IT does do is support a critical process like innovation or a customer management process, and the output from IT is greater satisfaction or loyalty among customers, such as the FedEx example. And because of greater loyalty, the customers transact more business with the company.

Most IT value comes indirectly through improving processes and relationships with customers and suppliers. In turn, that leads to better financial results, but it has to work through the company’s business model, and not be a standalone investment that could be justified in its own right.

Not every investment or application has to go directly to some financial number. And that’s been so frustrating to IT, and 25 years later IT still hasn’t solved this problem [of proving the business value of IT] because they don’t deliver direct benefit by themselves. It’s only when it’s bundled in with strategic processes and customer relationships that the value is created.

Most IT value comes indirectly through improving processes and relationships with customers and suppliers.
Robert Kaplan
co-creator, balanced scorecard framework

After developing a strategy map with the business unit, what’s the next step?
Kaplan: You can then develop a service agreement on routine transactional basics, but also the value that IT is creating for the business unit. You can measure that monthly or quarterly through surveys with the business units as to how well they feel IT is delivering on its commitment and on its value creation. IT can then think, “What are the critical processes we need to do well?” And it’s not just buying applications, 24/7 operations, and security and privacy. Rather, some of the critical process is developing relationships with business unit heads, and that’s still a real shift in the role and expansion of abilities for IT professionals.

They need to be able to have dialogues and discussions with business unit heads, so some of your IT people end up being account managers working with business unit people, not just software coders and people who keep applications up and running. That will make IT a much more valuable partner.

How can IT play a more active role in driving the business strategy?
Kaplan: Depending on the strategy the business is following, the demands on IT are very different. If you look at Wal-Mart, Toyota or Dell, they are trying to follow a low-cost strategy. The role for IT in this case is to lower the cost of working with suppliers, handling the logistics and the distribution. IT is doing that networking with suppliers and building a platform on which customers find it easier to transact business.

Another organization might be following a strategy of building long-lasting relationships with ongoing sales and services. There, IT is very much related to CRM, and perhaps data mining, to be able to understand customers better. Are we going to partner with businesses and customize the IT offering to the needs of individual business owners? That customer relationship strategy is the most winning strategy for internal functional units like IT.

But then there's innovation. IT might be the first to offer some new capability to the company's customers, and that provides [the company with] a competitive advantage. That requires continual innovation within the IT function. So, I think in some sense IT has to make a choice between those two [CRM or innovation] strategies to create the most value.

In the second part of this interview, Kaplan shares his thoughts on two of the latest buzzwords — agile business and predictive analysis — and discusses the reasons so many companies missed the signs of the coming recession and the ways they can correct this strategy gap.

Let us know what you think about the story; email Christina Torode, News Director.


Balanced scorecard author talks agile business and risk management (Part II)

By Christina Torode, News Director
10 Mar 2010 |

In this second half of a two-part interview, Harvard Business School Professor and author Robert Kaplan discusses how he defines two subjects that are receiving a lot of buzz these days: agile business and predictive analysis. He also shares his thoughts on why companies overlooked signs of the recession and why risk management deserves its own scorecard.

The first half of this Q&A focused on IT strategy and proving the value of IT to the business.

There is a lot of talk about agile business and agile methodologies. How do you define agile?
Kaplan: That's one of those buzzwords that people have different meanings for. To be agile is the ability to sense changes in the markets and customer preferences faster, as they are evolving, and be able to respond to them. It's not just information, but it's really analysis to see patterns in customers' purchasing decisions and preferences and have that [data] come into the company so it can respond to whatever these evolving needs are. It's also keeping track of competitive forces as well, to be able to offset them. But the front end of agility is information, because it's what you're being agile with respect to. It's not just that you're doing things within the company faster. It has to really respond to some market need.

Agile methodologies tie into the idea of being able to respond to change faster, but can companies get to a point of predictive analysis?
Kaplan: Predictive analysis comes from analytics that are being applied to historical data. In the old days, you were using historical data to evaluate performance and reward people. Now you're trying to use data to help understand the future. Wal-Mart, for example, does a very good job of understanding the types of bundles consumers are likely to purchase. They're trying to predict the patterns of consumer purchasing and then arrange the offerings to encourage the buying of multiple products and services. We have crystal balls, but they're not very accurate. What we do have is data, and by having access to large quantities of data on consumer purchasing, then yes, that does help you predict the future.

The question is, are companies investing sufficiently in analytic methods to make sense out of the data? Raw data is useless, but if you can study the past and use various statistical methods to process the data, then you really can provide information and knowledge that’s actionable and that will be predictable in the future, as long as historical patterns are persistent.
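As an aside, the bundle analysis Kaplan attributes to Wal-Mart can be illustrated with a minimal co-occurrence sketch. The products and baskets below are invented, and real retailers use far richer association-rule methods; this only shows the basic idea of mining historical transactions for purchase patterns.

```python
# Toy basket analysis: count which product pairs co-occur in historical
# transactions, then suggest the most frequent partner for a product.
from collections import Counter
from itertools import combinations

# Hypothetical historical baskets (sets of products bought together).
baskets = [
    {"tv", "hdmi cable", "soundbar"},
    {"tv", "hdmi cable"},
    {"laptop", "mouse"},
    {"tv", "soundbar"},
    {"laptop", "mouse", "bag"},
]

# Count every unordered product pair across all baskets.
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

def likely_partner(product):
    """Return the product most frequently co-purchased with `product`."""
    partners = Counter()
    for (a, b), n in pair_counts.items():
        if a == product:
            partners[b] += n
        elif b == product:
            partners[a] += n
    return partners.most_common(1)[0][0] if partners else None

print(likely_partner("laptop"))
print(likely_partner("tv"))
```

With enough historical data, the same counting idea scales up into the association-rule mining behind "customers who bought X also bought Y" offers.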

Is there a technology or role that IT can play in taking that data and making sense of it, or is that something the business should be responsible for?
Kaplan: The more you can ease the life of applied statisticians, the better: provide an analytic interface between transactional data and the kinds of methods they want to use to explore the data — it could be just powerful ways to display that data on a screen so they can see patterns. I don't think we've reached the stage where you can completely automate this process [with technology], though. There are judgments that need to be made as you build the models. It's easy to get fooled by the data and make the statistics lie for you.

Why were so many financial institutions seemingly caught off guard by the falling housing market and mortgage lending crisis?
Kaplan: They didn’t have good models about the values of the securities they held and the risks they held. There were a couple of banks that had much better internal staff for looking at the transactions that were taking place in the market and then thinking what that meant to their own portfolio. But I think companies like Bear Stearns, Lehman Brothers and Wachovia … I mean, they failed miserably in understanding the deteriorating value of the securities they were holding, and this was knowable. If they had better models and analytics they could have seen this much earlier and perhaps not had the kind of failures that they did.

Risk management was siloed and considered more of a compliance issue. … Now we see that identification, mitigation and management of risk has to be on an equal level with the strategic process.
Robert Kaplan
co-author of the balanced scorecard

Many financial institutions had models and analytics in place, though?
Kaplan: The people who built the models didn’t fully understand the businesses that were behind the data that they were looking at. And they didn’t understand the pressure testing — the things to watch out for. The data they were seeing in a period of rising housing prices would not necessarily be representative of what that data would look like if housing prices were flat to declining. They needed someone to understand that the housing market had a boom and had gotten overpriced, and while you couldn’t predict when the housing price would level off or start to decline, you could test “what if” that happened and see the sensitivity of your asset holdings to that economic event. Very few banks did that type of pressure testing. We have to have some robust software that enables you to look at scenarios and do the “what ifs.” Even though you’re not predicting the future, you’re thinking about what the consequences are under various alternatives, and that’s what they failed to do. I don’t think it was a lack so much in software, but a lack of imagination. Maybe they didn’t want to think that the good times had a possibility of ending.
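The "what if" testing Kaplan describes can be sketched in a few lines. Everything below is hypothetical: the portfolio value, the scenario set, and especially the assumed sensitivity (each 1% fall in house prices impairing 2% of portfolio value) are invented for illustration, not drawn from any bank's model.

```python
# Toy scenario stress test: revalue a hypothetical mortgage-linked
# portfolio under several housing-price scenarios, instead of
# extrapolating only from a period of rising prices.

PORTFOLIO_VALUE = 1_000.0  # book value in $ millions (hypothetical)
SENSITIVITY = 2.0          # assumed: a 1% house-price move shifts portfolio
                           # value by 2% (defaults amplify the price shock)

def stressed_value(value, price_change_pct, sensitivity=SENSITIVITY):
    """Portfolio value after a linear scenario shock, floored at zero."""
    return max(value * (1 + price_change_pct * sensitivity / 100), 0.0)

scenarios = {"prices +5%": 5, "flat": 0, "prices -10%": -10, "prices -25%": -25}
for name, change in scenarios.items():
    print(f"{name:>12}: ${stressed_value(PORTFOLIO_VALUE, change):,.0f}M")
```

The point is not the (deliberately crude) linear model but the discipline: even without predicting when prices turn, tabulating the consequences of each scenario exposes how sensitive the holdings are to an event the historical data never contained.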

Have you adjusted the balanced scorecard methodology due to the recession?
Kaplan: If I had to say there was one thing missing that has been revealed in the last few years, it's that there's nothing about risk assessment and risk management. My current thinking is that companies need a parallel scorecard to their strategy scorecard — a risk scorecard. The risk scorecard asks: What are the things that could go wrong? What hurdles could jump up, and how do we get early warning signals that some of these barriers have suddenly appeared, so we can act quickly to mitigate them? [Risk management] turned out to be an extremely important function that was not done well by many of the [financial services] companies we talked about earlier. Risk management was siloed and considered more of a compliance issue and not a strategic function. Now we see that identification, mitigation and management of risk has to be on an equal level with the strategic process.


Time To Do The Math On Cloud Computing

March 9, 2010

A post from the blog!

Time To Do The Math On Cloud Computing
At the upcoming Cloud Connect conference, we will evaluate a variety of scenarios that determine where and when the move to cloud computing makes financial sense.


Joe Weinman

March 2, 2010 03:13 PM

In two weeks, I'll be in Santa Clara at the industry's newest event, Cloud Connect. I'm chairing a track focused on Cloudonomics, a term I coined a couple of years ago to connote the complex economics of using cloud infrastructure, platform, and software services. There are a number of great technical tracks planned, but mine is focused on what I consider to be the most important question: "Why do cloud?"

This is the question that any business-focused CIO must ask. After all, CIOs have a small number of projects that they can really focus on in any given year, and major initiatives must have a compelling rationale or they won't get supported by senior leadership, including the board. The technology will only be important if the business value is clear and compelling.

In practice, there are a number of reasons to leverage the cloud. One is agility: resources and services that are immediately available for on-demand use clearly enhance agility over months-long engineering, procurement, and installation efforts. This can help to introduce new services, test new code, enter new markets, or meet unexpected sales spikes. Another is user experience. Larger cloud providers' globally dispersed footprint can bring highly interactive processing closer to the end user.

But perhaps topping the list is total cost reduction. According to a recent Yankee Group report, 43% of enterprises cite cost control as a rationale for interest in the cloud. So, how exactly should we think about cloud costs? That's the topic of my track.

Consider this: There's a range of schools of thought out there right now. Some believe cloud computing will take over all IT and that CIOs may as well start making plans to shutter their data centers. At the other extreme, there's a view that cloud computing is currently too expensive and that enterprises should focus their attention elsewhere.

The truth lies between these two extremes. Some applications and data will continue to reside in enterprise data centers. Other services are likely to be purely cloud-resident. And hybrid solutions–sometimes also called virtual private clouds–involving the enterprise data center coupled with cloud infrastructure are likely to offer the lowest total cost.

The key drivers of the economics of the solutions depend on a variety of factors, including the enterprise applications portfolio, demand variability, and user experience requirements. Here are some scenarios:

Cloud Cost Advantages: Assuming that all other factors are comparable, if you assess and benchmark the unit cost of cloud services to be lower than that of your owned infrastructure, then you should switch and save. After all, if gas is thirty cents cheaper per gallon at the station across the street, why wouldn’t you fill up there instead?

Servers As Parked Cars

Spiky Demand: However, cloud services may well cost more than some enterprise data centers on a unit cost basis. One might think that this would imply that you should shy away from the cloud, but that’s not the case. The key reason has to do with the usage-based pricing paradigm of cloud services. The important insight here is that even if cloud services do cost more when they are used, they cost nothing when they aren’t used. This is a very different story than the enterprise data center, where owned resources continue to cost money whether they’re used or not. This is the same difference that exists between an owned (depreciated, leased, or financed) car and a rented one. An owned car parked in your garage still carries costs, whether you drive it or not. The key factor in the economics of the cloud is then how spiky demand is. In effect, if you drive your car every day, it’s going to be cheaper to own it. If you only need a car once a week, renting is probably better. And, if you only need automobile transport for a few minutes each month, maybe you should just take a cab. If the frequency of use is low enough, it can more than compensate for a potentially more expensive unit cost, and a pure cloud strategy, even if more expensive on a unit cost basis, can still offer a compelling value proposition in terms of total cost.

Any Variability in Demand: Interestingly, while both scenarios above lead to a pure cloud advantage, it is often the case that a hybrid scenario is cost-optimal. Virtually all enterprises have some sort of variability in demand. Retailers have Thanksgiving to Christmas as well as Cyber Monday. Tax preparation firms have a peak in February of early filers and a peak on April 15th of procrastinators. Mortgage lenders are at the mercy of seasonal trends in home buying, shifts in interest rates for re-fis, and macroeconomic factors for home equity loans. We can think of all these firms, however, as having a pretty regular baseline, as well as peaks that rise above this baseline. If cloud services have a higher unit cost, a hybrid strategy is typically optimal. The strategy involves using enterprise data center resources to handle the baseline and cloud resources to handle the spikes, and it's referred to as cloud-bursting. It is often better than using dedicated resources built to peak, which then end up underutilized.
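The three sourcing strategies discussed above (own to peak, pure cloud, and cloud-bursting hybrid) can be compared with a back-of-the-envelope model. All unit costs and the demand profile below are invented purely for illustration; real analyses must also fold in the data-management and transport costs mentioned next.

```python
# Toy comparison of the three sourcing strategies for a spiky workload.
# Unit costs and the demand profile are made-up numbers.

OWNED_COST_PER_SERVER_HOUR = 0.08   # amortized owned server, paid 24/7
CLOUD_COST_PER_SERVER_HOUR = 0.12   # higher unit cost, pay only when used

HOURS = 8760                        # one year
baseline, peak, spike_fraction = 100, 400, 0.05
# Hypothetical demand: 100 servers normally, spiking to 400 for 5% of hours.
demand = [peak if h < HOURS * spike_fraction else baseline for h in range(HOURS)]

def owned_only(demand):
    """Provision owned capacity to the peak; pay for all of it every hour."""
    return max(demand) * len(demand) * OWNED_COST_PER_SERVER_HOUR

def cloud_only(demand):
    """Rent exactly what each hour needs; pay only for actual usage."""
    return sum(demand) * CLOUD_COST_PER_SERVER_HOUR

def hybrid(demand, owned_capacity):
    """Own the baseline; burst the excess to the cloud (cloud-bursting)."""
    owned = owned_capacity * len(demand) * OWNED_COST_PER_SERVER_HOUR
    burst = sum(max(d - owned_capacity, 0) for d in demand) * CLOUD_COST_PER_SERVER_HOUR
    return owned + burst

for name, cost in [("owned-to-peak", owned_only(demand)),
                   ("pure cloud", cloud_only(demand)),
                   ("hybrid", hybrid(demand, baseline))]:
    print(f"{name}: ${cost:,.0f}")
```

With these particular numbers the hybrid comes out cheapest, matching the article's claim: the owned baseline runs at full utilization while the expensive cloud capacity is paid for only during the spikes.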

Adjustments to all of these strategies must be made depending on the application and the enterprise architecture. For example, the costs of managing multiple copies of data, as well as data transport, must be factored in.

Given the importance of this area, an entire track has been dedicated to the topic at Cloud Connect, where we will address both the math behind the economics and the technical architectural implications of that math, as well as empirical benchmarks from a variety of real-world examples. To help conduct this analysis, a number of ROI tools and methodologies have now arisen for the cloud, which we will delve into. (Cloud Connect is produced by UBM TechWeb and co-sponsored by InformationWeek.)

Cloud computing is a rapidly changing technology and business model. New dynamic pricing and spot auction markets are arising for capacity, and the future is likely to hold much more in the way of business model and pricing innovation, as well as ecosystem evolution. We will be addressing this topic in depth, and also seeing what the various players in the space view as their advantages and direction.

Cloud Connect promises to be a rich event, and I hope to see you at the ROI and economics track sessions.

Joe Weinman is VP of Strategy and Business Development with AT&T Business Solutions.

Eugene Fama: My Life in Finance

March 8, 2010

Wow! This is one of the most important posts I have ever put on this blog (thanks to a tip from Prof. Greg Mankiw's blog). The original post (published on March 4) is on Fama's own blog.

There are moments in our lives marked by events that will stay with us forever. The life of a great professional, told through his contributions, is one of those rare moments. Reading about the life of a great professional is more than a learning experience; it is a lesson about the future!

I intend this post as a lesson for my students (and for those who are not my students) and colleagues, since we rarely see an autobiographical account of such density!

As I am currently engaged in arguing that information and communication technology (ICT) assets are equivalent to any financial asset, since they embody expected returns and risks, nothing could be more fitting than recommending this post!

I can only wish my readers a good read!


My Life in Finance

By Eugene F. Fama


I was invited by the editors to contribute a professional autobiography for the Annual Review of Financial Economics.  I focus on what I think is my best stuff.  Readers interested in the rest can download my vita from the website of the University of Chicago, Booth School of Business.  I only briefly discuss ideas and their origins, to give the flavor of context and motivation.  I do not attempt to review the contributions of others, which is likely to raise feathers.  Mea culpa in advance.

Finance is the most successful branch of economics in terms of theory and empirical work, the interplay between the two, and the penetration of financial research into other areas of economics and real-world applications. I have been doing research in finance almost since its start, when Markowitz (1952, 1959) and Modigliani and Miller (1958) set the field on the path to become a serious scientific discipline. It has been fun to see it all, to contribute, and to be a friend and colleague to the giants who created the field.


My grandparents emigrated to the U.S. from Sicily in the early 1900s, so I am a third generation Italian-American. I was the first in the lineage to go to university.

My passion in high school was sports. I played basketball (poorly), ran track (second in the state meet in the high jump — not bad for a 5’8″ kid), played football (class B state champions), and baseball (state semi-finals two years). I claim to be the inventor of the split end position in football, an innovation prompted by the beatings I took trying to block much bigger defensive tackles. I am in my high school’s (Malden Catholic) athletic hall of fame.

I went on to Tufts University in 1956, intending to become a high school teacher and sports coach. At the end of my second year, I married my high school sweetheart, Sallyann Dimeco, now my wife of more than 50 years. We have four adult children and ten delightful grandchildren. Sally’s family contributions dwarf mine.

At Tufts I started in romance languages but after two years became bored with rehashing Voltaire and took an economics course. I was enthralled by the subject matter and by the prospect of escaping lifetime starvation on the wages of a high school teacher. In my last two years at Tufts, I went heavy on economics. The professors, as teachers, were as inspiring as the research stars I later profited from at the University of Chicago.

My professors at Tufts encouraged me to go to graduate school. I leaned toward a business school Ph.D. My Tufts professors (mostly Harvard economics Ph.D.s) pushed Chicago as the business school with a bent toward serious economics. I was accepted at other schools, but April 1960 came along and I didn’t hear from Chicago. I called and the dean of students, Jeff Metcalf, answered. (The school was much smaller then.) They had no record of my application. But Jeff and I hit it off, and he asked about my grades. He said Chicago had a scholarship reserved for a qualified Tufts graduate. He asked if I wanted it. I accepted and, except for two great years teaching in Belgium, I have been at the University of Chicago since 1960. I wonder what path my professional life would have taken if Jeff didn’t answer the phone that day. Serendipity!

During my last year at Tufts, I worked for Harry Ernst, an economics professor who also ran a stock market forecasting service. Part of my job was to invent schemes to forecast the market. The schemes always worked on the data used to design them. But Harry was a good statistician, and he insisted on out-of-sample tests. My schemes invariably failed those tests. I didn’t fully appreciate the lesson in this at the time, but it came to me later.

During my second year at Chicago, with an end to course work and prelims in sight, I started to attend the Econometrics Workshop, at that time the hotbed for research in finance. Merton Miller had recently joined the Chicago faculty and was a regular participant, along with Harry Roberts and Lester Telser. Benoit Mandelbrot was an occasional visitor. Benoit presented in the workshop several times, and in leisurely strolls around campus, I learned lots from him about fat-tailed stable distributions and their apparent relevance in a wide range of economic and physical phenomena. Merton Miller became my mentor in finance and economics (and remained so throughout his lifetime). Harry Roberts, a statistician, instilled a philosophy for empirical work that has been my north star throughout my career.

Efficient Markets

Miller, Roberts, Telser, and Mandelbrot were intensely involved in the burgeoning work on the behavior of stock prices (facilitated by the arrival of the first reasonably powerful computers). The other focal point was MIT, with Sydney Alexander, Paul Cootner, Franco Modigliani, and Paul Samuelson. Because his co-author, Merton Miller, was now at Chicago, Franco was a frequent visitor. Like Merton, Franco was unselfish and tireless in helping people think through research ideas. Franco and Mert provided an open conduit for cross-fertilization of market research at the two universities.

At the end of my second year at Chicago, it came time to write a thesis, and I went to Miller with five topics. Mert always had uncanny insight about research ideas likely to succeed. He gently stomped on four of my topics, but was excited by the fifth. From my work for Harry Ernst at Tufts, I had daily data on the 30 Dow-Jones Industrial Stocks. I proposed to produce detailed evidence on (1) Mandelbrot’s hypothesis that stock returns conform to non-normal (fat-tailed) stable distributions and (2) the time-series properties of returns. There was existing work on both topics, but I promised a unifying perspective and a leap in the range of data brought to bear.

Vindicating Mandelbrot, my thesis (Fama 1965a) shows (in nauseating detail) that distributions of stock returns are fat-tailed: there are far more outliers than would be expected from normal distributions – a fact reconfirmed in subsequent market episodes, including the most recent. Given the accusations of ignorance on this score recently thrown our way in the popular media, it is worth emphasizing that academics in finance have been aware of the fat tails phenomenon in asset returns for about 50 years.

My thesis and the earlier work of others on the time-series properties of returns falls under what came to be called tests of market efficiency. I coined the terms “market efficiency” and “efficient markets,” but they do not appear in my thesis. They first appear in “Random Walks in Stock Market Prices,” paper number 16 in the series of Selected Papers of the Graduate School of Business, University of Chicago, reprinted in the Financial Analysts Journal (Fama 1965b).

From the inception of research on the time-series properties of stock returns, economists speculated about how prices and returns behave if markets work, that is, if prices fully reflect all available information. The initial theory was the random walk model. In two important papers, Samuelson (1965) and Mandelbrot (1966) show that the random walk prediction (price changes are iid) is too strong. The proposition that prices fully reflect available information implies only that prices are sub-martingales. Formally, the deviations of price changes or returns from the values required to compensate investors for time and risk-bearing have expected value equal to zero conditional on past information.
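In the notation of Fama (1970), with $\Phi_t$ the information set available at time $t$, this condition can be written as:

```latex
% z_{j,t+1}: deviation of the return on asset j from its equilibrium
% expected value, conditional on the information set \Phi_t
z_{j,t+1} = r_{j,t+1} - E\left(r_{j,t+1} \mid \Phi_t\right),
\qquad
E\left(z_{j,t+1} \mid \Phi_t\right) = 0 .
```

Note that testing the second equality requires a model for $E(r_{j,t+1} \mid \Phi_t)$, the equilibrium expected return.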

During the early years, in addition to my thesis, I wrote several papers on market efficiency (Fama 1963, 1965c, Fama and Blume 1966), now mostly forgotten. My main contribution to the theory of efficient markets is the 1970 review (Fama 1970). The paper emphasizes the joint hypothesis problem hidden in the sub-martingales of Mandelbrot (1966) and Samuelson (1965). Specifically, market efficiency can only be tested in the context of an asset pricing model that specifies equilibrium expected returns. In other words, to test whether prices fully reflect available information, we must specify how the market is trying to compensate investors when it sets prices. My cleanest statement of the theory of efficient markets is in chapter 5 of Fama (1976b), reiterated in my second review “Efficient Markets II” (Fama 1991a).

The joint hypothesis problem is obvious, but only on hindsight. For example, much of the early work on market efficiency focuses on the autocorrelations of stock returns. It was not recognized that market efficiency implies zero autocorrelation only if the expected returns that investors require to hold stocks are constant through time or at least serially uncorrelated, and both conditions are unlikely.

The joint hypothesis problem is generally acknowledged in work on market efficiency after Fama (1970), and it is understood that, as a result, market efficiency per se is not testable. The flip side of the joint hypothesis problem is less often acknowledged. Specifically, almost all asset pricing models assume asset markets are efficient, so tests of these models are joint tests of the models and market efficiency. Asset pricing and market efficiency are forever joined at the hip.

Event Studies

My Ph.D. thesis and other early work on market efficiency do not use the CRSP files, which were not yet available. When the files became available (thanks to years of painstaking work by Larry Fisher), Jim Lorie, the founder of CRSP, came to me worried that no one would use the data and CRSP would lose its funding. He suggested a paper on stock splits, to advertise the data. The result is Fama, Fisher, Jensen, and Roll (1969). This is the first study of the adjustment of stock prices to a specific kind of information event. Such “event studies” quickly became a research industry, vibrant to this day, and the main form of tests of market efficiency. Event studies have also found a practical application — calculating damages in legal cases.
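The mechanics of an event study can be sketched in a few lines: compute abnormal returns around the event dates, average across events, and cumulate. The sketch below uses simulated data and a constant-mean benchmark for simplicity (Fama, Fisher, Jensen, and Roll used a market-model benchmark); all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n_events, window = 50, 21          # 50 hypothetical events, days -10..+10

# Abnormal return = realized return minus an expected-return benchmark.
# Here the benchmark is a constant-mean model and the data are simulated.
abnormal = 0.002 * rng.standard_normal((n_events, window))
abnormal[:, 10] += 0.03            # inject a price reaction on day 0

aar = abnormal.mean(axis=0)        # average abnormal return by event day
caar = np.cumsum(aar)              # cumulative average abnormal return

# Under market efficiency the CAAR should jump at the event and then go
# flat: no systematic drift after day 0.
```

Plotting `caar` against event time is the familiar picture from the split study: prices adjust at the announcement and show no drift afterward.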

The refereeing process for the split study was a unique experience. When more than a year passed without word from the journal, we assumed the paper would be rejected. Then a short letter arrived. The referee (Franco Modigliani) basically said: it’s great, publish it. Never again would this happen!

There is a little-appreciated fact about the split paper. It contains no formal tests (standard errors, t-statistics, etc.). The results were apparently so convincing as confirmation of market efficiency that formal tests seemed irrelevant. But this was before the joint hypothesis problem was recognized, and only much later did we come to appreciate that results in event studies can be sensitive to methodology, in particular, what is assumed about equilibrium expected returns — a point emphasized in Fama (1998).

Michael Jensen and Richard Roll are members of a once-in-a-lifetime cohort of Ph.D. students that came to Chicago soon after I joined the faculty in 1963. Also in this rough cohort are (among others) Ray Ball, Marshall Blume, James MacBeth, Myron Scholes, and Ross Watts. I think I was chairman of all their thesis committees, but Merton Miller and Harry Roberts were deeply involved. Any investment in these and about 100 other Ph.D. students I have supervised has been repaid many times by what I learn from them during their careers.

Forecasting Regressions

In 1975 I published a little empirical paper, “Short-Term Interest Rates as Predictors of Inflation” (Fama 1975). The topic wasn’t new, but my approach was novel. Earlier work uses regressions of the interest rate on the inflation rate for the period covered by the interest rate. The idea is that the expected inflation rate (along with the expected real return) determines the interest rate, so the interest rate should be the dependent variable and the expected inflation rate should be the independent variable. The observed inflation rate is, of course, a noisy proxy for its expected value, so there is a measurement error problem in the regression of the ex ante interest rate on the ex post inflation rate.

My simple insight is that a regression estimates the conditional expected value of the left-hand-side variable as a function of the right-hand-side variables. Thus, to extract the forecast of inflation in the interest rate (the expected value of inflation priced into the interest rate) one regresses the ex post inflation rate on the ex ante interest rate. On hindsight, this is the obvious way to run the forecasting regression, but again it wasn’t obvious at the time.

There is a potential measurement error problem in the regression of the ex post inflation rate on the ex ante (T-bill) interest rate, caused by variation through time in the expected real return on the bill. The model of market equilibrium in “Short-Term Interest Rates as Predictors of Inflation” assumes that the expected real return is constant, and this seems to be a reasonable approximation for the 1953-1971 period of the tests. (It doesn’t work for any later period.) This result raised a furor among Keynesian macroeconomists who postulated that the expected real return was a policy variable that played a central role in controlling investment and business cycles. There was a full-day seminar on my paper at MIT, where my simple result was heatedly attacked. I argued that I didn’t know what the fuss was about, since the risk premium component of the cost of capital is surely more important than the risk-free real rate, and it seems unlikely that monetary and fiscal actions can fine-tune the risk premium. I don’t know if I won the debate, but it was followed by a tennis tournament, and I think I did win that.

The simple idea about forecasting regressions in Fama (1975) has served me well, many times. (When I have an idea, I beat it to death.) I have many papers that use the technique to extract the forecasts of future spot rates, returns, default premiums, etc., in the term structure of interest rates, for example Fama (1976a,c, 1984b, 1986, 1990b, 2005), Fama and Schwert (1979), Fama and Bliss (1987). In a blatant example of intellectual arbitrage, I apply the technique to study forward foreign exchange rates as predictors of future spot rates, in a paper (Fama 1984a) highly cited in that literature. The same technique is used in my work with Kenneth R. French and G. William Schwert on the predictions of stock returns in dividend yields and other variables (Fama and Schwert 1977, Fama and French 1988, 1989). And regressions of ex post variables on ex ante variables are now standard in forecasting studies, academic and applied.

Agency Problems and the Theory of Organizations

In 1976 Michael Jensen and William Meckling published their groundbreaking paper on agency problems in investment and financing decisions (Jensen and Meckling 1976). According to Kim, Morse, and Zingales (2006), this is the second most highly cited theory paper in economics published in the 1970-2005 period. It fathered an enormous literature.

When Mike came to present the paper at Chicago, he began by claiming it would destroy the corporate finance material in what he called the “white bible” (Fama and Miller, The Theory of Finance 1972). Mert and I replied that his analysis is deeper and more insightful, but in fact there is a discussion of stockholder-bondholder agency problems in chapter 4 of our book. Another example that new ideas are almost never completely new!

Spurred by Jensen and Meckling (1976), my research took a turn into agency theory. The early papers on agency theory emphasized agency problems. I was interested in studying how competitive forces lead to the evolution of mechanisms to mitigate agency problems. The first paper, “Agency Problems and the Theory of the Firm” (Fama 1980a) argues that managerial labor markets, inside and outside of firms, act to control managers faced with the temptations created by diffuse residual claims that reduce the incentives of individual residual claimants to monitor managers.

I then collaborated with Mike on three papers (Fama and Jensen (1983a,b, 1985)) that study more generally how different mechanisms arise to mitigate the agency problems associated with “separation of ownership and control” and how an organization’s activities, and the special agency problems they pose, affect the nature of its residual claims and control mechanisms. For example, we argue that the redeemable residual claims of a financial mutual (for example, an open-end mutual fund) provide strong discipline for its managers, but redeemability is cost-effective only when the assets of the organization can be sold quickly with low transactions costs. We also argue that the nonprofit format, in which no agents have explicit residual claims to net cashflows, is a response to the agency problem associated with activities in which there is a potential supply of donations that might be expropriated by residual claimants. Two additional papers (Fama 1990a, 1991b) spell out some of the implications of Fama (1980a) and Fama and Jensen (1983a,b) for financing decisions and the nature of labor contracts.

Kim, Morse, and Zingales (2006) list the 146 papers published during 1970-2005 that have more than 500 cites in the major journals of economics. I’m blatantly bragging, but Fama (1980a) and Fama and Jensen (1983a) are among my six papers on the list. (The others are Fama 1970, Fama and MacBeth 1973, Fama and French 1992, 1993. If the list extended back to ancient times, Fama 1965a and Fama, Fisher, Jensen, and Roll 1969 would also make it.) I think of myself as an empiricist (and a simple-minded one at that), so I like my work in agency theory since it suggests that occasionally theoretical ideas get sprinkled into the mix.

Macroeconomics and International Finance

Toward the end of the 1970s, around the time of the agency theory research, my work took a second turn into macroeconomics and international finance. Fischer Black had similar interests, and I profited from many long discussions with him on this and other issues during the years he spent at Chicago in the office next to mine.

Since they typically assume away transactions costs, asset pricing models in finance do not have a natural role for money. Fama and Farber (1979) model a world in which financial markets are indeed frictionless, but there are transactions costs in consumption that are reduced by holding money. Money then becomes a portfolio asset, and we investigate how nominal bonds (borrowing and lending) allow consumer-investors to split decisions about how much money to hold for transactions purposes from decisions about how much of the purchasing power risk of their money holdings they will bear. We also investigate the pricing of the purchasing power risk of the money supply in the context of the CAPM.

Extending the analysis to an international setting, Fama and Farber (1979) show that exchange rate uncertainty is not an additional risk in international investing when purchasing power parity (PPP) holds, because PPP implies that the real return on any asset is the same to the residents of all countries. The point is obvious, on hindsight, but previous papers in the international asset pricing literature assume that exchange rate uncertainty is an additional risk, without saying anything about PPP, or saying something incorrect.

Three subsequent papers (Fama 1980b, 1983, 1985) examine what the theory of finance says about the role of banks. The first two (Fama 1980b, 1983) argue that in the absence of reserve requirements, banks are just financial intermediaries, much like mutual funds, that manage asset portfolios on behalf of depositors. And like mutual fund holdings, the quantity of deposits has no role in price level determination (inflation). Bank deposits also provide access to an accounting system of exchange (via checks and electronic transfers) that is just an efficient mechanism for moving claims on assets from some consumer-investors to others, without the intervention of a hand-to-hand medium of exchange like currency. Because it pays less than full interest, currency has an important role in price level determination. The role of deposits in price level determination is, however, artificial, induced by the requirement to hold “reserves” with the central bank that pay less than full interest and are exchangeable for currency on demand.

Corporate Finance

As finance matured, it became more specialized. The teaching and research of new people tend to focus entirely on asset pricing or corporate finance. It wasn’t always so. Until several years ago, I taught both. More of my research is in asset-pricing-market-efficiency (66 papers and 1.5 books), but as a result of longtime exposure to Merton Miller, I have always been into corporate finance (15 papers and half a book).

The burning issue in corporate finance in the early 1960s was whether the propositions of Modigliani and Miller (MM 1958) and Miller and Modigliani (MM 1961) about the value irrelevance of financing decisions hold outside the confines of their highly restrictive risk classes (where a risk class includes firms with perfectly correlated net cashflows). With the perspective provided by asset pricing models, which were unavailable to MM, it became clear that their propositions do not require their risk classes. Fama (1978) tries to provide a capstone. The paper argues that the MM propositions hold in any asset pricing model that shares the basic MM assumptions (perfect capital market, including no taxes, no transactions costs, and no information asymmetries or agency problems), as long as either (i) investors and firms have equal access to the capital market (so investors can undo the financing decisions of firms), or (ii) there are perfect substitutes for the securities issued by any firm (with perfect substitute defined by whatever happens to be the right asset pricing model).

The CRSP files opened the gates for empirical asset pricing research (including work on efficient markets). Compustat similarly provides the raw material for empirical work in corporate finance. Fama and Babiak (1968) leap on the new Compustat files to test Lintner’s (1956) hypothesis that firms have target dividend payouts but annual dividends only partially adjust to their targets. Lintner estimates his model on aggregate data. We examine how the model works for the individual firms whose dividend decisions it is meant to explain. It works well in our tests, and it continues to work in subsequent trials (e.g., Fama 1974). But the speed-of-adjustment of dividends to their targets has slowed considerably, that is, dividends have become more “sticky” (Fama and French 2002). The more interesting fact, however, is the gradual disappearance of dividends. In 1978 almost 80% of NYSE-Amex-Nasdaq listed firms paid dividends, falling to about 20% in 1999 (Fama and French 2001).
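Lintner's partial-adjustment hypothesis, the one Fama and Babiak (1968) test firm by firm, is commonly written as (notation mine):

```latex
% D_t: dividends in year t; E_t: earnings in year t;
% \tau: target payout ratio; c: speed of adjustment; e_t: error term
D_t - D_{t-1} = a + c\left(\tau E_t - D_{t-1}\right) + e_t .
```

A lower estimated speed of adjustment $c$ is what "stickier" dividends means here.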

Post-MM corporate finance has two main theories, the pecking order model of Myers (1984) and Myers and Majluf (1984) and the tradeoff model (which has many authors). These theories make predictions about financing decisions when different pieces of the perfect capital markets assumption of MM do not hold. The pecking order model does reasonably well, until the early 1980s when new issues of common stock (which the model predicts are rare) become commonplace (Fama and French 2005). There is some empirical support for the leverage targets that are the centerpiece of the tradeoff model, but the speed-of-adjustment of leverage to its targets is so slow that the existence of targets becomes questionable. (This is the conclusion of Fama and French 2002 and other recent work.) In the end, it’s not clear that the capital structure irrelevance propositions of Modigliani and Miller are less realistic as rough approximations than the popular alternatives. (This is the conclusion of Fama and French 2002.)

In my view, the big open challenge in corporate finance is to produce evidence on how taxes affect market values and thus optimal financing decisions. Modigliani and Miller (1963) suggest that debt has large tax benefits, and taxation disadvantages dividends. To this day, this is the position commonly advanced in corporate finance courses. Miller (1977), however, presents a scenario in which the tax benefits of debt due to the tax deductibility of interest payments at the corporate level are offset by taxation of interest receipts at the personal level, and leverage has no effect on a firm’s market value. Miller and Scholes (1978) present a scenario in which dividend and debt choices have no effect on the market values of firms. Miller (1977) and Miller and Scholes (1978) recognize that there are scenarios in which taxes do affect optimal dividend and debt decisions. In the end, the challenge is empirical measurement of the tax effects (the implicit marginal tax rates) in the pricing of dividends and interest. So far the challenge goes unmet.

Fama and French (1998) take a crack at this first order issue, without success. The problem is that dividend and debt decisions are related to expected net cashflows — the main determinant of the market value of a firm’s securities. Because proxies for expected net cashflows are far from perfect, the cross-section regressions of Fama and French (1998) do not produce clean estimates of how the taxation of dividends and interest affects the market values of a firm’s stocks and bonds. There are also papers that just assume debt has tax benefits that can be measured from tax rate schedules. Without evidence on the tax effects in the pricing of interest, such exercises are empty.

Formal Asset Pricing Models

Without being there one can’t imagine what finance was like before formal asset pricing models. For example, at Chicago and elsewhere, investments courses were about security analysis: how to pick undervalued stocks. In 1963 I taught the first course at Chicago devoted to Markowitz’ (1959) portfolio model and its famous offspring, the asset pricing model (CAPM) of Sharpe (1964) and Lintner (1965).

The CAPM provides the first precise definition of risk and how it drives expected return, until then vague and sloppy concepts. The absence of formal models of risk and expected return placed serious limitations on research that even grazed the topic. For example, the path breaking paper of Modigliani and Miller (1958) uses arbitrage within risk classes to show that (given their assumptions) financing decisions do not affect a firm’s market value. They define a risk class as firms with perfectly correlated net cash flows. This is restrictive and it led to years of bickering about the applicability of their analysis and conclusions. The problem was due to the absence of formal asset pricing models that define risk and how it relates to expected return.

The arrival of the CAPM was like the time after a thunderstorm, when the air suddenly clears. Extensions soon appeared, but the quantum leaps are the intertemporal model (ICAPM) of Merton (1973a), which generalizes the CAPM to a multiperiod world with possibly multiple dimensions of risk, and the consumption CAPM of Lucas (1978), Breeden (1979), and others.

Though not about risk and expected return, any history of the excitement in finance in the 1960s and 1970s must mention the options pricing work of Black and Scholes (1973) and Merton (1973b). These are the most successful papers in economics – ever – in terms of academic and applied impact. Every Ph.D. student in economics is exposed to this work, and the papers are the foundation of a massive industry in financial derivatives.

There are many early tests of the CAPM, but the main survivors are Black, Jensen, and Scholes (BJS 1972) and Fama and MacBeth (1973). Prior to these papers, the typical test of the CAPM was a cross-section regression of the average returns on a set of assets on estimates of their market βs and other variables. (The CAPM predicts, of course, that the β premium is positive, and β suffices to describe the cross-section of expected asset returns.) BJS were suspicious that the slopes in these cross-section regressions seemed too precise (the reported standard errors seemed too small). They guessed rightly that the problem was the OLS assumption that there is no cross-correlation in the regression residuals.

Fama and MacBeth (1973) provide a simple solution to the cross-correlation problem. Instead of a regression of average asset returns on their βs and other variables, one does the regression month-by-month. The slopes are then monthly portfolio returns whose average values can be used to test the CAPM predictions that the β premium is positive and other variables add nothing to the explanation of the cross-section of expected returns. (The point is explained best in chapter 8 of Fama (1976b).) The month-by-month variation in the regression slopes captures all effects of the cross-correlation of the regression residuals, and these effects are automatically embedded in the time-series standard errors of the average slopes. The approach thus captures residual covariances without requiring an estimate of the residual covariance matrix.
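A minimal sketch of the month-by-month procedure on simulated data; all parameters and the data-generating process below are illustrative, not drawn from any real sample.

```python
import numpy as np

rng = np.random.default_rng(42)
n_months, n_assets = 120, 25

# Hypothetical panel: asset betas, a market premium, and a common shock
# that makes the residuals cross-correlated (the problem the month-by-
# month approach sidesteps). All numbers are illustrative.
betas = rng.uniform(0.5, 1.5, n_assets)
market_prem = 0.005 + 0.04 * rng.standard_normal(n_months)
common_shock = 0.02 * rng.standard_normal(n_months)
returns = (np.outer(market_prem, betas)
           + common_shock[:, None]
           + 0.05 * rng.standard_normal((n_months, n_assets)))

# Step 1: cross-section regression of returns on betas, month by month.
slopes = np.empty(n_months)
for t in range(n_months):
    slopes[t], _ = np.polyfit(betas, returns[t], 1)

# Step 2: the monthly slopes are portfolio returns; their time-series
# mean estimates the beta premium, and the time-series standard error of
# the mean absorbs the residual cross-correlation automatically.
premium = slopes.mean()
se = slopes.std(ddof=1) / np.sqrt(n_months)
t_stat = premium / se
```

No estimate of the residual covariance matrix ever appears, which is the point: the month-to-month variation in the slopes carries all of its effects.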

The Fama-MacBeth approach is standard in tests of asset pricing models that use cross-section regressions, but the benefits of the approach carry over to panels (time series of cross-sections) of all sorts. Kenneth French and I emphasize this point (advertise is more accurate) in our corporate finance empirical work (e.g., Fama and French 1998, 2002). Outside of finance, research in economics that uses panel regressions has only recently begun to acknowledge that residual covariance is a pervasive problem. Various new robust regression techniques are available, but the Fama-MacBeth approach remains a simple option.

Given the way my recent empirical work with Kenneth French dumps on the CAPM, it is only fair to acknowledge that the CAPM gets lots of credit for forcing money managers to take more seriously the challenges posed by the work on efficient markets. Before the CAPM, money management was entirely active, and performance reporting was shoddy. The CAPM gave us a clean story about risk and expected return (i.e., a model of market equilibrium) that allowed us to judge the performance of active managers. Using the CAPM, Jensen (1968) rang the bell on the mutual fund industry. Performance evaluation via the CAPM quickly became standard both among academics and practitioners, passive management got a foothold, and active managers became aware that their feet would forever be put to the fire.

The Three-Factor Model

The evidence in Black, Jensen, and Scholes (1972) and Fama and MacBeth (1973) is generally favorable to the CAPM, or at least to Black’s (1972) version of the CAPM. Subsequently, violations of the model, labeled anomalies, begin to surface. Banz (1981) finds that β does not fully explain the higher average returns of small (low market capitalization) stocks. Basu (1983) finds that the positive relation between the earnings-price ratio (E/P) and average return is left unexplained by market β. Rosenberg, Reid, and Lanstein (1985) find a positive relation between average stock return and the book-to-market ratio (B/M) that is missed by the CAPM. Bhandari (1988) documents a similar result for market leverage (the ratio of debt to the market value of equity, D/M). Ball (1978) and Keim (1988) argue that variables like size, E/P, B/M, and D/M are natural candidates to expose the failures of asset pricing models as explanations of expected returns since all these variables use the stock price, which, given expected dividends, is inversely related to the expected stock return.

The individual papers on CAPM anomalies did not seem to threaten the dominance of the model. My guess is that viewed one at a time, the anomalies seemed like curiosity items that show that the CAPM is just a model, an approximation that can’t be expected to explain the entire cross-section of expected stock returns. I see no other way to explain the impact of Fama and French (1992), “The Cross-Section of Expected Stock Returns,” which contains nothing new. The CAPM anomalies in the paper are those listed above, and the evidence that there is no reliable relation between average return and market β was available in Reinganum (1981) and Lakonishok and Shapiro (1986). Apparently, seeing all the negative evidence in one place led readers to accept our conclusion that the CAPM just doesn’t work. The model is an elegantly simple and intuitively appealing tour de force that laid the foundations of asset pricing theory, but its major predictions seem to be violated systematically in the data.

An asset pricing model can only be dethroned by a model that provides a better description of average returns. The three-factor model (Fama and French 1993) is our shot. The model proposes that along with market β, sensitivities to returns on two additional portfolios, SMB and HML, explain the cross-section of expected stock returns. The size factor, SMB, is the difference between the returns on diversified portfolios of small and big stocks, and the value/growth factor, HML, is the difference between the returns on diversified portfolios of high and low B/M (i.e., value and growth) stocks. The SMB and HML returns are, of course, brute force constructs designed to capture the patterns in average returns related to size and value versus growth stocks that are left unexplained by the CAPM.

Ken French and I have many papers that address questions about the three-factor model and the size and value/growth patterns in average returns the model is meant to explain. For example, to examine whether the size and value/growth patterns in average returns observed by Fama and French (1992) for the post-1962 period are the chance result of data dredging, Davis, Fama, and French (2000) extend the tests back to 1927, and Fama and French (1998) examine international data. The results are similar to those in Fama and French (1992). Fama and French (1996, 2008) examine whether the three-factor model can explain the anomalies that cause problems for the CAPM. The three-factor model does well on the anomalies associated with variants of price ratios, but it is just a model and it fails to absorb some other anomalies. The most prominent is the momentum in short-term returns documented by Jegadeesh and Titman (1993), which is a problem for all asset pricing models that do not add exposure to momentum as an explanatory factor. After 1993, work, both academic and applied, directed at measuring the performance of managed portfolios routinely uses the benchmarks provided by the three-factor model, often augmented with a momentum factor (for example, Carhart 1997, and more recently Kosowski et al. 2006 or Fama and French 2009).

From its beginnings there has been controversy about how to interpret the size and especially the value/growth premiums in average returns captured by the three-factor model. Fama and French (1993, 1996) propose a multifactor version of Merton’s (1973a) ICAPM. The weakness of this position is the question it leaves open. What are the state variables that drive the size and value premiums, and why do they lead to variation in expected returns missed by market β? There is a literature that proposes answers to this question, but in my view the evidence so far is unconvincing.

The chief competitor to our ICAPM risk story for the value premium is the overreaction hypothesis of DeBondt and Thaler (1987) and Lakonishok, Shleifer, and Vishny (1994). They postulate that market prices overreact to the recent good times of growth stocks and the bad times of value stocks. Subsequent price corrections then produce the value premium (high average returns of value stocks relative to growth stocks). The weakness of this position is the presumption that investors never learn about their behavioral biases, which is necessary to explain the persistence of the value premium.

Asset pricing theory typically assumes that portfolio decisions depend only on the properties of the return distributions of assets and portfolios. Another possibility, suggested by Fama and French (2007) and related to the stories in Daniel and Titman (1997) and Barberis and Shleifer (2003), is that tastes for other characteristics of assets, unrelated to properties of returns, also play a role. (“Socially responsible investing” is an example.) Perhaps many investors simply get utility from holding growth stocks, which tend to be profitable fast-growing firms, and they are averse to value stocks, which tend to be relatively unprofitable with few growth opportunities. If such tastes persist, they can have persistent effects on asset prices and expected returns, as long as they don’t lead to arbitrage opportunities.

To what extent is the value premium in expected stock returns due to ICAPM state variable risks, investor overreaction, or tastes for assets as consumption goods? We may never know. Moreover, given the blatant empirical motivation of the three-factor model (and the four-factor offspring of Carhart 1997), perhaps we should just view the model as an attempt to find a set of portfolios that span the mean-variance-efficient set and so can be used to describe expected returns on all assets and portfolios (Huberman and Kandel 1987).

The academic research on the size and value premiums in average stock returns has transformed the investment management industry, both on the supply side and on the demand side. Whatever their views about the origins of the premiums, institutional investors commonly frame their asset allocation decisions in two dimensions, size and value versus growth, and the portfolio menus offered by money managers are typically framed in the same way. And it is testimony to the credibility of research in finance that all this happened in a very short period of time.


The first 50 years of research in finance has been a great ride. I’m confident finance will continue to be a great ride into the indefinite future.

Addendum — Provided by the tenured finance faculty of Chicago Booth

When my paper was posted on the Forum of the website of the Chicago Booth Initiative on Global Markets, the tenured finance faculty introduced it with the following comments. EFF
This post makes available an autobiographical note by Gene Fama that was commissioned by the Annual Review of Financial Economics. Gene’s remarkable career and vision, to say nothing of his engaging writing style, make this short piece a must read for anyone interested in finance. However, as his colleagues, we believe his modesty led him to omit three crucial aspects of his contributions.

First, Gene was (and still is) essential to shaping the nature of the finance group at Chicago. As he explains in a somewhat understated fashion, he and Merton Miller transformed the finance group, turning it into a research-oriented unit. For the last 47 years he has held court on Tuesday afternoons in the finance workshop, in a room that now bears his name. Through the workshop, generations of students, colleagues, and visitors have been and continue to be exposed to his research style of developing and rigorously testing theories with real-world data that has become the hallmark of Chicago finance.

Second, and equally important, is his leadership. Rather than rest on his laurels or impose his own views on the group, Gene has always sought the truth, even when it appeared at odds with his own views. He has promoted a contest of ideas and outlooks, all subject to his exceptional standards of quality. The makeup of the group has shifted as the world and what we know about it has changed. The current finance group at Chicago includes a diverse set of people who specialize in all areas of modern finance, including behavioral economics, pure theory, and emerging, non-traditional areas such as entrepreneurship and development that were unheard of when Gene arrived at Chicago. Contrary to the caricatured descriptions, there is no single Chicago view of finance, except that the path to truth comes from the rigorous development and confrontation of theories with data.

Finally, each of us has our own personal examples of Gene’s generosity, kindness and mentorship. He is an impeccable role model. He is in his office every day, and his door is always open. By personal example, he sets the standards for the values and ethics by which we do research and run our school. All of us have learned enormously from Gene’s generous willingness to discuss his and our work, and gently and patiently to explain and debate that work with generations of faculty. Gene likely enjoys as high a ranking in the “thanks for comments” footnotes of published papers as he does in citations. He has made the finance group an exciting, collegial, and welcoming place to work. He has greatly enhanced all of our research careers and accomplishments. He is a great friend, and we can only begin to express our gratitude.

We hope you enjoy reading Gene’s description of his career that might just as well be described as the story of how modern finance evolved at Chicago.

Gene’s Tenured Finance Faculty Colleagues at Chicago Booth

John H. Cochrane, George M. Constantinides, Douglas W. Diamond, Milton Harris, John C. Heaton, Steven Neil Kaplan, Anil K Kashyap, Richard Leftwich, Tobias J. Moskowitz, Lubos Pastor, Raghuram G. Rajan, Richard Thaler, Pietro Veronesi, Robert W. Vishny, and Luigi Zingales

The comments of Andy Lo and George Constantinides are gratefully acknowledged. Special thanks to John Cochrane, Kenneth French, and Tobias Moskowitz.

Literature Cited

Ball R. 1978. Anomalies in relationships between securities’ yields and yield-surrogates. Journal of Financial Economics. 6:103-126.

Banz RW. 1981. The relationship between return and market value of common stocks. Journal of Financial Economics. 9:3-18.

Barberis N, Shleifer A. 2003. Style investing. Journal of Financial Economics. 68:161-199.

Basu S. 1977. Investment performance of common stocks in relation to their price-earnings ratios: A test of the efficient market hypothesis. Journal of Finance. 32:663-682.

Basu S. 1983. The relationship between earnings yield, market value, and return for NYSE common stocks: Further evidence. Journal of Financial Economics. 12:129-56.

Bhandari LC. 1988. Debt/equity ratio and expected common stock returns: Empirical evidence. Journal of Finance. 43:507-28.

Black F. 1972. Capital market equilibrium with restricted borrowing. Journal of Business. 45:444‑454.

Black F, Jensen MC, Scholes M. 1972. The capital asset pricing model: Some empirical tests. Studies in the Theory of Capital Markets. Jensen MC, ed. New York: Praeger. 79-121.

Black F, Scholes M. 1973. The pricing of options and corporate liabilities. Journal of Political Economy. 81: 638-654.

Breeden DT. 1979. An intertemporal asset pricing model with stochastic consumption and investment opportunities. Journal of Financial Economics. 7:265-296.

Carhart MM. 1997. On persistence in mutual fund performance. Journal of Finance. 52:57-82.

Daniel K, Titman S. 1997. Evidence on the characteristics of cross sectional variation in stock returns. Journal of Finance. 52:1-33.

Davis JL, Fama EF, French KR. 2000. Characteristics, covariances, and average returns: 1929-1997. Journal of Finance. 55:389-406.

DeBondt WFM, Thaler RH. 1987. Further evidence on investor overreaction and stock market seasonality. Journal of Finance. 42:557-581.

Fama EF. 1963. Mandelbrot and the stable Paretian hypothesis. Journal of Business. 36:420-429.

Fama EF. 1965a. The behavior of stock market prices. Journal of Business. 38:34-105.

Fama EF. 1965b. Random walks in stock market prices. Financial Analysts Journal September/October. 55-59.

Fama EF. 1965c. Tomorrow on the New York Stock Exchange. Journal of Business. 38:285-299.

Fama EF. 1970. Efficient capital markets: A review of theory and empirical work. Journal of Finance. 25:383-417.

Fama EF. 1974. The empirical relationships between the dividend and investment decisions of firms. American Economic Review. 64:304-318.

Fama EF. 1975. Short-term interest rates as predictors of inflation. American Economic Review. 65:269-282.

Fama EF. 1976a. Forward rates as predictors of future spot rates. Journal of Financial Economics. 3:361-377.

Fama EF. 1976b. Foundations of Finance. New York: Basic Books.

Fama EF. 1976c. Inflation uncertainty and expected returns on Treasury bills. Journal of Political Economy. 84: 427-448.

Fama EF. 1978. The effects of a firm’s investment and financing decisions on the welfare of its securityholders. American Economic Review. 68:272-284.

Fama EF. 1980a. Agency problems and the theory of the firm. Journal of Political Economy. 88:288-307.

Fama EF. 1980b. Banking in the theory of finance. Journal of Monetary Economics. 6:39-57.

Fama EF. 1983. Financial intermediation and price level control. Journal of Monetary Economics. 12:7-28.

Fama EF. 1984a. Forward and spot exchange rates. Journal of Monetary Economics. 14:319‑338.

Fama EF. 1984b. The information in the term structure. Journal of Financial Economics. 13:509-528.

Fama EF. 1984c. Term premiums in bond returns. Journal of Financial Economics. 13:529-546.

Fama EF. 1985. What’s different about banks? Journal of Monetary Economics. 15:29-39.

Fama EF. 1986. Term premiums and default premiums in money markets. Journal of Financial Economics. 17:175-196.

Fama EF. 1990a. Contract costs and financing decisions. Journal of Business. 63:S71‑91.

Fama EF. 1990b. Term structure forecasts of interest rates, inflation, and real returns. Journal of Monetary Economics. 25:59-76.

Fama EF. 1991a. Efficient markets II. Journal of Finance. 46:1575-1617.

Fama EF. 1991b. Time, salary, and incentive payoffs in labor contracts. Journal of Labor Economics. 9:25-44.

Fama EF. 1998. Market efficiency, long-term returns, and behavioral finance. Journal of Financial Economics. 49:283-306.

Fama EF. 2005. The behavior of interest rates. Review of Financial Studies. 19:359-379.

Fama EF, Babiak H. 1968. Dividend policy of individual firms: An empirical analysis. Journal of the American Statistical Association. 63:1132-1161.

Fama EF, Bliss RR. 1987. The information in long-maturity forward rates. American Economic Review. 77:680-692.

Fama EF, Blume M. 1966. Filter rules and stock market trading. Journal of Business. 39:226‑241.

Fama EF, Farber A. 1979. Money, bonds and foreign exchange. American Economic Review. 69:639-649.

Fama EF, Fisher L, Jensen M, Roll R. 1969. The adjustment of stock prices to new information. International Economic Review. 10: 1-21.

Fama EF, French KR. 1988. Dividend yields and expected stock returns. Journal of Financial Economics. 22: 3-25.

Fama EF, French KR. 1989. Business conditions and expected returns on stocks and bonds. Journal of Financial Economics. 25: 23-49.

Fama EF, French KR. 1992. The cross-section of expected stock returns. Journal of Finance. 47:427-465.

Fama EF, French KR. 1993. Common risk factors in the returns on stocks and bonds. Journal of Financial Economics. 33:3-56.

Fama EF, French KR. 1995. Size and book-to-market factors in earnings and returns. Journal of Finance. 50:131-156.

Fama EF, French KR. 1996. Multifactor explanations of asset pricing anomalies. Journal of Finance. 51: 55-84.

Fama EF, French KR. 1997. Industry costs of equity. Journal of Financial Economics. 43:153-193.

Fama EF, French KR. 1998. Value versus growth: The international evidence. Journal of Finance. 53:1975-1999.

Fama EF, French KR. 1998. Taxes, financing decisions, and firm value. Journal of Finance. 53:819-843.

Fama EF, French KR. 2001. Disappearing dividends: Changing firm characteristics or lower propensity to pay? Journal of Financial Economics. 60:3-43.

Fama EF, French KR. 2002. Testing tradeoff and pecking order predictions about dividends and debt. Review of Financial Studies. 15:1-33.

Fama EF, French KR. 2005. Financing decisions: Who issues stock? Journal of Financial Economics. 76:549-582.

Fama EF, French KR. 2006. The value premium and the CAPM. Journal of Finance. 61:2163‑2185.

Fama EF, French KR. 2007. Disagreement, tastes, and asset prices. Journal of Financial Economics. 83:667-689.

Fama EF, French KR. 2008. Dissecting anomalies. Journal of Finance. 63:1653-1678.

Fama EF, French KR. 2009. Luck versus skill in the cross-section of mutual fund returns. Manuscript, University of Chicago, December, forthcoming in the Journal of Finance.

Fama EF, Jensen MC. 1983a. Separation of ownership and control. Journal of Law and Economics. 26:301-25.

Fama EF, Jensen MC. 1983b. Agency problems and residual claims. Journal of Law and Economics. 26:327-49.

Fama EF, Jensen MC. 1985. Organizational forms and investment decisions. Journal of Financial Economics. 14:101-120.

Fama EF, MacBeth JD. 1973. Risk, return, and equilibrium: Empirical tests. Journal of Political Economy. 81:607-636.

Fama EF, Miller MH. 1972. The Theory of Finance. New York: Holt, Rinehart, and Winston.

Fama EF, Schwert GW. 1979. Inflation, interest and relative prices. Journal of Business. 52:183‑209.

Fama EF, Schwert GW. 1977. Asset returns and inflation. Journal of Financial Economics. 5:115-146.

Huberman G, Kandel S. 1987. Mean-variance spanning. Journal of Finance. 42: 873-888.

Jegadeesh N, Titman S. 1993. Returns to buying winners and selling losers: Implications for stock market efficiency. Journal of Finance. 48:65-91.

Jensen MC. 1968. The performance of mutual funds in the period 1945-1964. Journal of Finance. 23:389-416.

Jensen MC, Meckling WH. 1976. Theory of the firm: Managerial behavior, agency costs and ownership structure. Journal of Financial Economics. 3:305-60.

Keim DB. 1988. Stock market regularities: A synthesis of the evidence and explanations. Stock Market Anomalies. Dimson E, ed. Cambridge: Cambridge University Press.

Kim EH, Morse A, Zingales L. 2006. What has mattered in economics since 1970. Journal of Economic Perspectives. 20:189-202.

Kosowski R, Timmermann A, Wermers R, White H. 2006. Can mutual fund “stars” really pick stocks? New evidence from a bootstrap analysis. Journal of Finance. 61:2551-2595.

Lakonishok J, Shapiro AC. 1986. Systematic risk, total risk, and size as determinants of stock market returns. Journal of Banking and Finance. 10:115-132.

Lakonishok J, Shleifer A, Vishny RW. 1994. Contrarian investment, extrapolation, and risk. Journal of Finance. 49:1541-1578.

Lintner J. 1956. Distribution of incomes of corporations among dividends, retained earnings and taxes. American Economic Review. 46:97-113.

Lintner J. 1965. The valuation of risk assets and the selection of risky investments in stock portfolios and capital budgets. Review of Economics and Statistics. 47:13-37.

Lucas RE Jr. 1978. Asset prices in an exchange economy. Econometrica. 46:1429-1446.

Mandelbrot B. 1966. Forecasts of future prices, unbiased markets, and martingale models. Journal of Business (Special Supplement, January). 39:242-255.

Markowitz H. 1952. Portfolio selection. Journal of Finance. 7:77-99.

Markowitz H. 1959. Portfolio Selection: Efficient Diversification of Investments. Cowles Foundation Monograph No. 16. New York: John Wiley & Sons, Inc.

Merton RC. 1973a. An intertemporal capital asset pricing model. Econometrica. 41:867-887.

Merton RC. 1973b. Theory of rational options pricing. Bell Journal of Economics and Management Science. 4:141-183.

Miller MH. 1977. Debt and taxes. Journal of Finance. 32:261‑275.

Miller MH, Modigliani F. 1961. Dividend policy, growth, and the valuation of shares. Journal of Business. 34:422-433.

Miller MH, Scholes MS. 1978. Dividends and taxes. Journal of Financial Economics. 6:333‑64.

Modigliani F, Miller MH. 1958. The cost of capital, corporation finance, and the theory of investment. American Economic Review. 48:261-97.

Modigliani F, Miller MH. 1963. Corporate income taxes and the cost of capital: A correction. American Economic Review. 53:433-443.

Myers SC. 1984. The capital structure puzzle. Journal of Finance. 39:575-592.

Myers SC, Majluf NS. 1984. Corporate financing and investment decisions when firms have information the investors do not have. Journal of Financial Economics. 13:187-221.

Reinganum MR. 1981. A new empirical perspective on the CAPM. Journal of Financial and Quantitative Analysis. 16:439-462.

Rosenberg B, Reid K, Lanstein R. 1985. Persuasive evidence of market inefficiency. Journal of Portfolio Management. 11:9-17.

Samuelson P. 1965. Proof that properly anticipated prices fluctuate randomly. Industrial Management Review. 6:41-49.

Sharpe WF. 1964. Capital asset prices: A theory of market equilibrium under conditions of risk. Journal of Finance. 19:425-442.




Where SOA Meets Cloud

March 5, 2010

Today's post from the blog!

Where SOA Meets Cloud
David Linthicum

More Confusion Around Cloud Computing and SOA

By David Linthicum on March 5, 2010 7:17 AM

I caught this article on the UK CIO Magazine Web site entitled, “Cloud Computing is the new SOA.” It is essentially a debate around how the hype surrounding cloud computing looks a lot like the hype that surrounded SOA for so many years.

“So let’s get to the topic at hand: why is Cloud Computing the new SOA?

From an IT or enterprise architect’s point of view, there are definitely similarities. If we put the commercial and financial aspects of the Cloud Computing model to one side for a moment, and just concentrate on how Cloud Computing platforms work to deliver software functionality, then what we’re looking at is a software service delivery platform – something that is conceptually at the heart of every SOA initiative. From an application and data integration standpoint, too, the principles of SOA shine through Cloud Computing very strongly indeed.”

While I do see similar patterns, and as I highlight very clearly in my book, we need to distinguish between SOA and cloud computing, and the best way to do that is to consider SOA as a pattern of architecture, and cloud computing as a potential set of target platforms for that architecture.  Pretty simple, but I suspect that many will confuse SOA and cloud computing going forward.

Indeed, when looking at cloud computing, SOA is the best approach. However, SOA has a lot of baggage these days considering that the concept has been around for years, and while many enterprises may have purchased an ESB or two (AKA “SOA in a box,” yeah right!), they still don’t have a clue how to leverage SOA to drive architecture.

So, now comes the cloud computing opportunity, and another chance to get our architectures healthy and to leverage private and public cloud resources that could drive more efficiency and effectiveness into the enterprise. To do this we need to get a good grasp of SOA, the “what” you do, and a good grasp of cloud computing, the architectural options.

I’m still holding out hope.
