What to read (and what to ignore) in analyst reports
I think it’s important to understand the intentions of “sell-side”
analysts, which is the industry term for equity analysts who work for
the brokerage houses. These analysts don’t write their reports for the
public. They write reports to serve the institutional investment
community who are their clients. These institutions, such as mutual
funds, pension funds and hedge funds, pay for stock analysis through
trading commissions. They direct more trading dollars to firms who
employ analysts they like. So, the job of a sell-side analyst is to be
well-liked by clients, and one way to do this is to publish reports that
clients find interesting. The reports mostly serve as marketing pieces
for the analyst because they initiate direct conversations with clients.

When
you realize research reports are really marketing pieces, it’s easy to
see why several smart analysts can have wildly different opinions on the
same stock. It’s easier to get attention from clients if your story on a
stock has a different angle. So, while analysts certainly have positive
intentions and try to be right all of the time, the real motivating
factor behind their work is to form a differentiated view and market
that as a report with a stock target price and recommendation (usually
buy, hold or sell) on the front page.
If all you do is look to a sell-side analyst’s report for a target price or buy/sell recommendation, then you’re missing the point, and you may as well never read another report. In the world of professional money management, most clients (called the “buy side”) don’t care at all about an analyst’s recommendation or target price. Instead, they care about the data an analyst may have gathered, or the critical thinking that went into making a particular recommendation or earnings forecast.
In other words, professional investors already have an investment thesis on a stock they are following, and they really don’t look to sell-side analysts to give them bottom-line recommendations. Instead, they gather all kinds of varying viewpoints from competing analysts and consider which ideas, opinions and data points make the most sense.
Unfortunately, the investment community is designed to focus on short-term performance, so analysts tend to present analysis to support a short-term recommendation. Will Apple’s gross margin rise or fall as its manufacturing partners switch over to making the newest iPhone? Will the timing of Samsung’s newest smartphone conflict with Apple’s launch? These are just two examples of the short-term-relevant but long-term-irrelevant problems that analysts spend time trying to solve. In my first experience as an analyst on Bay Street in 2000, I listened to my colleague, who covered the oil and gas sector, talking about a cold snap that would drive up demand for natural gas over the next few weeks. It couldn’t be more irrelevant to the long-term performance of the business, and I was shocked that anyone cared about such nonsense. But they did. And they still do.
Some of the time, you’ll see analysts discuss factors that really do matter in the long term. Will the rising market share of Android combined with low-cost hardware from China put pressure on Apple’s ability to charge as much as it does for iPhones? That’s a long-term trend worth paying attention to, in my view. But even when analysts discuss this stuff, they tend to focus on how it will affect earnings in the next 12 months, rather than thinking about the longer term effects.
So, should you bother to read analyst reports or think about the details of these reports as written about in the financial media? Yes and no. I think you should completely throw away any target price and recommendation information, and toss out any discussion that doesn’t pertain to the long-term performance of the business. Leave that to the pros who chase this kind of unimportant information. Spend your time looking at the more interesting discussions that actually matter to the long-term performance of a business.
Is Portfolio Theory Harming Your Portfolio?
Executive Summary
Modern Portfolio Theory (MPT)
teaches us that active equity managers who use judgment to make investment
decisions won’t be able to match the returns (after fees and expenses) of
blindly-invested, passively-managed index funds. Data on returns supports the theory, so it’s
no surprise that investors are leaving actively managed funds in droves for the
better average returns of super-diversified index strategies. Yet the reality is much murkier than we’ve
been led to believe.
It turns out that the portfolio
theories which inspired the creation and popularity of index funds and
top-down, quantitatively-driven, index-like strategies are both flawed and
impractical. There’s compelling evidence,
moreover, that a subset of active managers do persistently outperform
indexes. However, this important fact
has been lost because we allow MPT to define the debate in its own misleading
terms, tilting the field in its favor and hiding the reality about active
manager performance in a complex game of circular arguments.
MPT relies on a number of
unrealistic assumptions including an inaccurate definition of risk. Yet this characterization of risk sets the
rules for comparing active vs. passive strategies, often causing active
strategies to appear more risky and less efficient than their index
counterparts. The same flawed logic is
used to risk-adjust returns, biasing them downward for more active,
concentrated managers, and rendering this highly important measure highly
suspect. Furthermore, reliance on MPT’s
measure of risk pressures active managers to super-diversify. The average active fund is thus disfigured to
the point where the typical “active” manager is not very active at all, casting
the fund in an unfavorable light in a beauty contest versus super-efficient
index funds.
Stripping away the influence of
portfolio theory involves isolating and evaluating the relatively small group
of equity managers who rely heavily on judgment to build concentrated equity
portfolios. Empirical data from multiple
studies show that these concentrated managers, in fact, persistently outperform
indexes. The implications of this statement
are enormous. Concentrated manager
returns present the best test of whether human judgment can add value in
allocating capital, and they win, convincingly.
Yet while judgment has prevailed over passive investing, few have taken
notice. Most investors continue to look
at average active manager returns, not recognizing that these returns are
minimally influenced by judgment.
Regardless of MPT’s shortcomings
on both a theoretical and empirical level, its dominating influence will not
easily be dislodged. MPT is deeply woven
into the fabric of our financial system; its mathematical grounding and precise
answers inspire confidence. Further, its
application is crucial in bringing increased scale and profitability to the
financial services industry. Few want to
see change. As such, common sense and
judgment will continue to diminish in importance as top-down, quantitative
strategies and blind diversification gain investment dollars.
An informed investor should
welcome this shift. As
highly-diversified strategies gain assets, inefficiencies become more prevalent
because share prices are increasingly driven by factors other than
fundamentals. Individual investors,
seeking to exploit these inefficiencies and outperform indexes, should invest
in several concentrated funds with strong track records. Managers of these funds have proven
themselves adept at turning inefficiencies into strong returns for their
investors, and persistence data demonstrates that past performance can indicate
which managers are likely to continue to outperform. Concentrated fund returns may exhibit more
volatility than indexes, but we now have proof that over the long-term, good
judgment will be rewarded.
Is Portfolio Theory Harming Your Portfolio?
Paul Samuelson was a giant in the field of economics. He taught the subject at M.I.T. for over fifty
years, rewrote large portions of economic theory, and was the first American
awarded the Nobel Prize in Economics.
Samuelson’s work was usually at
the vanguard of economic theory. He was
instrumental in bringing mathematical rigor to the soft science of economics
and it wasn’t surprising that he would attempt to engender the same
metamorphosis in economics’ close cousin, finance. So it was in the early 1970’s that, based on
both his own work and that of a few other important scholars, he became convinced
that human judgment could be proven useless in making investment decisions in
the stock market. Moreover, the behavior
of financial instruments could better be described and predicted using
mathematics and statistics.
In 1974 he penned an article
entitled, “Challenge to Judgment.” In
it, Professor Samuelson, who represented “the new world of the academics with
their stochastic processes”, challenged the old, practical world of money
managers to show that any group of them could consistently beat the market
averages. Absent that proof, Samuelson
argued that portfolio managers should “go out of business – take up plumbing,
teach Greek, or help produce the annual GNP by serving as corporate
executives.”[1] Investors were better off investing in a highly
diversified, passively-managed portfolio that mimicked an index than using
judgment to pick stocks.
The Challenge marked the start of
a dramatic shift in our approach to finance.
Up to this point we had mostly evaluated investments one at a time,
carefully trying to understand the specific circumstances around each to derive
its chances of success or failure and determine its value. However, the compelling new theories and
mathematical formulae from the world of academia suggested we could do better
by building large portfolios based on top-down mathematical models which
replaced or minimized the need for judgment.
Samuelson’s Challenge was never
adequately addressed by the active fund management community. Perhaps awed by the brilliance of the
theories, the credentials of the academicians behind them, and the unassailable
mathematical “proof,” practitioners in the fund industry seemed to opt for the
“if you can’t beat ‘em, join ‘em” strategy.
Samuelson presciently forecasted this in his piece as well, noting that
the two worlds – the practicing active managers and the academic
quantitative-economists – would begin to converge. The two are now so intertwined that it is
often difficult to tell where one stops and the other starts.
Unable to meet Samuelson’s
Challenge, active managers have steadily ceded share to passive-style
vehicles. Passively managed funds now
control 20% of all domestic equity fund assets, according to Morningstar, from
almost nothing thirty years ago[2]. It is likely this figure dramatically
understates the case since many actively managed funds are so highly
diversified they should be reclassified “quasi-passive.” Passively managed equity funds have recorded
positive flows for over a decade while active equity funds are on track for
three straight years of outflows[3]. And though the outflows from actively managed
funds are small relative to the massive size of the industry, the directional
signal is telling.
The reason for the outflows is no
mystery: Samuelson and his cadre appear to be correct. Active managers in general have been shown to
underperform passive funds, especially when taking into account their higher
management fees, taxes, sales charges, and trading costs. If you can make more money in index funds
then why bother with the hassle of trying to find a good manager? According to Samuelson, there’s no such
thing.
The equation is not as simple as
it seems, however, and individual investors may serve themselves well by
digging deeper into the active versus passive debate before making the
switch. There’s compelling evidence that
the core theories behind the push to passive management do not work and they
distort the facts around the passive versus active debate, giving passive
management the false appearance of having an edge. Most importantly, there is compelling
empirical research that shows active managers who are truly “active” do persistently
outperform indexes. The astute
individual investor can seize the opportunity that blind, passive index
investing provides in the form of increased market inefficiencies by hiring
active managers who have shown the ability to exploit and profit from these
inefficiencies.
History
We can trace the beginning of our fascination with the idea
of passively managed funds back to 1952.
That year, a student of linear programming, Harry Markowitz, first
applied his craft to the world of finance in a paper entitled “Portfolio
Selection.” In it, Markowitz provided
mathematical proof that proper diversification could minimize a portfolio’s
variance for a given level of return. Variance
was used as a proxy for risk because assets whose prices were more volatile
were seen as more likely to produce losses.
It was the first time anyone had formally quantified this tradeoff
between risk and return. Paying special
attention to how an asset’s returns correlated with other assets allowed
mathematicians to create groups of portfolios which minimized risk for a given
level of return, or that maximized return for a given level of risk. These large, mathematically optimized
portfolios formed the “efficient frontier” and helped inspire today’s highly
diversified, passively managed funds.
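For readers who want to see the machinery, here is a minimal sketch of the optimization Markowitz posed, written in modern notation (the notation is ours, not his):

    \min_{w} \; w^{\top}\Sigma\,w \quad \text{subject to} \quad w^{\top}\mu = \mu_{\text{target}}, \quad \textstyle\sum_i w_i = 1

where w is the vector of portfolio weights, \Sigma is the covariance matrix of asset returns, and \mu is the vector of expected returns. Sweeping \mu_{\text{target}} across possible values traces out the efficient frontier described above.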
“Portfolio Selection” spawned
William Sharpe’s “Capital Asset Pricing Model” (CAPM),[4]
which made Markowitz’s work more user-friendly.
CAPM introduced “beta,” a measure that captured a security’s
covariance with an underlying market index (rather than with every other security in a
portfolio), and that represented systematic risk. The name of the game according to CAPM was to
diversify away stock-specific or idiosyncratic risk leaving only market
(systematic) risk, which was defined by beta.
According to the theory, investors are foolish to hold a small number of
stocks because they’re taking stock-specific risk when they don’t have to. Since other investors are buying the same
securities in diversified portfolios, the non-diversified investor will bear
more risk for equal return and therefore pay too much for a given stock which
is priced for inclusion in a diversified portfolio. A portfolio should be optimized in a manner
such that it has the lowest possible risk (beta) for a given level of expected
return which in practice means holding the market portfolio and lending or
borrowing to adjust risk.
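In symbols, CAPM’s two central relationships are usually written as

    \beta_i = \frac{\operatorname{Cov}(R_i, R_m)}{\operatorname{Var}(R_m)}, \qquad \mathbb{E}[R_i] = R_f + \beta_i \left( \mathbb{E}[R_m] - R_f \right)

where R_i is the return on security i, R_m is the return on the market portfolio, and R_f is the riskless rate. Only beta, the exposure to market risk, is rewarded with higher expected return; everything else can supposedly be diversified away for free.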
Finally, the “Efficient Market
Hypothesis” (EMH) was introduced in 1970 by Eugene Fama in the form we’ve
come to recognize. Providing what some
call the capstone to modern portfolio theory, the efficient-market hypothesis
asserted that because the stock market is such a successful mechanism for
pricing securities it is difficult or impossible for an investor to achieve
returns above the market average in any consistent fashion. Prices reflect all relevant information, and
changes in securities prices are mostly unpredictable, so using judgment to
pick stocks may be ineffective in the long-run.
Fama named three versions of EMH
– weak, semi-strong, and strong. The
weak version holds that past prices don’t predict future prices, so technical
analysis (which is based on past trading information) is irrelevant. In the semi-strong version all publicly
available information (not just price information) is instantly discounted in
stock prices. And in the strong version,
even non-public information is discounted such that insiders wouldn’t be able
to profit consistently from trading around their knowledge.
The three theories -- EMH, CAPM,
and Portfolio Selection -- were classified together as “Modern Portfolio Theory”
(MPT). They were so compelling and useful that they came to provide the
backbone for much of modern financial economics and earned both Markowitz and
Sharpe Nobel prizes. They were
identified together, built on one another, and became joined in practical
expressions. And although they each got
there taking different avenues, their conclusions were much the same – buy the
market basket.
Practical Adaptation
The practical application of MPT was an enormous
success. The MPT thought process is now
so ingrained in our capital markets that the theories are taken for gospel and
their results viewed as “the truth”– whether allocating assets in a diversified
portfolio, making corporate finance decisions, developing a risk management
strategy, or valuing companies and securities such as mortgage derivatives or
just about any financial instrument.
Furthermore, by allowing market
participants to make assessments quickly and confidently as to the allocation
of capital, MPT has allowed the markets to become much deeper, more liquid, and
more efficient. Efficient capital
markets add to the value of the overall economy by allowing enterprises
(whether farmers, households, small businesses or large businesses) to attract
the right capital and capital structure and accept the kinds of risks for which
they are best suited -- while protecting themselves from risks they don’t wish
to take. In short, these theories of
financial economics play a key role in providing a fundamental framework for
our capital markets.
The mutual fund industry eagerly
adopted these quantitative methodologies.
At the end of 1975, John Bogle launched the First Index Investment Trust
(later renamed Vanguard 500), the first stock index fund for individual
investors, which is now one of the largest mutual funds in the world. And despite the fact that the majority of
financial economic work implicitly, and in some cases explicitly, questioned
the value of active fund management, active fund companies and their portfolio
managers came to embrace many of MPT’s key concepts.
While diversification has always been a selling point for
actively managed mutual funds, the average number of holdings in a fund has
increased dramatically since MPT made the scene. The average number of stocks held in actively
managed funds is up roughly one hundred percent since 1980, according to data
from the Center for Research in Security Prices.[5] Some might call it “super-diversification”
while others apply the label “overdiversification,” but the average fund
holdings had risen to approximately 140 positions by 2000.[6] The actual number of holdings in a given year
could easily surpass 200 because portfolio turnover exceeds 100 percent per
year on average.7
Funds have become more
quantitatively-driven in other respects as well. The industry has seen an explosion of “quant”
funds, many of which were founded on MPT’s core premises. Other actively managed funds come very close
to being index funds in an effort to find the “efficient frontier.” Some of these funds are known as enhanced
index funds. Some of them are “closet index
funds” – funds whose managers masquerade as “active” managers but hug an index
so tightly their returns will never stray far from it. A recent study showed that these “closet”
index funds have increased from one percent of assets under management in 1980
to more than twenty-seven percent in 2003[7]. Recently we’ve even seen active fund
complexes offering “active ETFs” so that they can cash in on the hottest
investment vehicle of the moment.
Not only have funds become more
index-like, the methods for measuring portfolios and managing fund complexes
have also been adapted for a quantitatively driven industry. Active portfolio returns are benchmarked
against indexes. Portfolio managers are
often measured and compensated based on their beta-adjusted results. In addition, managers who oversee fund
complexes typically use top-down statistical measures to monitor portfolio
managers since doing so on a position-by-position basis has become impractical. They may take into account beta, tracking
error (how much performance varies vs. a particular index), alpha,
risk-adjusted returns, and value-at-risk, to mention a few.
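To make these measures concrete, here is an illustrative sketch, with made-up numbers rather than data from any particular fund, of how beta, alpha, and tracking error are typically computed from a history of periodic returns:

    import numpy as np

    fund = np.array([0.021, -0.013, 0.034, 0.008, -0.025, 0.017])   # fund returns per period
    index = np.array([0.018, -0.010, 0.029, 0.010, -0.030, 0.015])  # benchmark returns per period
    rf = 0.001                                                       # per-period risk-free rate

    # Beta: covariance of the fund with the index, scaled by the index's variance
    beta = np.cov(fund, index)[0, 1] / np.var(index, ddof=1)

    # Alpha (CAPM-style): average excess return not explained by beta
    alpha = (fund.mean() - rf) - beta * (index.mean() - rf)

    # Tracking error: volatility of the difference between fund and index returns
    tracking_error = np.std(fund - index, ddof=1)

    print(beta, alpha, tracking_error)

Note that every one of these numbers is historical, benchmark-relative, and treats volatility as the definition of risk, which is exactly the set of assumptions questioned below.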
So, What’s the Problem?
By marrying itself to
quantitative theories, the actively managed equity fund industry has warped
itself into something that closely resembles what it ought to be fighting
against – the efficient, passive index fund.
In so doing it has doomed itself to an inescapably unfavorable
comparison with these highly efficient index funds by minimizing the role of
the “active” manager. Investors in
actively managed funds suffer – they receive quasi-active management at full
active management prices.
Not only was it a strategic error
to minimize the comparative advantage afforded through true “active”
management, but it turns out that these quantitative theories weren’t worth
marrying in the first place. Here we
make a distinction. By quantitative
theories we mean Portfolio Selection and CAPM but not the Efficient Market
Hypothesis. EMH isn’t problematic as it
doesn’t attempt to define relationships in capital markets through quantitative
equations, as do the other two theories.
At a certain level, EMH makes common sense and is validated by empirical
data. Although it has been disputed
constantly since it was first introduced, this controversy is probably
overstated. The attention is driven by
those at the extremes – those who believe markets are perfectly efficient at
all times and those at the other end of the spectrum who think the idea of
efficient markets is ridiculous. In
reality, there is an abundance of evidence that markets are less than perfectly
efficient, yet most practitioners and academics find that exploiting these
inefficiencies is, at minimum, very difficult.
It is not easy to consistently outperform the market, but talented
managers can and empirical data supports this fact as we will see later.
Portfolio Selection and CAPM are
at the heart of the controversy. Both
represent brilliant theoretical work accompanied by sound mathematical proof
and practical formulas. However, they
were science experiments. They worked
well in a laboratory where the environment around them could be perfectly
controlled, but when put into practice the theories’ underlying assumptions and
logic didn’t translate.
One of the most basic, pervasive,
and troubling issues with quantitative finance is that it relies so deeply on
the idea that risk is embodied in variance from the mean, or some derivative of
that measure. When Harry Markowitz first
theorized that there was a tradeoff between returns and variance he didn’t
directly associate variance with risk, but noted instead that, in financial
writings, if risk were replaced by variance of return, “little change of
apparent meaning would result.”[8] Amazingly enough, there’s not much empirical “proof” as to why we should
use variance as a measure of risk, yet it plays a critical role in almost all
large financial transactions. It seems
that academicians needed a way to quantify risk to fit mathematical models and
they grabbed variance, not because it described risk very well, but because it
was the best quantitative option available.
But just because it is convenient, and it carries a certain intuitive
appeal, doesn’t make it right.
Risk is a complex notion. We’ve been studying it for centuries. Whole books have been devoted to the subject,
yet it’s still difficult to define precisely.
While not many people would dispute Markowitz’s premise that we demand
higher returns for riskier assets, the idea that assets whose prices have
varied significantly warrant higher expected returns doesn’t hold up in
empirical tests. In other words, there’s
more to risk than variance alone.
Risk is often in the eye of the
beholder. While “quants” (who rely
heavily on MPT) might view a stock that has fallen in value by 50 percent over
a short period of time as quite risky (i.e. it has a high beta), others might
view the investment as extremely safe, offering an almost guaranteed
return. Perhaps the stock trades well
below the cash on its books and the company is likely to generate cash going
forward. This latter group of investors
might even view volatility as a positive; not something that they need to be
paid more to accept. On the other hand,
a stock that has climbed slowly and steadily for years and accordingly has a
relatively low beta might sell at an astronomical multiple to revenue or
earnings. A risk-averse, beta-focused
investor is happy to add the stock to his diversified portfolio, while
demanding relatively small expected upside, because of the stock’s consistent
track record and low volatility. But a
fundamentally-inclined investor might consider the stock a high risk
investment, even in a diversified portfolio, due to its valuation. There’s a tradeoff between risk and return,
but volatility and return shouldn’t necessarily have this same relationship.
Another issue with the use of
variance in practice is that it is backward-looking, coming from historical
samples of returns. So a key question
is, “can we rely on measures from the past to see the risks of the
future?” Think about how you make
decisions that involve risk in your everyday life. You probably rely fairly heavily on past
experience. But is that all you rely on
or do you factor in current circumstances?
The fact is that no two decisions are ever precisely the same because
the world is not static and circumstances change, even if the change is
difficult to detect. Often circumstances
have changed sufficiently since you were last faced with a like decision that
you think the probabilities of different outcomes have changed as well. When we use historic volatility as the sole
measure of risk (or for that matter, any historic quantitative measure) we’re
relying 100 percent on the past to predict the future. But volatilities are volatile (sorry) and
historic volatility has proven unreliable at predicting future volatility.
Constantly changing volatilities
create a great practical difficulty.
Over what time frame should we measure historic variance? Is it one year, ninety days, nine days, or
ten minutes? We’ll get different values
for each (often dramatically different) and because models are highly sensitive
to this value, the output will vary considerably depending on which time period
we use. Another troubling assumption
that must hold to make Markowitz’s theory valid is that asset returns abide by
the rules of stable normal distributions – otherwise the math behind the
theories won’t hold up. In reality,
return distributions are frequently neither stable (meaning they change over
time) nor normal (for instance, they may be nonsymmetrical or wider than a
normal distribution), which means formulas derived from Portfolio Selection
generate highly unreliable results.
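A small simulation illustrates the point. The returns below are synthetic, not market data, but they show how strongly the estimate depends on the lookback window whenever the return distribution is not stable:

    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic daily returns: a long calm stretch followed by a turbulent one,
    # a stand-in for the regime changes real markets exhibit
    returns = np.concatenate([
        rng.normal(0, 0.008, 500),   # calmer regime
        rng.normal(0, 0.025, 60),    # recent turbulent stretch
    ])

    for window in (10, 30, 90, 250):
        recent = returns[-window:]
        annualized = recent.std(ddof=1) * np.sqrt(252)   # annualize the daily volatility
        print(f"{window:>3}-day window: annualized volatility of roughly {annualized:.0%}")

Each window gives a materially different answer, and any model fed one of these numbers inherits that arbitrariness.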
Even though the assumptions
behind Portfolio Theory are often out of touch with reality, the model may
still be useful if it produces valid results.
Unfortunately, it doesn’t.
Numerous empirical studies have shown that taking on more risk (as
represented by volatility) doesn’t reliably deliver additional reward.[9][10] So, the quantitative cooks continue to tinker
with recipes to fit variables to an equation that can make sense of financial
markets. New multi-variable regression
models are introduced to describe alternative factors that influence returns
most, but these efforts amount to data mining.
Just because these new and improved formulas generate more respectable
correlations doesn’t mean there’s a causal link between their variables and the
returns they predict – as such, observed relationships can be fleeting. While the multi-variable models can solve
some of the problems in certain instances, these reworked formulas still suffer
from many obstacles to successful, practical implementation.
It’s an uncomfortable fact for financial
economists, but returns and return expectations are influenced in a highly
dynamic fashion by many variables which largely defy quantification. Why should we believe we can build formulas
that capture the behavior of management, employees, and customers of a
business, as well as investors? Even if there were a magical formula we could
use to describe human behaviors, it would likely change from asset to asset and
over time.
While Markowitz’s theory has
serious issues when applied to real life, Sharpe’s CAPM is in even worse
shape. CAPM is built on the back of
Markowitz’s theory so it starts with all of the baggage and incorrect
assumptions and then adds more. Some of
the doozies include an assumption that all investors could borrow and lend at
the riskless rate and an assumption that investors all have identical views of
expected correlations, returns, and risks.
That these quantitative financial
models don’t work in practice isn’t controversial. The theories have been losing the battle in
scholarly articles for the last three decades.
Even many of the influential researchers behind modern portfolio theory
admit to their shortcomings. Markowitz
is quoted as describing his book on Portfolio Theory as “really a closed
logical piece” – i.e., something that only works in the lab.[11] Eugene Fama called CAPM “atrocious as an
empirical model” and said “CAPM’s empirical problems probably invalidate its
use in applications” (Fama & French,
2004).[12] Even the ardent supporter of EMH, Paul
Samuelson, noted "... few not-very-significant apparent exceptions"
to micro-efficient markets, and admitted the existence of some exceptionally
talented people who can probably garner superior risk-corrected returns.[13]
The real controversy is that,
even though its chief architects admit the quantitative theories are ill-suited
for practical use, and empirical data confirms it, they are still
embraced (indeed, some might say “worshipped”) by operators in our capital
markets, and heavily relied on to make important financial decisions. The theories have become so deeply ingrained
in our financial system that we can’t see their folly. Their mathematics, as well as the precise
nature of their output, gives us a sense of comfort which is critical in
deploying large sums of money. They also
lead to a misallocation of resources, however, causing giant distortions.
Diversification and Quantitative Finance
Equity markets are where the
blood, sweat, tears, and raw emotion of human enterprise meet the hard facts of
price realization. Like a cold front
passing through on a humid August afternoon, it’s a transition often full of
energy and surprise. Providing theories
that translated to precise formulas, quantitative finance promised to take some
of the emotion out of this transition - to quantify it, to provide the “right”
answers, to at least make the hail storms more predictable. The quantitative certainty appealed to us and
we latched on. Even though we understand
that the forecasts the theories provide aren’t right much of the time, we keep
listening to them because doing so is comforting.
Diversification is a case in
point. Prior to MPT our take on
diversification was rudimentary. In
fact, it probably hadn’t changed much since early humans hid their food stores
in numerous places to avoid total loss from scavengers. “Don’t put all of your eggs in one basket” was
our modus operandi. Markowitz and Sharpe put meat around the
bones of this naïve view, delivering a formula that helped us quantify the
benefits of adding baskets, while describing the most promising
arrangements.
While we had always found
diversification appealing, MPT ignited an all-out love affair with the
concept. Not only did diversifying feel
“safe” we now knew it was “smart” because its benefits had been quantified and
real mathematical proofs supplied. Professional
money managers who applied MPT in practice built large, volatility-minimizing
portfolios to gain the efficient frontier, and a conventional wisdom took hold
that the more diversified a portfolio the better. Like baseball and apple pie,
super-diversification became universally accepted. That the concept underlying this aggressive
diversification didn’t work wasn’t a point of discussion.
Part of what solidified the push
for more aggressive diversification was the strategy’s warm embrace from the
financial services community at large.
Diversification has become the Holy Grail for financial advisors and
planners who preach its virtues with an unquestioning, cult-like
enthusiasm. Almost every piece of
marketing literature generated by these outlets extols its benefits. The mutual fund industry played an important
role, too. The idea that investors could
dine from Markowitz’s risk-minimizing, free lunch buffet by diversifying their
portfolios was music to the fund companies’ ears. After all, a key selling point for mutual
funds was their ability to offer investors an inexpensive way to diversify
holdings while letting a professional manager invest their money. The concept fed on itself -- as more assets
poured into the funds the portfolios needed to become more diversified due to
liquidity constraints. It appeared to be
a win-win. The larger a fund became, the
more diversified, and not only were investors happy but so were the fund
companies. But fund companies and
planners had everything to gain from pushing diversification: it made them more
vital to their clients and more profitable.
The concept was not only difficult for the average investor to implement
and understand, but it also gave an implicit “okay” to super-large,
super-diversified, super-profitable funds.
Though the fund companies
undoubtedly win by managing more assets, are investors in active funds best
served owning highly diversified portfolios?
The appeal to diversification, according to quantitative finance, is the
idea that it allows us to enjoy the average of all the returns from the assets
in a portfolio, while lowering our risk to a level below the average of the
combined volatilities. But since we
can’t call volatility risk and we can’t reliably predict volatilities or
correlations, then how can we compile diversified portfolios and claim they are
on some sort of efficient frontier?
These super-diversified portfolios may be inefficient -- it may be
possible to earn higher rates of return with less risk. It may be that by combining a group of
securities hand-selected for their limited downside and high potential return,
the skilled active manager with a relatively concentrated portfolio has greater
potential to offer lower risk and higher returns than a fully diversified
portfolio.
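The promise rests on a simple identity. For a two-asset portfolio the standard result is

    \sigma_p^2 = w_1^2\sigma_1^2 + w_2^2\sigma_2^2 + 2\,w_1 w_2\,\rho_{12}\,\sigma_1\sigma_2

so whenever the correlation \rho_{12} is less than one, the portfolio’s volatility \sigma_p comes in below the weighted average of the individual volatilities. The free lunch, however, is only as good as the forward-looking estimates of \sigma_1, \sigma_2, and \rho_{12} that go into it, and those are precisely the quantities we have argued cannot be predicted reliably.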
Not only are we unlikely to find
an “efficient frontier” by super-diversifying an actively managed portfolio,
but diversification adds a cost that is rarely acknowledged. A fund manager’s job is to identify assets
that are priced “inefficiently,” where the market has ostensibly made an error
and a stock is available at a level that allows for relatively little risk
versus expected return. But finding
inefficiencies and maintaining a portfolio is difficult work and requires
resources (a manager’s time and brain power, among the most important of
these). Resources are not unlimited.
Therefore, the amount of resources devoted to each specific investment
varies inversely with the number of investments owned in the portfolio. The more positions added to the portfolio,
the less likely a manager is to capture these difficult-to-find inefficiencies
because he/she has less time and other resources available to do so.
Over-diversification not only
decreases a manager’s ability to find inefficiencies but it may, in fact,
increase risk. Warren Buffett expressed
the idea more eloquently, “We believe that a policy of portfolio concentration
may well decrease risk if it raises, as it should, both the intensity
with which an investor thinks about a business and the comfort-level he must
feel with its economic characteristics before buying into it.”[14] Over-diversification inhibits a manager’s
ability to understand the risks taken with each security, potentially creating
greater risk. This argument turns CAPM
on its head. A highly diversified,
active manager cannot fully understand the risks he is taking on his positions
so he may be paying too much for them, thus operating below the efficient
frontier. The concentrated manager, by contrast,
is able to pick securities with an intimate understanding of their risk, which
helps him uncover assets whose prospective return more than compensates for the
risk taken. The concentrated manager
aims to buy assets that are beyond
the efficient frontier.
Diversification is a helpful
tool, but it should only be employed to the point where its costs equal its
benefits. Adding positions beyond that
point is watering down a portfolio – the benefits are minimal, but the costs
detract from a manager’s ability to add value.
The average actively managed equity mutual fund today is diversifying
far beyond the point where costs begin to exceed benefits. These funds cease to be actively managed in
the traditional sense but their active management fees and other expenses
continue to be real. Thus, passive funds
with their low fees and turnover, easily outperform the average actively
managed fund.
The individual investor can
achieve greater success spreading money among talented managers who have each
limited diversification to the point where its costs are equal to its benefits. An individual investor’s tolerance for risk
can be expressed by choice of manager as some concentrated funds are run
conservatively while others accept more risk.
Is a relatively concentrated
strategy really more risky for the investor?
There’s no doubt that the concentrated portfolio will exhibit more
volatility on average than a highly diversified one, but as discussed earlier,
volatility isn’t a very useful descriptor of risk (all bets are off if you’re
talking about short-term money). Without
an accurate way to quantify risk we can’t make the generalization. But just because we don’t have a good
top-down, historically-based mechanism for understanding risk doesn’t mean that
we can’t tell how risky an asset is. If
we think that risk is roughly equivalent to the probability of losing money on
an investment, then perhaps we should ask, “are you more likely to lose money
owning a concentrated portfolio or a highly diversified portfolio?” The common sense answer is that it depends on
what’s in each portfolio! Perhaps then
the risk in a portfolio is better described by taking a bottom-up view of the
fundamentals of the businesses owned, and how those fundamentals manifest
themselves in stock prices, rather than computing the portfolio’s historic variability
with respect to the market?
Running a concentrated portfolio
means taking stock-specific risk but that doesn’t necessarily mean taking more
overall risk than a diversified portfolio.
Finding appropriate stock specific risk is how active managers (should)
make their living. If a manager is
successful in finding these inefficiencies, he is more than compensated to take
those risks. Thus taking stock specific
risk should be a net positive for a talented active manager. This is reflected in the empirical data as we
will see later.
Practical Implications
So how much is the correct amount
of diversification?[15] Unfortunately there isn’t a “right”
answer. The correct amount of
diversification will vary from manager to manager depending on style and
resources available, among other factors.
More important than pegging an optimal absolute number is making the
conceptual leap from thinking that unbounded diversification is good, to
understanding that diversification carries costs for fundamental active
managers and acknowledging that diversification’s benefits, in terms of
risk-minimization, are not fully understood.
“Diversification is a protection against ignorance. It makes very little sense for those who know
what they’re doing” sums up Warren Buffett.16 When managers adopt a framework whereby each
position added carries an important cost that can dilute the value of their
work, and come to accept that taking stock-specific risk in less than perfectly
efficient markets can be a net positive, concentration will increase. The degree of concentration in a fund should
reflect the confidence a manager has in the inefficiencies found, and the
weight of those investments should reflect the probability of success as well
as the level of asymmetry present in the prospective return profiles of the
assets.
Those who crave more concrete
numbers can look at empirical work built around the MPT framework. Research has shown that much stock specific
risk (non-market related volatility) can be eliminated by owning portfolios of
relatively few stocks. Some say as few
as ten, others say 60. Yet these studies
often assume randomly chosen portfolios, while most portfolio managers pick and
choose stocks in a manner that attempts to limit volatility— thus the actual
number of stocks required to get most of the volatility-lowering benefits of
diversification may be lower. We should
also question the assumption that reducing volatility is paramount, as it
throws us off the more appropriate fundamental scent of risk and return.
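For what it is worth, the textbook decomposition behind those studies says that for an equally weighted portfolio of n stocks

    \sigma_p^2 = \frac{1}{n}\,\overline{\sigma^2} + \left(1 - \frac{1}{n}\right)\overline{\operatorname{cov}}

where \overline{\sigma^2} is the average individual variance and \overline{\operatorname{cov}} is the average pairwise covariance. The stock-specific term shrinks like 1/n, which is why most of the measurable benefit arrives within the first few dozen holdings and the 101st position adds almost nothing.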
The idea of limiting
diversification is an uncomfortable one for the mutual fund industry, but
coming to terms with it is necessary to end share losses to passive
strategies. The push for diversification
has attracted a mountain of assets to active strategies and created individual
funds that are mind-bogglingly large.
Fund companies certainly won’t shut down these giant, over-diversified
funds; they make too much money operating them.
You, the investor, however, can pull your assets from these super-sized
mutual funds and reallocate them to smaller, more concentrated portfolios.
Shifting emphasis to small,
concentrated portfolios would hurt fund companies in the short-run. Economies of scale would be reduced so the
funds would be less profitable for the fund complexes and slightly more
expensive for individual investors (fund complexes have taken most of the
benefits from economies of scale). But
it would give fund managers the opportunity to better display their
talent. Instead of having a few large
portfolios, fund complexes could have many small portfolios. While additional choices might create more
confusion for individual investors, the rewards for finding a talented manager
could be significant. Investors
could still diversify by owning a portfolio of concentrated portfolios but they
would want to adhere to the same rules as far as paying attention to the costs
of diversifying when they chose managers.
The Triumph of Judgment?
Ironically, it turns out that
Samuelson’s claim that “it is virtually impossible for academic researchers …
to identify any member of the subset with flair” was too weak; it should have
been re-worded “totally impossible.”[16] His Challenge to Judgment was flawed from
the start. Measuring persistent
“risk-corrected” returns is akin to measuring all the love in the world. We simply don’t have a yardstick. If we don’t have a reliable measurement for
risk, how can we measure performance in any relative sense? Performance must always be adjusted for risk
bias if we assume that there’s a tradeoff between the two. Portfolios which contain dramatically less
risk than an index should return less than the index, on average. However, we can’t measure or correct for this
disparity in a reliable manner.
Likewise, measuring a fund’s ability to persistently perform versus an
index is futile because this is also a relative measure and needs to be
adjusted for changes in risk over time.
But let’s put aside this thorny
issue for a bit and look at the empirical studies. You may be asking, “why are we looking for
persistence in actively managed returns in the first place? Don’t we just want to know if active managers
in general outperform the market?”
Because there are so many funds managing so many assets, it’s
mathematically impossible for the group to perform in a manner much different
than the market as a whole. So we don’t
expect to see outperformance from active managers as a class. Persistent returns, however, whether above or
below the market, are a marker for active talent or lack thereof. If the market is completely efficient then
returns will be random and we wouldn’t see managers consistently outperforming
or underperforming the market over time.
Remarkably, the evidence shows
plenty of support for the notion that persistence exists (and also support for
the opposite). This is remarkable
because the data is heavily biased in favor of the quantitative non-believers
(though the academic community would often lead us to believe the
opposite). As we’ve already noted, the
sample set of “active” managers is heavily influenced by funds that are not
really active. In a 2007 study, K.J.
Martijn Cremers and Antti Petajisto found evidence that as of 2003, close to
thirty percent of assets under “active” management were in fact managed by
closet indexers. In addition, it seems
very reasonable to assume that, of the remaining 70%, significant portions are
not the stock picking fundamental managers Samuelson challenged. Rather, they are often top-down,
quantitatively driven, efficient frontier-hunting managers who might be
mistaken for machines. These funds make
bets and have some tracking error so in that sense they’re active, but their
inclusion in the data only serves to muddy the impact of true “fundamental”
active managers.
Researchers often make a big deal
about “survivorship bias” - the idea that only the better active funds survive
so the databases are skewed in a positive fashion. This is corrected by including all of the
poor returns from “dead” funds in the data.
However, no one corrects for the fact that as a fund has success and
grows larger, it will naturally migrate toward average returns while becoming more
diversified. The more diversified, the
harder it is to show outsized performance or persistence.
Finally, sometimes studies find
persistence but then dismiss it as statistically insignificant and declare that
there’s a lack of evidence. Achieving
statistical significance, however, is an especially high hurdle when the sample
sets are not very large and the data are skewed by funds that are not truly
“active.” Thus, it’s not surprising that we wouldn’t see strong evidence for
persistence.
While the empirical data on
persistence is mildly supportive of the argument for active management, we
still question it because of its inherent biases. Notably, however, there is a growing body of
research that shows strong persistence in funds that are not highly
diversified. This is noteworthy, of
course, because these are precisely the funds we would expect to display
persistence if managers are capable of adding value. These are the active managers who make a
living on their judgment; their track records represent the true test of
Samuelson’s Challenge.
Multiple studies indicate that
funds which are more actively managed, or more concentrated, outperform indexes
and do so with persistence (Kacperczyk, Sialm, and Zheng (2005); Cohen, Polk,
and Silli (2010); Baks, Busse, and Green (2006); Wermers (2003); Brands,
Brown, and Gallagher (2003); and Cremers and Petajisto (2007)).
Funds with the highest Active Share [most active
management] outperform their benchmarks both before and after expenses, while
funds with the lowest Active Share underperform after expenses …. The best
performers are concentrated stock pickers ….We also find strong evidence for
performance persistence for the funds with the highest Active Share, even after
controlling for momentum. From an
investor’s point of view, funds with the highest Active Share, smallest assets,
and best one-year performance seem very attractive, outperforming their
benchmarks by 6.5% per year net of fees and expenses. [17]
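For context (our gloss, not part of the quoted passage), Cremers and Petajisto define Active Share as half the sum of the absolute differences between a fund’s portfolio weights and its benchmark’s:

    \text{Active Share} = \frac{1}{2}\sum_i \left| w_{\text{fund},i} - w_{\text{index},i} \right|

A pure index fund scores near zero, a portfolio with no overlap with its benchmark scores 100 percent, and closet indexers sit toward the bottom of the range.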
We need to acknowledge that,
because we can’t measure risk, these studies, like any empirical work, need to
be taken with a grain of salt. It is
nonetheless interesting that if we compare the studies that focus on teasing
apart the influence of more active, concentrated management, to the broad
all-inclusive studies, there’s a large change in the signal received.
Not Quite a Triumph, but an Opportunity
It is ironic that while the
financial services industry and investors have spent the last thirty years
rushing after the quantitatively inspired ideas of Samuelson, Markowitz, Sharpe,
Fama, and others, academics, on balance, have been running in the opposite
direction -- relentlessly throwing into doubt the underlying tenets of
quantitative finance. Moreover,
Samuelson’s flawed Challenge was met and returned with the brute force of
empirical results from the practitioners most reliant on judgment
--concentrated active fund managers.
Judgment pulled off the big upset, rallied from behind to win the game,
but almost no one has taken notice.
The triumph of judgment has been
overlooked by most investors due to the confusion created by MPT. We’ve allowed one of the players in the match
(MPT) to dictate the rules of the game.
MPT’s rules not only ensure its own victory, but they also create a
complex web of circular arguments that inhibit our ability to discern the
truth. MPT’s characterization of risk is
the ruler we use for comparing active vs. passive strategies, often causing
active strategies to appear more risky and less efficient than their index
counterparts. The same flawed logic is
used to risk-adjust returns, biasing them downward for more active,
concentrated managers, and rendering this highly important measure highly
suspect. Furthermore, reliance on MPT’s
measure of risk pressures active managers to super-diversify. The average active fund is thus disfigured to
the point where the typical “active” manager is not very active at all, casting
the fund in an unfavorable light in a beauty contest versus super-efficient
index funds.
Even when investors recognize judgment’s triumph, we should
not hold our breath for an industry-wide demotion of quantitative theory and
practice. MPT is too deeply woven into
the fabric of our financial system. It is
difficult for investors to see past the very real performance numbers that show
the average active manager
underperforms corresponding passive index funds. There’s a feeling of safety that accompanies
index investing; neither the advisor nor the investor risks losing face or
losing a job over putting money to work in a broad index.
We enjoy the mathematical
certainty of MPT; it’s reassuring that we can fix a value to assets, and that
we can quantify risk in a non-subjective manner – free from human error. Further, moving to judgment-based finance
isn’t good business for the financial services industry. The industry depends on quantitative finance
to bring it scale and profitability.
None of the above are valid arguments, but that’s not the point. When defending an entrenched system that
furthers the economic interests of powerful entities, the rationale doesn’t
need to be sound, it just has to be somewhat convincing.
The fact of the matter is that
quantitative finance is close enough to being “right” most of the time, so we
put up with it. History repeats itself
often enough that the patterns from the past lull us into the belief that they
can reliably predict the future. Yet
our investing lives, much like our non-investing lives, are defined at the
extremes. Just as we may get the best
measure of ourselves in the worst of times, so too a devastatingly ugly market
painfully exposes which investment strategies are not meant to last. The issue with statistical finance is that
while it may adequately predict the future the majority of the time, it is at
the extremes, when we most need an accurate picture of the future, that
historic relationships break down and statistical methodologies fail us. While some argue that a system which works 99
percent of the time is good enough, these are the same people who would sell
you a burglar alarm that works perfectly well until a would-be-criminal
approaches your home. What good is a
system that breaks down only when you most need it? See the financial crisis of 2008 or Long-Term
Capital Management for a compelling answer.
What would a return to judgment
mean on an everyday, practical level? Of
course it would entail relying a lot less on top-down statistical measures and
methodologies. It wouldn’t mean,
however, we forsake the use of all quantitative theories, mathematics, and statistics,
and instead rely solely on gut instincts.
Rather, it would involve taking a bottom-up, fundamental approach to
understanding the nature of the entity behind a financial instrument. Taking a holistic view of that entity –
understanding basics such as its balance sheet and off-balance sheet
arrangements, competitive positioning, growth strategy, and its principals,
among other things – is key. That
fundamental analysis is crucial in projecting the entity’s cashflow and in
arriving at an appropriate assessment of risk, which helps determine intrinsic
value. The result isn’t precise, but the
point is that it’s far better to be approximately right than precisely
wrong. Quantitative measures like
risk-adjusted returns and value-at-risk may still play a role, but it is
greatly diminished (as is the automated system for processing and investing
that MPT fostered.) In a judgment-based
world, financial services firms understand that scaling and automating their businesses
will maximize profits in the short term, but can be disastrous over the
long-term.
Where does all of this leave you,
the investor? A system that is
increasingly dominated by mechanistic, top-down focused,
quantitatively-oriented investors creates an exciting opportunity for informed
individual investors with a fundamental bent.
Never before has so much money sloshed around our capital markets
without the benefit of judgment.
If everyone eschews judgment, who will make market prices
even approximately right, or ferret out the offerings of thieves and promoters
of worthless securities? Paradoxically,
the efficiency of securities markets is a public good that can be destroyed by
the unqualified faith of its believers.[18]
We’ll never get to a point where
all investors eschew judgment - there are too many individuals ready to apply
common sense to make a profit. However,
as more money flows from truly active managers to investment vehicles that
deploy money “blindly,” inefficiencies become more prevalent creating
opportunities for those whose eyes are open to them.
Do your homework and uncover
mispriced assets on your own or look for concentrated, fundamentally-driven,
relatively small funds with talented managers. Since persistence has been
demonstrated in this subset, it turns out that a good manager may be identified
from past performance, among other considerations. Find managers who fit your style and risk
tolerance, and invest for long term returns.
Take advantage of the fact that your neighbors are leaving for passive
funds, as their passive investments could provide the inefficiency your manager
seeks to exploit. But, by all means,
avoid investing in highly diversified active funds whose returns closely match
an index. If index returns are what you
seek, then pull your money and invest in efficient passive index funds or ETFs.
Most of the wealth in the world
has resulted from individual entrepreneurs using their judgment to invest in
opportunities (inefficiencies) in a highly concentrated, even exclusive,
fashion. Think about that for a moment,
because it’s a big statement. Sure,
wealth has been lost using this formula, but the good has dramatically
outweighed the bad. Although far from
perfect, human judgment has advanced us a very long way. While public markets are much more efficient
than the entrepreneurs’ private markets, they still contain
inefficiencies. Accordingly, good judgment
will reward investors over time.
Demoting a time-tested, highly successful system built on judgment in
favor of one supported by an unsound infrastructure of quantitative
theories and formulas doesn’t make a lot of sense. Make this flaw your opportunity.
[1]
Paul A. Samuelson, “Challenge to Judgment,” The
Journal of Portfolio Management, Fall 1974, Vol. 1, No. 1: p 18.
[2]
Kevin McDevitt, “Equity Outflows Continue, But Perspective is Needed,” Morningstar
Direct Funds Flows Update, September, 2010, P. 2,
http://corporate.morningstar.com/augflows10/augflows10.pdf
[3]
Ibid.
[4]
Also developed independently by John Lintner (1965) and Jan Mossin (1966).
[5]
Joshua
M. Pollet and Mungo Wilson, “How Does Size Affect Mutual Fund Behavior?” Journal of Finance, VOL. LXIII, NO.
6, December 2008, p. 2948
[6]
Investopedia, “Turnover Ratios Weak Indicator of Fund
Quality”, William Harding from Morningstar quoted,
http://www.investopedia.com/articles/mutualfund/09/mutual-fund-turnover-rate.asp
[7] Martijn Cremers, Antti Petajisto,
“How Active Is Your Fund Manager? A New Measure That Predicts Performance”,
January 15, 2007, Yale School of Management, Figure 4, p. 38.
[8]
Harry Markowitz, “Portfolio Selection,” 1952,
The Journal of Finance, Vol. 7, No. 1. (Mar., 1952), p. 89.
[9]
Eugene Fama and Kenneth French, “The Capital Asset Pricing Model: Theory and Evidence,” The Journal of Economic Perspectives, Vol. 18, No. 3 (Summer 2004), pp. 25-46.
[10]
[11]
Welles, 1971, 25, cited in “A Call For Judgment,” Amar Bhide, 2010, p. 122.
[12]
Fama and French, 2004.
[13]
Robert C. Merton, “Paul Samuelson and Financial Economics,” American Economist, 2006.
[14]
Warren Buffett, Berkshire Hathaway Chairman’s Letter to Shareholders, 1993.
[15]
For this article we use the term “diversification” loosely to describe the
number of positions in a portfolio.
There are many alternate methods for calculating a portfolio’s
diversification, but changing our definition wouldn’t change our conclusions.
16
Berkshire Hathaway annual meeting, 1996.
[16]
Samuelson, 19.
[17]
Martijn
Cremers, Antti Petajisto, “How Active Is Your Fund Manager? A New Measure That
Predicts Performance”, Yale School of
Management, January 15, 2007, various pages.
[18]
Bhide, p. 116