They Can’t All Be That Smart
A Due Diligence Framework for Factor Strategies

Smart Beta is a label applied broadly to all factor-based investment strategies. In a recent WSJ article on Smart Beta, Yves Choueifaty, the CEO of Tobam, said “There’s a huge range of possibilities in the smart-beta world, and they can’t all be that smart.” This paper maps the factor-investing landscape, provides a framework for analyzing the edge of various approaches, and lets you decide which factor-based strategy is worth your money.

Analysis of a factor-investing strategy should focus on two of the manager’s skills: the ability to identify specific factors that reliably generate outperformance, and the manager’s technique in constructing a portfolio of stocks with those factors. Factors are not commodities, and one should know how managers are selecting stocks, but here we focus on portfolio construction and the soundness of different approaches.

Active share can be a useful tool in this investigation. Active share by itself is not a metric that inherently identifies manager skill, nor is it the best metric for determining the risk of the portfolio versus a benchmark: tracking error gives a more comprehensive measure of the trailing differences in portfolio returns, and the information ratio shows the balance of how much active risk you are taking for your active return. But active share is a very useful tool for investigating the choices managers make in building factor portfolios.
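As a concrete illustration of these three metrics, here is a minimal Python sketch with toy data and hypothetical function names (not any vendor’s implementation): active share is half the sum of absolute weight differences versus the benchmark, tracking error is the annualized standard deviation of monthly active returns, and the information ratio divides annualized active return by tracking error.

```python
def active_share(port, bench):
    # Active share: half the sum of absolute weight differences vs. the benchmark.
    names = set(port) | set(bench)
    return 0.5 * sum(abs(port.get(n, 0.0) - bench.get(n, 0.0)) for n in names)

def tracking_error(port_rets, bench_rets):
    # Annualized standard deviation of monthly active returns (12 periods/year assumed).
    diffs = [p - b for p, b in zip(port_rets, bench_rets)]
    mean = sum(diffs) / len(diffs)
    var = sum((d - mean) ** 2 for d in diffs) / (len(diffs) - 1)
    return (var ** 0.5) * (12 ** 0.5)

def information_ratio(port_rets, bench_rets):
    # Annualized active return divided by tracking error.
    diffs = [p - b for p, b in zip(port_rets, bench_rets)]
    active = (sum(diffs) / len(diffs)) * 12
    return active / tracking_error(port_rets, bench_rets)
```

A portfolio that holds 60/40 against a 50/50 two-stock benchmark, for example, has an active share of 10%.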

Through the lenses of active share, tracking error, and information ratios, we consider the relative merits of factor-based portfolio construction approaches: Fundamental Weighting, Smart Beta and Factor Alpha. Understanding the differences between these approaches will help you better incorporate factors into your overall portfolio.

Fundamental Weighting
Most benchmarks weight constituents by market capitalization. Some factor-investing approaches pivot away from weighting on market cap and instead weight on another fundamental factor, like sales or earnings. The argument for these strategies is that weighting by market cap is not the smartest investment solution out there: the top quintile of the S&P 500 by market cap underperforms the average stock by 0.65% annualized1, and market cap weighting allocates 65% of the benchmark to those largest names.

For a comparison of fundamental weighting schemes, the table below shows the characteristics and annualized returns for weighting on Market Cap, Sales, Earnings, Book Value of Equity and Dividends. There are some benefits to the approach, for example eliminating companies with negative earnings. On average, about 8.3% of Large Stocks companies are generating negative earnings2, and avoiding those is smart. The largest benefit is an implied value tilt to the strategy: over-weighting companies with strong earnings and average market caps creates an implicit Price/Earnings tilt. This is apparent in the characteristics table: Sales-weighting gives the cheapest Price/Sales ratio, Dividend-weighting gives the highest yield, etc.

But pivoting from market cap to a fundamental factor weighting scheme does not create large risk-return benefits. Raw fundamental factors correlate highly with market cap; companies with huge revenues tend to have large market caps. As of December 31st, 2016, weighting on Earnings has a 0.85 correlation with weighting on market capitalization3. In market cap weighting, the top 25 names are 34% of the portfolio. In an earnings-weighted scheme those same 25 companies are still 34% of the portfolio, just shifting weights a bit from one name to another.
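The mechanics of a fundamental weighting scheme can be sketched in a few lines. This is a simplified illustration with a hypothetical helper, not any index provider’s actual methodology: each stock is weighted by its share of the total fundamental value, and loss-makers are dropped.

```python
def fundamental_weights(metric):
    # Weight each stock by its share of the total fundamental value
    # (e.g. earnings), dropping companies where the metric is negative.
    positive = {name: v for name, v in metric.items() if v > 0}
    total = sum(positive.values())
    return {name: v / total for name, v in positive.items()}
```

With earnings of 50, 30, 20 and -10 across four names, the loss-maker is excluded and the rest receive 50%, 30% and 20% weights; because large earners tend to be large-cap names, the result stays highly correlated with cap weighting.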

Active share shows how little fundamental weighting moves the portfolios, with active shares in the 20-30% range. Excess returns range from slightly underperforming market cap to outperforming by +72bps. The modest excess return comes with much higher active risk, with tracking errors ranging from 4.5% to 5.8%. The result is poor information ratios, the ratio of active return to active risk.

Portfolio Weight for Market Cap Weighted vs. Earnings Weighted – December 31st, 2016


Characteristics and Annualized Returns by Weighting Scheme (U.S. Large Stocks, 1963-2016)

The risk-return benefits are small because Fundamental Weighting is an indirect allocation to a Value strategy. Value investing on ratios identifies investment opportunities by comparing a fundamental factor against the price you pay. Fundamental weighting takes only half of the strategy into account, looking for large earnings but ignoring the price you’re paying for them. Some Fundamental Weighted products will be more sophisticated than simply weighting on sales, earnings, book value or dividends. But weighting on fundamental factors instead of market cap doesn’t create a significant edge.

Risk-Focused versus Return-Focused
A post by Cliff Asness at AQR suggested that Smart Beta portfolios should be minimizing active share. Smart Beta portfolios are “only about getting exposure to the desired factor or factors while taking as little other exposures as possible.” This statement cemented the idea that there is a group of Smart Beta products that are risk-focused in nature: Start with the market portfolio, identify your skill and then take only the exposure on those factors.

In evaluating this portfolio construction technique, let’s suspend the idea that we’re all starting with unique factors and take a hypothetical example where the skill of all portfolio managers is a generic factor with only three states: Good, Neutral and Bad. Most of the stocks (80%) are Neutral and give a market return, with an equal amount of Good Stocks at alpha of +4%, and Bad Stocks underperforming at -4%. To establish some terminology: the strength of the signal is +4% alpha and the breadth of the signal is the top and bottom 10%.

Hypothetical Alpha: 10% Bad Stocks, 10% Good Stocks, 80% Neutral Stocks

In the risk-focused Smart Beta framework, you only deviate from the benchmark when you have strong conviction. In this case, start with the market, and then “sell” (do not own) the 10% of the market you’ve identified as Bad Stocks to “buy” (double down on) the 10% you’ve identified as Good Stocks. For the remaining 80% of the market, you have no edge, so match the market portfolio. The logic seems sound: you’ve maximized the usage of your skill within your risk-focused framework. Only change the stocks you have an opinion on, and if you have no opinion, leave the portfolio at market exposure.
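That construction logic can be sketched as follows. This is a toy illustration of the hypothetical three-state example (hypothetical function name, not a production process): zero out the Bad names, redistribute their benchmark weight pro rata across the Good names, and leave Neutral names at market weight.

```python
def smart_beta_weights(bench, good, bad):
    # Risk-focused construction: drop Bad names and reallocate their
    # benchmark weight pro rata to the Good names; Neutral names keep
    # their market weight.
    w = dict(bench)
    freed = sum(w.pop(n) for n in bad if n in w)
    good_total = sum(w[n] for n in good)
    for n in good:
        w[n] += freed * w[n] / good_total
    return w
```

With equal-sized Good and Bad groups, each Good name roughly doubles its benchmark weight, matching the “sell the Bad to double down on the Good” intuition.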

Risk-Focused Smart Beta

Another equally viable framework is to focus on returns first. Using the same example where there is a group of stocks with an excess return of +4% annualized, a return-focused manager would only own stocks from that group and then try to balance out the risks of the portfolio to match the market’s risk factors. Let’s call this the “Factor Alpha” approach, focusing on maximizing excess returns first and then controlling for risks.

Return-Focused Factor Alpha

Portfolio Construction in Practice
A sensitivity analysis based on a single factor can test how the breadth of signal affects the risk-return profile of either approach. The universe for this analysis is a modified Russell 1000. The market-cap weighting methodology of the Russell benchmark includes a long tail of mid-cap to small-cap names. To get around this, only the top 95% of names by market cap are included, trimming that long tail of small-cap companies. Portfolios are formed monthly with a 12-month holding period, with analysis on the combined portfolio.

The factor used in the analysis was Shareholder Yield, the net return of capital through dividend yield and buyback yield. The following chart shows the annualized returns for portfolios grouped into deciles by Shareholder Yield. There is significant outperformance from the best decile and underperformance from the worst decile. The relative performance narrows quickly, with declining utility in the second and third deciles. The returns of the fourth to seventh deciles demonstrate little edge, and these groups should be considered low-conviction.

Excess Return by Shareholder Yield Decile, Russell 1000 Constituents vs. Equally Weighted Universe 1968-2016

Using Shareholder Yield as our basic alpha signal, the analysis was run for both the Smart Beta approach and the Factor Alpha approach, using different cutoffs for the breadth of signal. The universes are the same, and the alpha signal is the same, but we scale how much confidence we place in the alpha signal. For the Smart Beta approach, we increase the active component of the portfolio and reduce the passive component in increments of 2.5%. To be specific, at 10%, we have trimmed the bottom decile and equally weighted the names in the top decile across the combined weight of both groups. For the Factor Alpha approach, we purchase only the names in the top x%, incrementally decreasing the concentration of the portfolio in increments of 2.5%. At 10%, we are only purchasing the equally weighted top decile, and no other constituents.
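The Factor Alpha side of this sensitivity analysis can be sketched with a hypothetical helper (equal weighting of the selected names, as in the text): own only the top x% of the universe by factor score.

```python
def factor_alpha_weights(scores, pct):
    # Return-focused construction: own only the top pct of the universe
    # by factor score, equally weighted; hold nothing else.
    k = max(1, round(len(scores) * pct))
    top = sorted(scores, key=scores.get, reverse=True)[:k]
    return {name: 1.0 / k for name in top}
```

Sweeping pct from 2.5% up toward 50% reproduces the broadening exercise described above: the portfolio starts highly concentrated in the strongest names and dilutes as lower-conviction names are added.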

The excess return and tracking error match our intuitive expectations: the Smart Beta approach starts with little excess return and little active risk, and both return and risk scale up the more active one gets. The Factor Alpha approach starts with high excess return and higher active risk, and both scale down the broader the portfolio gets. What’s interesting is that the Information Ratio, the balance of active return and risk, converges fairly quickly. To be fair, for the first few groupings, the Smart Beta approach is working from a very low tracking error, where a shift in excess return of just a few basis points has a significant impact on information ratio. But by the time the portfolio gets to the top decile, the information ratios from each approach converge. The Smart Beta and Factor Alpha approaches generate very competitive risk-return profiles, although the overall levels of active return and risk are higher in the Factor Alpha approach.

In both approaches, the Information Ratio then degrades the deeper you dig into your alpha signal. The reason for the degradation is the fading benefit of the alpha signal: for Shareholder Yield, the active return drops off more quickly than the active risk, degrading the risk-return profile of either approach. A key aspect of Modern Portfolio Theory is the Benefit of Diversification: the total risk of the portfolio is reduced by holding more securities. In factor investing, there is also a Benefit of Concentration: the total return of the portfolio is increased by holding securities with stronger factors. As you dig deeper into lower-conviction names in the active component of the portfolio, the edge from factor returns is diluted.

Excess Return, Tracking Error, Information Ratio and Active Share of Smart Beta vs. Factor Alpha
Sensitivity Analysis on Increasing Active Universe, Russell 1000 Universe, 1968-2016.  x-axis is the percentage of the Shareholder Yield used to form the portfolio with the Smart Beta or Factor Alpha approach.

To keep this example in context, this is a very basic case using one factor as the alpha signal. Active quantitative managers have a greater number of factors available to them than just Shareholder Yield and can boost their alpha signal beyond the single factor. But for large cap stocks, Shareholder Yield provides a reasonable expectation for alpha signals: the best and worst stocks by a factor will have the strongest outperformance and underperformance, but as the characteristics degrade the excess returns diminish. Alpha signals are just not as effective as the universe broadens. It is unlikely that a manager has discovered the perfect investment signal, one separating the universe in half between equal-conviction winners and losers. When evaluating a manager’s construction choices, one should search for conviction around the breadth of their alpha signal.

Using Risk Controls
Changing weighting schemes creates active risk versus the passive market-cap weighted benchmark. The portfolio construction process used above was basic, as the active constituents were equally weighted. This active weighting creates the opportunity for outperformance, but also creates differences with the benchmark. Part of this risk comes from the alpha source: investing in high-yielding companies does generate excess return over long periods of time, but can also create periods when it underperforms. But some of the risk comes from other bets created when the portfolio is formed: differences in sectors, like an overweight to Energy, or differences in factors, like an underweight to Market Cap.

Sector differences are a large driver of these return differences. The following table shows some of the choices made in the basic analysis above. When trimming the bottom 5% of stocks by market cap to avoid small caps, you introduce 46bps of tracking error. But when moving from the market-cap weighted portfolio to equally weighted constituents, the tracking error jumps to over 4%. This is the same universe of stocks as the market-cap weighted portfolio, but simply unwinding the market-cap factor used in the passive benchmark creates large active risk for the portfolio.

To manage active risk, you can adjust the portfolio from equally-weighted to a risk-controlled weighting on sectors. In the basic example below, controlling for active sector allocations, and shaping the portfolio back to the same weightings as the benchmark, can remove over 125bps of the active risk.
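One simple way to implement this sector shaping can be sketched as follows. This is a hypothetical illustration, not the paper’s actual risk model, and it assumes the portfolio holds at least one name in every sector it scales: each holding is rescaled so the portfolio’s sector totals match the benchmark’s.

```python
def sector_controlled(port, sectors, bench_sector_wts):
    # Rescale each stock's weight so the portfolio's sector totals
    # match the benchmark's sector weights.
    port_sector = {}
    for name, w in port.items():
        sec = sectors[name]
        port_sector[sec] = port_sector.get(sec, 0.0) + w
    return {name: w * bench_sector_wts[sectors[name]] / port_sector[sectors[name]]
            for name, w in port.items()}
```

The within-sector bets on the factor are preserved; only the cross-sector allocation is pulled back toward the benchmark, which is where much of the unwanted tracking error comes from.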

Tracking Error, Russell 1000 Universe 1968-2016

The question is how broad a portfolio needs to be in order to take advantage of risk controls like sector awareness. Taking this same usage of sector risk controls back to the Factor Alpha framework, another analysis was run utilizing the same percentages of Shareholder Yield, but with an additional set of risk controls to reduce exposures versus the benchmark. To be explicit, the portfolios are formed by selecting on the best of a factor, but instead of equally weighting the stocks we see if we can shape the portfolio to get the sector exposures as close as possible to the benchmark. Sector weightings are not neutralized, as the focus is on generating excess return through factors, but they are more controlled than in an equally weighted portfolio.

With highly concentrated portfolios of only 2.5% to 5% of the universe, there is little room to maneuver the portfolio sectors. But by the time we have expanded to just the top decile of the factor, which is only about 30 to 50 names, risk-controls are able to shape the portfolio and reduce the overall active risk. This approach reduces active risk while maintaining the same profile of excess returns and active exposures versus the benchmark, increasing the risk-adjusted return through the information ratio significantly.

Excess Return, Tracking Error, Information Ratio and Active Share of Smart Beta vs. Factor Alpha. Sensitivity Analysis Increasing Active Universe, Russell 1000 Universe 1968-2016

There are a number of ways to introduce risk controls, through risk models or explicit constraints. Smart Beta also has the ability to add risk controls. The difference is again going to be in the philosophy of what is being delivered. The Smart Beta approach starts with de minimis risk and gradually dials up alpha. A Factor Alpha approach has the ability to deliver significant excess return while managing active risk in the portfolio.

Validating Portfolio Construction through the Lens of Active Share
Not all factor-based investing approaches are smart, but there are several different ways to construct smart portfolios. Both Smart Beta and Factor Alpha approaches can generate strong risk-return profiles, with one approach focusing on risk while the other focuses on returns, but in either approach there can be misalignment between the return of factors and the portfolio construction methodology.

When analyzing a factor portfolio, you should understand 1) the breadth of the excess return from the alpha signal and 2) whether the manager is using a risk-focused Smart Beta or return-focused Factor Alpha approach. In either approach, active share should line up with where signal conviction diminishes. If it doesn’t, it’s possible that the manager has a misalignment in portfolio construction.

Analyzing Fees through the Lens of Active Share
Active share also helps to establish what the fee of a product should be. It disaggregates the passive component of every strategy, contextualizing the fees being paid to an active manager. Fees are under tremendous pressure in our industry. The 2016 ICI Factbook shows that since 2000, the average fee paid to equity mutual fund managers has declined from 99bps to 68bps, a decline of 31%. The shift to passive management has been a component of this, but fee renegotiation is also part of the decline. The average fee on active equity has declined from 106bps to 81bps, a decline of 21%, which means about half of the decline in overall fees paid comes from compression of the fees paid to active management.

Active share gives transparency to what you are paying for. The average passive index fund is at 11bps. The lowest cost ETFs are trading at 5bps, and large institutions can get passive exposure for a single basis point. What active share gives is a quick metric into how much of the portfolio is passive, with the idea that the passive component of investments is commoditized.

The difference between a Smart Beta and a Factor Alpha approach to building a portfolio shows why there should be a different fee structure between the two approaches. Smart Beta comes with a large passive component to its portfolio, which should come at passive costs. The Factor Alpha approach has little passive exposure because the bulk of its investments are driven by the skill of the manager.

Both Smart Beta and Factor Alpha approaches allow for exposure to factors which can enhance returns, but the implementations are very different. After you figure out your cost for market access, there are only two inputs to determine what one should be willing to pay for a manager: what’s the estimated skill on the active component of the portfolio, and how much are you willing to pay for that skill?

Once those two numbers are determined, they are simply inputs into the formula. Let’s propose an example where the cost of passive market access is 10bps, the skill of the alpha is determined to be 4%, and you’re prepared to pay 20% of that alpha to get access to the skill. Based on the active share, you can determine an expected fee for the portfolio no matter which approach it’s using.
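The fee arithmetic can be sketched directly from those inputs (hypothetical function name, using the 10bps passive cost and 20% participation in a 4% alpha from the example): price the passive sleeve at passive cost and the active sleeve at the agreed share of its expected alpha, blended by active share.

```python
def expected_fee(active_share, passive_fee_bps=10.0, alpha_bps=400.0, alpha_share=0.20):
    # Passive sleeve priced at passive cost; active sleeve priced at a
    # fraction of its expected alpha (here 20% of a 4% alpha = 80bps).
    active_fee_bps = alpha_bps * alpha_share
    return (1 - active_share) * passive_fee_bps + active_share * active_fee_bps
```

Under these assumptions, a 30% active-share Smart Beta product justifies about 31bps, while a fully active (100% active share) Factor Alpha product justifies about 80bps.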

Hypothetical Fee Calculation for Smart Beta vs. Factor Alpha, active fees based on 20% of Active Skill, passive fee at 10bps

Active share helps identify misalignments between construction methodology and fees in the industry. A website launched in 2016 carries constituent data for mutual funds and exchange-traded funds and explicitly calculates the Active Fee using this same methodology, which can help investors determine misalignments. Misalignments in portfolio construction aren’t limited to quantitative managers. Fundamental managers also struggle with quantifying their skill and how to implement it in a portfolio. With either approach, one should analyze low-active-share portfolios with average to above-average fees to determine whether they have an incredible alpha source in their active component, or whether they are misconstructed or mispriced. High active share at below-average fees offers an opportunity for lower-cost access to investment skill, and the investigative burden should center on validating the skill of the manager.

Understand the Source of Alpha: Factors are not commodities, and can differ greatly between quantitative managers. It’s important to know the strength of outperformance in the best names, but also have an understanding of the breadth of the signal. The broader the alpha, the more appropriate it is to have a higher active share in Smart Beta, and a lower active share in a Factor Alpha approach. Most alphas degrade quickly after the top decile and turn low conviction by the top third of the universe.

Smart Beta and Factor Alpha Start with Different Goals: Smart Beta is an approach focused on minimizing risk, while Factor Alpha is focused on maximizing the excess return versus the benchmark. Both provide similar information ratios, which degrade at a rate determined by the alpha signal. But to borrow a common phrase in finance, “you can’t eat an Information Ratio.” To be more explicit, it is returns, not risk controls, that will achieve investors’ funding goals. Investors looking for returns should consider Factor Alpha portfolios.

Risk Controls Matter: With either approach, having a layer of risk controls significantly improves the risk-return profile. In Factor Alpha, you can maintain the excess return while lowering the excess risk, improving the Information Ratio significantly.

Investigate Portfolios for Construction Alignment and Fees: Knowing a manager’s investment focus, active share, and alpha signal allows an advisor to determine whether there is a misalignment in the construction of the portfolio. Active share also helps determine whether the investment solution is priced appropriately.


1. Analysis from 1963-2016. S&P 500 constituents from 1990-2016, largest 500 Compustat companies from 1963-1990.
2. Large Stocks is U.S. Compustat stocks with a market capitalization greater than average. Analysis from 1982-2016.
3. Large Stocks is U.S. Compustat stocks with a market capitalization greater than average, as of December 30th, 2016.



Factors are Not Commodities

The narrative of Smart Beta products is that factors are becoming an investment commodity. Factors are not commodities, but unique expressions of investment themes. One Value strategy can be very different from another, and can lead to very different results. There are many places that factor portfolios can differ. The difficulty for asset allocators is in identifying how factor strategies differ from one another, when they all purport to use the same themes: Value, Momentum and Quality.

Over the last couple of years, several multi-factor funds that combine Value, Momentum and Quality were launched. As these products compete to garner assets, price competition has started amongst rivals. In December, BlackRock cut fees on smart beta ETFs in competition with Goldman Sachs, which has staked out a cost-leadership position in the space. Michael Porter, the expert in competitive strategy, wrote in 1980 that there are three generic strategies that can be applied to any business for identifying a competitive advantage: cost leadership, differentiation or focus. Cost leadership can be an effective strategy, but the key to any price war is that the products need to be near-perfect substitutes for one another, such as commodities. This paper focuses on how quantitative asset managers can have large differences in factor definitions, differences in combining factors into themes, and differences in portfolio construction techniques, leading to a wide range of investment experiences in multi-factor investment products.

Factor Definitions

Value investing through ratios seems to be very straightforward. Price/Earnings ratios are quoted widely as a common metric to gauge the relative cheapness of one stock to another. “Information Content of Equity Analyst Reports” by Asquith, Mikhail and Au found that 99% of equity analyst reports use earnings multiples in analyzing a company. The P/E ratio is used widely because it is straightforward and makes intuitive sense: as an equity owner you are entitled to the residual earnings of the company after expenses, interest and taxes. A ratio of price to earnings tells you how much you’re paying for every dollar of earnings.

Getting a P/E ratio is as simple an exercise as opening up a web browser and typing in a search. But if you’ve ever compared P/E ratios from multiple sources, you can get very different numbers for the same company. Take Allergan (NYSE: AGN) as an example. As of January 12th, 2017, Yahoo! Finance had AGN with a P/E of 6.06. But Google Finance had 15.84. If you have access to a Bloomberg terminal, Bloomberg had it as a P/E of 734. FactSet has no P/E ratio. You can feel like you’re stuck in Arthur Bloch’s Segal’s Law: “a man with a watch knows what time it is. A man with two watches is never sure.”

These discrepancies happen because there are a lot of different ways to put together a P/E ratio. One could use the price of the stock divided by Earnings per Share. If so, should you use basic or diluted EPS? There’s a difference if you switch to LTM Net Income divided by the total Market Cap of the company, as share counts can change over a given quarter. But the reason for Allergan’s different ratios is that some financial information providers use bottom-line earnings while others take Income before Extraordinaries and Discontinued Operations. On August 2nd, 2016, Teva (NYSE: TEVA) acquired Allergan’s generics business “Actavis Generics” for $33.4 billion in cash and 100 million shares of Teva, generating $16bn in earnings for Allergan from Discontinued Operations. After unwinding this, the company actually lost $1.7bn in the third quarter. Hence no P/E ratio. Depending on whether an adjustment is made for this, Allergan will appear either as a top-percentile cheap stock on Earnings Yield (the inverse of the P/E ratio) or in the 94th percentile.

Accounting adjustments for Extraordinaries and Discontinued Operations aren’t the only items affecting an earnings ratio. When considering earnings, you want to measure the available economic surplus that flows to the holder of the common equity. If preferred stock exists, it supersedes the claims of common shareholders. Fannie Mae (OTC: FNMA) is a great example of how preferred dividends can absorb earnings from common shareholders. During the 2008 crisis, Fannie Mae issued a senior tranche of preferred stock that is owned by the U.S. Treasury and is paid a $9.7bn dividend out of the $9.8bn in earnings the company generates. There is a junior preferred tranche, held by investors like Pershing Square and the Fairholme funds, which is currently not receiving dividends; its holders are submitting legal challenges to receive some portion of the earnings. This leaves common shareholders behind a long line of investors with a prioritized claim on earnings. But some methodologies deduct preferred dividends from earnings, while others do not, creating the difference between a P/E of 2.3 (an Earnings Yield of 43%) and a P/E of 185 (an Earnings Yield of 0.5%).
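A sketch of the kind of adjustment being described, with made-up figures rather than Allergan’s or Fannie Mae’s actuals (hypothetical function names): strip one-off items and the preferred claim before forming the ratio.

```python
def earnings_for_ratio(net_income, discontinued_ops=0.0,
                       extraordinary_items=0.0, preferred_dividends=0.0):
    # Remove one-off gains/losses and the preferred claim to estimate
    # the earnings actually available to common shareholders.
    return net_income - discontinued_ops - extraordinary_items - preferred_dividends

def earnings_yield(adjusted_earnings, market_cap):
    # Earnings Yield is the inverse of the P/E ratio.
    return adjusted_earnings / market_cap
```

A company reporting $100 of net income that includes a $60 gain from discontinued operations and owes $10 in preferred dividends has only $30 of earnings available to common holders, a very different ratio input than the headline figure.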

These comments are not about the cheapness of Allergan or Fannie Mae, but rather about the importance of your definition of “earnings” and the adjustments you apply. If these considerations sound like fundamental investing, it’s because they are. Fundamental analysts consider these adjustments in the analysis of a company. Factor investors work through the same accounting issues as fundamental investors, with the additional burden of systematically applying adjustments to create the best metric across thousands of companies. Investing results can be very different based on these adjustments. In the U.S. Large Stocks Universe, there is a +38bps improvement in the best decile of Earnings Yield if you adjust for Discontinued Items, Extraordinaries and Preferred Dividends. To set some scale, in the eVestment Universe the difference between a median and a top-quartile manager is just +60bps a year.

Compustat Large Stocks Universe, 1963-2016

Adjustments to Value signals are not limited to Price-to-Earnings. Book Value can be adjusted for the accounting of Goodwill and Intangibles. Dividend Yield can be calculated using the dividends paid over the trailing 12-months, or annualizing the most recent payment. In 2004, Microsoft paid out $32 billion of its $50 billion in cash in a one-time $3 per share dividend when the stock was trading at around $29. Should you include that dividend in calculating yield, knowing that future investors won’t receive similar dividend streams?
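The two dividend-yield conventions can be sketched as follows, with illustrative numbers loosely patterned on the Microsoft example rather than exact figures (hypothetical function names):

```python
def trailing_yield(dividends_last_12m, price):
    # Sum of actual dividends paid over the trailing twelve months.
    return sum(dividends_last_12m) / price

def annualized_yield(latest_quarterly_dividend, price):
    # Most recent payment times four; overstates yield right after a
    # special dividend, since future payments won't repeat it.
    return latest_quarterly_dividend * 4 / price
```

If a stock trading near $29 paid three $0.25 quarterly dividends and then a quarter containing a $3 special on top of the regular payment, the trailing method reports roughly a 14% yield while annualizing the latest payment reports roughly 45%, and neither reflects the dividend stream a new investor should expect.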

Differences in signal construction are not limited to Value factors. Momentum investors know that there are actually three phenomena observed in past price movement: short-term reversals in the first month, medium-term momentum over the next 12 months and long-term reversals over a 3 to 5-year period. Get two momentum investors into a room, and they will disagree over whether to delay the momentum signal by one month to avoid reversals, the 12-month minus 1-month construction. Quality investors argue over the usage of non-current balance sheet items, or the loss of effectiveness in investing on changes in analyst estimates. Volatility can be measured using raw volatility, beta, or idiosyncratic volatility, to name just a few methods.

Factors are constructed as unique expressions of an investment idea and are not the same for everyone. Small differences can have a large impact on which stocks get into the portfolios. These effects are more significant when using an optimizer, which can maximize errors, or when concentrating portfolios, which places more weight on individual names. This is far from simply grabbing a P/E ratio from a Bloomberg data feed. There is skill in constructing factors.

Alpha Signals

Quantitative managers tend to combine individual factors together into themes like Value, Momentum and Quality. But there are several ways that managers can combine factors into models for stock selection. And models can get very complicated. In the process of manager selection, allocators have the difficult task of gauging the effectiveness of these models. The common mistake is assuming complexity equals effectiveness.

To demonstrate how complexity can degrade performance, we can take five factors in the Large Stocks space and aggregate them into a Value theme: Sales/Price, Earnings/Price, EBITDA/EV, Free Cash Flow/Enterprise Value and Shareholder Yield (a combination of dividend and buyback yield).

The most straightforward is an equally-weighted model: give every factor the same weight. This combination of the five factors generates an annual excess return of 4.06% in the top decile. An ordinary linear regression increases the weighting of Free Cash Flow to Enterprise Value and lowers the weighting on Earnings/Price, because it was less effective over that time frame. This increases the apparent effectiveness by +15bps annualized, not a lot, but remember this is Large Cap where edge is harder to generate. Other linear regressions, like ridge or lasso, might be used for parameter shrinkage or variable selection and try to enhance these results.
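The equally weighted composite is the simplest possible model and can be sketched in one line per stock (hypothetical percentile inputs, with higher meaning a better score on each factor):

```python
def composite_value_score(factor_percentiles):
    # Equal-weighted model: average each stock's percentiles across the
    # five value factors (Sales/Price, Earnings/Price, EBITDA/EV,
    # FCF/EV, Shareholder Yield), all weighted the same.
    return {name: sum(p) / len(p) for name, p in factor_percentiles.items()}
```

A regression-based model would replace the implicit equal weights with fitted ones; the point of the discussion that follows is that those fitted weights are learned from the same history used to score them.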

Moving up the complexity scale, non-linear or machine learning models like Neural Networks, Support Vector Machines or Decision Trees can be used to build the investment signal. There has been a lot of news around Big Data and the increased usage of machine learning algorithms to help predict outcomes. For this example, we’ve built an approach using a Support Vector Regression, a common non-linear machine-learning technique. At first look, the Support Vector Regression looks very effective, increasing the outperformance of selecting stocks on Value to 4.55%, almost half a percent of annualized return over the equally weighted model.

Compustat Large Stocks Universe, 1963-2016

The appeal of the machine learning approach is strong. Intuitively, the complex process should do better than the simple one, and the first-pass results look promising. But the excess returns do not hold up on examination. This apparent edge comes from overfitting the model. Quantitative managers might have different ways of constructing factors, but we are all working with data that does not change as we research ideas: quarterly financial and pricing data back to 1963. As we build models, we can torture that data to create the illusion of increased effectiveness. The linear regression and the Support Vector Regression derive their weightings from the same data used to generate the results, which will always look better in-sample.

The statistical method to help guard against overfitting is bootstrapping. The process creates in-sample and out-of-sample tests by taking random subsamples of the dates, as well as subsets of the companies included in the analysis. Regression weightings are generated on an in-sample dataset and tested on an out-of-sample dataset. The process is repeated a hundred times to see how well the weighting process holds up.
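The bootstrap loop described above can be sketched as follows. The `fit` and `evaluate` callables are hypothetical stand-ins for the actual regression and backtest machinery, which the paper does not publish; only the resampling structure is shown.

```python
import random

def bootstrap_validate(dates, companies, fit, evaluate, n_trials=100,
                       frac=0.5, seed=0):
    """Repeatedly split dates and companies into in-sample and out-of-sample
    subsets, fit factor weights in-sample, and score them on both sets.

    fit(in_dates, in_companies) -> weights
    evaluate(weights, dates, companies) -> score (e.g. top-decile excess return)
    Returns a list of (in_sample_score, out_of_sample_score) pairs.
    """
    rng = random.Random(seed)
    results = []
    for _ in range(n_trials):
        in_dates = set(rng.sample(dates, int(len(dates) * frac)))
        in_cos = set(rng.sample(companies, int(len(companies) * frac)))
        out_dates = [d for d in dates if d not in in_dates]
        out_cos = [c for c in companies if c not in in_cos]
        weights = fit(sorted(in_dates), sorted(in_cos))
        results.append((evaluate(weights, sorted(in_dates), sorted(in_cos)),
                        evaluate(weights, out_dates, out_cos)))
    return results
```

A robust model should show similar scores in both columns of the result; a large in-sample versus out-of-sample gap is the signature of overfitting.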

In the bootstrapped results, you can see how the unfitted equally weighted model maintains its effectiveness at about the same level. The in-sample data looks just like the first analysis: the linear regression does slightly better and the SVR does significantly better. When applying the highly-fitted Support Vector Regression to the out-of-sample data, the effectiveness inverts. Performance degrades at a statistically significant level once you implement on investments that weren’t part of your training data.

Compustat Large Stocks Universe, 1963-2016

This doesn’t mean that all weighted or machine learning models are broken, but rather that complex model construction comes with the risk of overfitting to the data, which can dilute the edge of factors. Overfitting is not intentional; it is a by-product of having dedicated research resources constantly looking for ways to improve the process. When evaluating the factor landscape, understand the model used to construct the seemingly similar themes of Value, Momentum or Quality. Complexity in itself is not an edge for performance, and it makes the process less transparent to investors, creating a “black box” from the density of the mathematics. Simple models are more intuitive and more likely to hold up in the true out-of-sample dataset: the future.

Multifactor Signals

Multifactor ETFs have a lot of moving parts: the definitions of the factors, the construction process for building investment themes, and the portfolio construction techniques. Market-capitalization ETFs are very straightforward in comparison. Different products use broad, similar universes and weight on a single factor, and market capitalization has one of the most common definitions in investing: shares outstanding multiplied by the price per share. The result is that products from different managers have extremely similar results, and these products can be substitutes for one another.

The following two tables show the 2016 returns for three of the most popular market cap ETFs: the SPDR® S&P 500 ETF (SPY), the iShares Russell 1000 ETF (IWB) and the Vanguard S&P 500 ETF (VOO). These are widely held and, as of December 30th, 2016, together have almost $300 billion in assets. For 2016, the returns of these three ETFs are within 17bps of each other, and the annualized daily tracking error for the year shows that they track one another very closely. Looking at these returns, it makes sense that the key selection criterion between the funds would be the lowest fee.
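Annualized daily tracking error, as used here, is just the annualized standard deviation of the daily return differences between two funds. A minimal sketch, assuming 252 trading days per year:

```python
import math

def annualized_tracking_error(returns_a, returns_b, periods_per_year=252):
    """Annualized sample standard deviation of the daily return
    differences between two funds."""
    diffs = [a - b for a, b in zip(returns_a, returns_b)]
    mean = sum(diffs) / len(diffs)
    var = sum((d - mean) ** 2 for d in diffs) / (len(diffs) - 1)
    return math.sqrt(var * periods_per_year)

# Two funds with identical daily returns track perfectly: tracking error is 0.
print(annualized_tracking_error([0.010, -0.004], [0.010, -0.004]))  # 0.0
```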

For a comparison, we can examine four multifactor ETFs that were launched in 2015: iShares Edge MSCI Multifactor USA ETF (LRGF), the SPDR® MSCI USA StrategicFactorsTM ETF (QUS), the Goldman Sachs ActiveBeta U.S. Large Cap Equity ETF (GSLC) and the JP Morgan Diversified Return U.S. Equity ETF (JPUS). Each fund uses a broad large cap universe, and then selects or weights stocks based on a combination of Factor themes: Value, Momentum and Quality metrics. At first glance, it looks like these should be very similar with one another.

Each fund is based on an index with a publicly stated methodology for how it is constructed. When digging through the construction methodologies, you start seeing that different factors are used in building these themes. The only Value metric used across all four is Book-to-Price. Two funds do use Sales-to-Price, but otherwise each fund uses one or two metrics unique to its competitors. QUS does not include Momentum; the other three funds use different expressions of momentum, with two conditioning on volatility. The most common Quality metric is Return on Equity, used in three funds, followed by Debt-to-Equity, used in two. Even though most of these funds use an equally weighted approach in building their investment themes of Value, Momentum and Quality, the different inputs mean the stock selection will be very different.

These different rankings are then used for stock selection and weighting in different portfolio construction techniques. Comparing holdings as of December 30th, 2016, the breadth of securities held ranges from 139 to 614 stocks. Maximum position weights range from 0.6% to 3.3%, with the top 25 securities accounting for 14% to 43% of total assets. Each fund uses different techniques and risk models with unique constraints to shape the weightings, leading to widely different portfolios. Looking at these four funds, as well as the SPY S&P 500 fund, they can have higher active share with each other than they do with the overall market.
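Active share itself is a simple calculation: half the sum of absolute weight differences across the union of holdings. A sketch, with illustrative tickers and weights:

```python
def active_share(portfolio_weights, benchmark_weights):
    """Active share: half the sum of absolute weight differences versus the
    benchmark, taken over the union of holdings. Weights should sum to 1."""
    names = set(portfolio_weights) | set(benchmark_weights)
    return 0.5 * sum(abs(portfolio_weights.get(n, 0.0) -
                         benchmark_weights.get(n, 0.0)) for n in names)

# Illustrative: a concentrated two-stock portfolio vs. a four-stock benchmark.
port = {"AAA": 0.6, "BBB": 0.4}
bench = {"AAA": 0.25, "BBB": 0.25, "CCC": 0.25, "DDD": 0.25}
print(round(active_share(port, bench), 6))  # 0.5
```

Because the measure runs over the union of names, two funds holding disjoint stock lists can score near 100% active share against each other even if each is only moderately active versus the market.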

These differences in signal, construction and holdings lead to very different investment results. When comparing the results for 2016, the best fund returned 12.96% while the worst returned 8.73%, a gap of 423bps for the year. And when looking at the daily tracking error between the products, they generate a wider difference of returns with each other than they do with the market.

Keep in perspective that this is a single year. Low performance in 2016 is not an indictment of GSLC; it’s most likely that GSLC was caught in the underperformance of low volatility, given that it focuses on low volatility names in both its Volatility and Momentum ActiveBeta® Indexes. To confirm that, you would want to run the holdings through a factor attribution framework.

The central point is that even though these four funds look very similar, they generate very different results. Factor products that differ by several hundred basis points of return in a single year are not commoditized products, and should not be chosen based on a few basis points in fees. Cost leadership is the key feature for generic market-capitalization weighted schemes, but for multifactor products, differentiation and focus, weighed in the context of fees, should drive the choice.


There is significant edge in how factor signals are constructed. The difficulty is creating transparency around this edge for investors. Complexity in stock selection and construction methodology decreases transparency, almost as much as active quantitative managers who create a “black box” around their stock-ranking methodologies. This leaves investors at a disadvantage in trying to differentiate between quantitative products, and that inability to differentiate is why price wars are starting between products that have strong differences in features and results.

Investors need education on this differentiation so they’re not selecting only on the lowest fees. Large institutional and investment consultant manager selection groups will face the difficulty of adding top-tier quantitative investment staff to help with this differentiation. Smaller groups and individual investors will have to advance their own understanding of how quantitative products are constructed. For the entire range of factor investors, it will help to build trusted partnerships with quantitative asset managers willing to share insights and help them understand the factor investing landscape.


Thanks to Patrick O’Shaughnessy and Ehren Stanhope for feedback and edits

The Risk of Low Volatility Strategies

Most factor-based ETF strategies, otherwise known as Smart Beta, are based on a single concept like value or momentum. Over the last two years, the largest flows have been to ETFs investing in low volatility stocks. The most popular is the iShares Edge MSCI Min Vol USA ETF (USMV), which as of September 30th had grown to $14.4bn USD, more than doubling over the last 12 months.

With product proliferation, there are now a number of low volatility ETFs, each with its own portfolio construction methodology and its own view on the best way to select stocks with low volatility.  The most straightforward method is to look at the raw volatility of trailing returns, either daily or monthly, over anywhere from three months to one year.  Other strategies use the Beta of the stock: the covariance of the stock with market returns, scaled by the variance of the market.  In 2014, Frazzini and Pedersen published a “Betting Against Beta” (BaB) strategy, which goes long low-beta and short high-beta, leveraging and deleveraging each side to a beta of 1.  Another option is tracking error, the volatility of the stock’s excess returns versus the market.  Ang, Hodrick, Xing and Zhang’s 2006 paper “The Cross-Section of Volatility and Expected Returns” introduced the idea of idiosyncratic volatility: the residual volatility of the stock after a regression on the Fama-French factors.  A final way to measure volatility is implied volatility, which uses options pricing to derive expected future volatility.
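The beta and idiosyncratic-volatility measures can be sketched with a single-factor market model. Note this is a simplification: the Ang, Hodrick, Xing and Zhang version regresses on all of the Fama-French factors, not just the market.

```python
import math

def beta_and_idio_vol(stock_returns, market_returns):
    """Market-model OLS: beta = cov(stock, market) / var(market).
    Idiosyncratic volatility here is the standard deviation of the
    regression residuals (single-factor simplification of Ang et al.)."""
    n = len(stock_returns)
    mean_s = sum(stock_returns) / n
    mean_m = sum(market_returns) / n
    cov = sum((s - mean_s) * (m - mean_m)
              for s, m in zip(stock_returns, market_returns)) / n
    var = sum((m - mean_m) ** 2 for m in market_returns) / n
    beta = cov / var
    alpha = mean_s - beta * mean_m
    residuals = [s - alpha - beta * m
                 for s, m in zip(stock_returns, market_returns)]
    idio_vol = math.sqrt(sum(r ** 2 for r in residuals) / n)
    return beta, idio_vol
```

A stock that moves in lockstep with the market at twice the amplitude comes out with a beta of 2 and zero idiosyncratic volatility; real stocks leave residual volatility behind, and that residual is what the idiosyncratic measure ranks on.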

We could spend a lot of time arguing the merits of each metric, but in practice the results from investing in volatility factors look very similar.  They exhibit the same return profile: a portfolio of stocks with high volatility in the past delivers high volatility in the future, along with strong underperformance.  Stocks with low volatility continue to have low volatility, and coupled with modest outperformance, their risk-adjusted Sharpe Ratios look strong.  For comparison, the following charts show the excess return, the excess volatility, and the Sharpe Ratios of portfolios based on Value and Volatility characteristics within the Large Stocks universe (stocks with a market cap greater than average).  Stocks are selected monthly into decile portfolios with a holding period of one year.  While Value has a stronger overall return, the Volatility portfolios are less volatile, giving similar Sharpe Ratios between Value and Volatility.  Volatility factors also correlate very highly with one another, much more highly than Value characteristics do.  This indicates that the Value factors capture different expressions of valuation, whereas all the volatility metrics seem to be based on the same market phenomenon.

Table 1: Total Return, Volatility and Sharpe Ratios of Value and Volatility Metrics, Compustat Large Stocks Universe 1969-2015. Implied Volatility from OptionsMetrics database, 1996-2015.


Table 2: Correlation of Decile Spreads of Value and Volatility Metrics, Compustat Large Stocks Universe 1969-2015. Implied Volatility from OptionsMetrics database, 1996-2015.

A recent McKinsey report cites research that high-net-worth investors’ top priorities are protecting principal, hedging against downside risks, minimizing volatility and generating income.  After the market crash of 2008-2009, it’s easy to see how advisors and plan sponsors could be drawn to “Defensive Equity” or “Low Risk” strategies as ways to protect against future drawdowns.  From the perspective of an advisor, low volatility ETFs cover three of these priorities, offering downside protection with equity-like returns.

The risk of low volatility strategies lies in how they are used within the total allocation of a portfolio. In the asset management industry’s value chain, clients hire advisors to establish an asset allocation for them. Once an allocation is decided upon, a manager selection process determines the best people to manage assets within those groups. Passive investing has supplanted active mandates within many of these allocation buckets. For example, if a consultant believes active managers have no edge in large cap stocks, they can just buy the Vanguard S&P 500 ETF for five basis points.  Small cap stocks are less efficient and active managers have edge there, so individual mandates are used in that space.

It is unclear how Smart Beta strategies fit into this.  If a strategy is viewed as a separate asset class, it is invested in based on the total expected return, volatility and diversification it adds to the total portfolio.  If it is viewed as part of an equity allocation, it is judged on its excess return versus a passive benchmark, scaled by its excess volatility.  For a passive benchmark and an active manager, these roles are clear.  Smart Beta makes this more confusing: is it a passive allocation to an asset class, or is it a cheap source of alpha?

Volatility factors might deliver solid risk-adjusted returns for an allocator, but they are lacking in the realm of active management.  Low volatility has had modestly higher performance with lower raw volatility, but it also came with higher excess volatility.  Using the same decile portfolios formed on each factor in Large Stocks as above, the volatility factors show higher excess volatility than the Value factors.  The tracking error of the top decile of raw volatility is 9.7% versus the equally weighted universe, compared with tracking errors in the 6.5%-7.8% range for Value factors.  With lower excess returns, Volatility factors have an Information Ratio about one quarter to one half that of Value and Yield.
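The Information Ratio used in this comparison is the annualized mean excess return divided by the annualized tracking error. A minimal sketch, assuming monthly return series:

```python
import math

def information_ratio(portfolio_returns, benchmark_returns,
                      periods_per_year=12):
    """Annualized mean excess return divided by annualized tracking error
    (sample standard deviation of the excess returns)."""
    excess = [p - b for p, b in zip(portfolio_returns, benchmark_returns)]
    mean = sum(excess) / len(excess)
    var = sum((e - mean) ** 2 for e in excess) / (len(excess) - 1)
    tracking_error = math.sqrt(var * periods_per_year)
    return mean * periods_per_year / tracking_error
```

Two strategies with the same excess return can thus land far apart on Information Ratio purely because one takes its active risk in larger, lumpier doses.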

Table 3: Tracking Error and Information Ratios of Value and Volatility Metrics, Compustat Large Stocks Universe 1969-2015. Implied Volatility from OptionsMetrics database, 1996-2015.

The higher tracking error can be managed down in portfolio construction:  equally weighted versus market-cap weighted, sector agnostic versus sector relative.  But this still leaves a large amount of excess volatility in the portfolio.  The MSCI Min Volatility USA Index, on which the iShares Edge ETF is based, is a good example.  Its construction methodology includes several risk factor and sector constraints, but the index still has a tracking error of 5.73% to the broader MSCI USA benchmark since its inception.  Viewed through the lens of active management, that history gives it an Information Ratio of only 0.25.

Table 4: Summary Statistics of MSCI Min Vol USA Index versus MSCI USA Index, Jul-1989 to Sep-2016, Source: Bloomberg

Tracking Error and Information Ratios seem a bit clinical compared to the real-time experience of the investor.  A better way to show the effect is the rolling 1-year excess return of the strategy versus the broader market benchmark.  This tracking error leads to multiple periods of strong relative underperformance: in over 20% of the rolling one-year observations, the MSCI Min Vol USA Index trails the MSCI USA Index by more than -5%, with several periods of underperformance beyond -10%, and one reaching -15%.
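A rolling 12-month excess return series like the one charted can be built by compounding each return stream over the window and taking the difference. A sketch, assuming monthly total returns:

```python
def rolling_excess(fund_returns, bench_returns, window=12):
    """Rolling cumulative excess return: compound each monthly return
    series over the window and take the difference."""
    out = []
    for i in range(window, len(fund_returns) + 1):
        fund = bench = 1.0
        for r in fund_returns[i - window:i]:
            fund *= 1 + r
        for r in bench_returns[i - window:i]:
            bench *= 1 + r
        out.append(fund - bench)
    return out
```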

Chart 1: Rolling 12-Month Excess Return of MSCI Min Vol USA Index over MSCI USA Index, Jul-1989 to Sep-2016, Source: Bloomberg

The framing of how the investment is perceived matters to the advisor and to the discipline they will have in maintaining an investment in it.  Allocations tend to be more strategic and are not subject to the relative performance of one asset class versus another.  Bonds are supposed to behave differently than equities, which is why you own both.  Investments within an allocation, by contrast, tend to be questioned on a more regular basis.

It is impossible to determine how every person is using low volatility ETFs, but asset flows should give some insight.  If the flows were not reacting to near-term relative performance, that would suggest the product is being used in a strategic allocation.  But the flows to volatility ETFs appear to be based on a recent spike in relative performance, and specifically on near-term performance.  The orange line shows the trailing 12-month performance of the USMV ETF relative to the MSCI USA Index.  Low volatility stocks had been outperforming the average stock since the beginning of 2015, with peak outperformance around the second quarter of 2016.  This coincided with very strong flows into the product: by the end of the second quarter, almost $8 billion had been invested over the trailing 12 months, alongside trailing 12-month outperformance of USMV over the MSCI USA benchmark of +13%.

Chart 2: Rolling 12-Month Excess Return of USMV versus MSCI USA Index with Rolling 12-Month Net Flows to USMV ETF, Source: Bloomberg

To his credit, Andrew Ang, who heads the Factor-Investing group at Blackrock which runs USMV, is trying to educate investors about how best to utilize low volatility investing.  He wrote in a September 2016 Forbes article “Investors’ aim with low-volatility strategies shouldn’t be to outperform the market, rather to reduce risk and to measure that performance over a full market cycle.”  But it seems to fall on deaf ears, as assets chase one-year relative returns.

As investors flock to the low volatility ETF after near-term outperformance, they also sell after near-term underperformance.  These fund-flow reactions to recent performance will only widen the gap between the time-weighted and money-weighted returns of the fund.  As of September 30th, USMV had returned 14.39% annualized since its inception in October 2011, just +22 basis points over the MSCI USA Index’s 14.17%.  But because flows chased performance, the money-weighted return of USMV is only 11.89% over that time frame, a 250bps gap for investors caused by return-chasing.  This gap looks likely to widen.  In the third quarter of 2016, MSCI USA was up +4.06% while USMV was down -1.16%, a gap of -5.22%.  In reaction, USMV saw redemptions of -$877 million in the month of October.
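The money-weighted return is the internal rate of return on the investor's actual cash flows, so late-arriving dollars drag it below the time-weighted return when performance fades. A simplified sketch, solving for the per-period rate by bisection on illustrative contributions (not USMV's actual flows):

```python
def money_weighted_return(contributions, final_value):
    """Per-period money-weighted (internal) rate of return, by bisection.
    contributions[t] is the investor's deposit at the start of period t;
    final_value is the ending portfolio value."""
    n = len(contributions)

    def npv(rate):
        # Future value of all deposits at this rate, minus the actual
        # ending value; increasing in rate when deposits are positive.
        return sum(cf * (1 + rate) ** (n - t)
                   for t, cf in enumerate(contributions)) - final_value

    lo, hi = -0.99, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Illustrative: $100 grows to $110 in one period -> 10% money-weighted return.
print(round(money_weighted_return([100.0], 110.0), 4))  # 0.1
```

If most of the money arrives just before a flat or down stretch, the rate that solves this equation sits well below the buy-and-hold return, which is exactly the 250bps gap described above.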

Table 5: Annualized Return of USMV versus MSCI USA Index, Time-Weighted and Money-Weighted, Source: Bloomberg

Some of this gap will be created by individual investors operating without an advisor or a structured asset allocation plan.  Some will come from factor timers, who shift allocations based on when they believe a factor will outperform.  But some investors in low volatility ETFs are advisors trying to figure out how to use them in a long-term structured allocation.  In my opinion, a long-term investment in low volatility portfolios can have a place in a portfolio if you think about it as an allocation.  If you are going to judge an investment in low volatility as a cheap active equity investment, there are better factors, such as value and momentum, that offer greater excess return for the active risk taken.

As a last thought, the shift towards factor-based “Smart Beta” ETFs makes me believe more than ever that advisors need to learn all they can about how factor investing works.  Manager selection involves a long process of getting to know the people and the style of their investment.  Investing in an ETF seems easier, but it also comes with reduced switching costs.  And while index construction methodologies are published for transparency, there can be less understanding of the process given the lack of interaction with an investment manager and the complexity of some strategies.  Lower switching costs and a lower understanding of factor investing lead to less investment discipline and a continued degradation of investor returns on a money-weighted basis.

-Thanks to Ehren Stanhope and Patrick O’Shaughnessy for the feedback.  Appreciate the help guys.