AgPa #30: Agnostic Fundamental Analysis (1/3)

Agnostic fundamental analysis works (2018)
Söhnke M. Bartram, Mark Grinblatt
Journal of Financial Economics 128(1), 125-147, URL/SSRN

This week, I will finally review the AGNOSTIC Paper that played a special role in the process of finding a name for this website. I like the paper because its approach is quite different from the common factor-investing view in the literature. In fact, the paper nicely bridges the gap between “traditional” fundamental analysis and systematic strategies.

  • Part 1: Agnostic Fundamental Analysis in the US
  • Part 2: Agnostic Fundamental Analysis around the World
  • Part 3: Agnostic Fundamental Analysis with Modern Statistical Tools

The paper was fairly successful and received substantial attention. As a consequence, there are two published follow-ups. To get the full picture, I will look at all of them. So this is again a little series within AGNOSTIC Papers.

Everything that follows is only my summary of the original paper. So unless indicated otherwise, all tables and charts belong to the authors of the paper and I am just quoting them. The authors deserve full credit for creating this material, so please always cite the original source.

Setup and Idea

The paper starts with a very basic question: what is the job of investors in the stock market? The answer, of course, is to earn the highest possible risk-adjusted return. And how do investors do that? They collect all kinds of information, process it into an estimate of the “fair” price, and buy (sell) stocks that trade below (above) this estimate. This is the simple but powerful idea of market efficiency. We are all greedy and use everything we can to make money. The result of this joint effort is that market prices reflect a lot of information.

So far, none of this is groundbreakingly new. However, the authors argue that the existing approaches in the literature to test market efficiency are not really optimal.1 Most of today’s factors (for example, value or momentum) typically consider only a small subset of variables. Obviously, I cannot criticize this approach too much. People won Nobel Prizes for the underlying academic work and some fund managers became very rich from applying it. So it cannot be too bad.

Nevertheless, sorting stocks by a few variables remains inconsistent with the idea of an investor who tries to beat the market and looks at everything that is out there. For example, pure momentum investors knowingly ignore fundamental data although it is required to “beat” the market in the sense of the efficient market hypothesis. On the other hand, you can also argue that if you have both pure momentum and pure fundamental investors, the aggregate market can still be efficient although each individual investor doesn’t look at the entire information set.

This week’s paper challenges this existing factor-view and aims to take a more holistic approach. The authors hypothesize that stocks (and the underlying businesses) have an intrinsic “fair” value that is solely determined by fundamentals. In the next step, they construct a simple and transparent statistical valuation model. With the “fair” value estimates from this model, they can test if the current price indeed reflects all fundamental data. If their agnostic fundamental analysis “works”, i.e. if buying (selling) undervalued (overvalued) stocks is profitable, this is at least some evidence against the idea of efficient markets…

Data and Methodology

The most important part of the paper is the agnostic valuation model. For that purpose, the authors use a simple linear regression to explain the current market capitalization of companies by essentially all their most recent fundamentals from the balance sheet, income statement, and cash-flow statement. They do this analysis at the end of each month and estimate peer-implied fundamental values for the entire cross-section of US companies from the fitted values of the respective regression.
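To make the mechanics concrete, here is a minimal Python sketch of such a monthly cross-sectional valuation regression. The function name, the column names, and the helper itself are my own illustration, not the authors’ code or variable set:

```python
import numpy as np
import pandas as pd

def peer_implied_values(df: pd.DataFrame, fundamental_cols: list) -> pd.Series:
    """Regress market capitalization on fundamentals for one month's
    cross-section; the fitted values are the peer-implied 'fair' values."""
    X = np.column_stack([np.ones(len(df)), df[fundamental_cols].to_numpy()])
    y = df["market_cap"].to_numpy()
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS via least squares
    return pd.Series(X @ beta, index=df.index, name="fair_value")
```

In the paper, this regression is re-estimated at the end of every month over the entire cross-section of stocks, so the implied valuation relation is allowed to change over time.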

The underlying idea of this valuation approach is the law of one price. If two companies have exactly the same fundamental characteristics, they should (in theory) also have the same value. Conceptually, this is very similar to the way fundamental investors evaluate stocks with multiples. Comparable companies should trade at the same multiple (P/E ratio, etc.) and if there are deviations, this is a potential opportunity to profit. The limitation of such approaches, of course, is the fact that it’s never possible to quantify all characteristics of a company (think about stuff like corporate culture). So you can never be sure if deviations from the peer-implied value are indeed mispricings or justified by unconsidered variables.

Nonetheless, throwing all fundamental variables into a simple linear regression is definitely a plausible starting point and avoids a lot of data-snooping problems. To implement this methodology, the authors obtain fundamental and market data from Compustat and CRSP, respectively. The sample ranges from March 1987 to December 2012 and includes all US companies with a stock price above $5 and available data on all accounting variables. The authors also exclude the Financial Services industry as those companies have very different financial statements.2

The final agnostic valuation regression explains market capitalization by 28 fundamental variables. Note that the only purpose of this regression is estimating fitted values, i.e. the peer-implied fundamental values. Because it includes so many, often strongly related, variables, the regression coefficients are almost certainly not meaningful.3 In addition, accounting data typically suffers from extreme outliers, which also distort the estimates. The authors mitigate this issue by winsorizing variables at their 5% and 95% quantiles. They also stress-test their results with an alternative regression technique (the so-called Theil-Sen (TS) estimator) that is less sensitive to outliers.
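A winsorization step along these lines could look as follows (a minimal sketch, assuming the 5%/95% quantile cutoffs mentioned above; the helper is mine, not the authors’):

```python
import pandas as pd

def winsorize(s: pd.Series, lower: float = 0.05, upper: float = 0.95) -> pd.Series:
    """Clip a variable at its lower and upper sample quantiles
    to reduce the influence of extreme accounting outliers."""
    lo, hi = s.quantile(lower), s.quantile(upper)
    return s.clip(lo, hi)
```

For the robustness check, something like scikit-learn’s TheilSenRegressor could serve as a drop-in replacement for plain OLS, since the Theil-Sen estimator is far less sensitive to outliers.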

Finally, the authors calculate a mispricing signal as the relative difference between the peer-implied “fair” value and the current market capitalization. The higher (lower) this signal, the more undervalued (overvalued) the stock. With these signals, they construct a systematic investment strategy that bets on the convergence of peer-implied fundamental values and market prices. For that purpose, the authors sort stocks according to their mispricing signal into quintiles and the following table provides summary statistics about some characteristics in each group.

Table 1 of Bartram and Grinblatt (2018).

The first observation is the magnitude of the mispricing signals. For the most overvalued stocks (Q1 Overvalued), the signal is on average -2.0253 over the entire sample period. For the most undervalued stocks (Q5 Undervalued), it is 5.3858. Expecting such dramatic returns from the convergence of price and value is of course unrealistic. After all, a stock can only fall by 100%, not 202.53%… The authors argue that this is a direct consequence of the simple linear regression and the volatile data. Therefore, the absolute magnitude of the mispricing signal is almost useless and the authors just use it to rank the relative attractiveness of stocks in the universe.
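One plausible way to implement the signal and the quintile sort is sketched below. The exact scaling in the paper may differ; here the signal is simply (fair value − market cap) / market cap, so positive values mean undervalued:

```python
import pandas as pd

def quintile_sort(market_cap: pd.Series, fair_value: pd.Series) -> pd.DataFrame:
    """Mispricing signal and quintile buckets for one cross-section."""
    signal = (fair_value - market_cap) / market_cap  # > 0 => undervalued
    quintile = pd.qcut(signal, 5,
                       labels=["Q1 Overvalued", "Q2", "Q3", "Q4", "Q5 Undervalued"])
    return pd.DataFrame({"signal": signal, "quintile": quintile})
```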

Apart from that, it is interesting that undervalued stocks tend to be smaller. The average market capitalization for Q5 is only $381.3m and almost 90% lower than the $3,541.8m for Q1. Undervalued stocks also tend to have lower betas and higher gross profitability. In my opinion, this is a very nice result because it is consistent with discounted cash flow logic. All else equal, higher profitability and lower betas lead to higher cash flows and a lower discount rate, respectively. Both effects increase the (theoretical) value of a company. Finally, undervalued stocks tend to have negative exposure to momentum (the return from t-1 to t-11 is much lower). I don’t think this is too surprising.

Important Results and Takeaways

Undervalued stocks outperformed overvalued stocks by about 0.5% per month

To test if their agnostic fundamental analysis works and if there are indeed profits from the convergence of price and peer-implied fundamental value, the authors present backtests for the quintile portfolios. The following table summarizes average monthly returns for equal- and value-weighted portfolios for the entire sample period and two sub-sample periods.

Table 2 of Bartram and Grinblatt (2018).

In line with the idea of convergence, monthly returns increase almost monotonically over the quintiles. Undervalued stocks outperform overvalued stocks on average by about 0.5% per month, which translates into a difference of more than 6% per year. These differences are statistically significant and apparently also not concentrated in certain months. Undervalued stocks outperformed quite consistently in more than 50% of the sample period. Also note that the performance difference was even stronger for the more robust regression technique (final column “TS”), where the monthly return differences reach up to 0.79% for equal-weighted portfolios.
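As a quick sanity check of the numbers above, compounding a 0.5% monthly spread gives a bit more than 6% per year:

```python
# Compound a 0.5% monthly return spread into an annual figure.
monthly_spread = 0.005
annualized = (1 + monthly_spread) ** 12 - 1
print(f"{annualized:.2%}")  # roughly 6.17%
```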

In summary, the simple agnostic fundamental analysis worked in the past and would have generated considerable profits. Given the discussion above, the authors regard this result as evidence against the efficient market hypothesis as current prices apparently do not reflect all fundamental data. This is encouraging for fundamental investors as the results suggest that their effort paid off.

Agnostic fundamental analysis yielded significant alpha

Apart from the monthly “raw” returns, the authors also benchmark the strategy against several known return predictors. The following table shows the factor loadings and alphas of the mispricing portfolios with respect to an 8-factor model.4 I will obviously not comment on every single number in the table but rather focus on the big picture.

Table 4 of Bartram and Grinblatt (2018).

The long-short “Undervalued-Overvalued” portfolio generated significant alpha against all factors for both equal- and value-weighting. This indicates that the idea of agnostic fundamental analysis reaches beyond known return predictors and seems to be indeed something new. Depending on the specification, monthly alphas reach up to 0.87% and therefore suggest strong profits. As in the previous results, the more robust regression technique (“TS”) again worked better. In my opinion, this is quite reasonable because more sophisticated methodology should be compensated in a competitive market.
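The alpha test behind these numbers is a standard time-series regression of the long-short portfolio’s returns on the factor returns; a minimal sketch (with hypothetical inputs, not the paper’s data) could look like this:

```python
import numpy as np

def alpha_from_factors(returns: np.ndarray, factors: np.ndarray) -> float:
    """OLS of portfolio returns on factor returns; the intercept is the alpha."""
    X = np.column_stack([np.ones(len(returns)), factors])
    coef, *_ = np.linalg.lstsq(X, returns, rcond=None)
    return float(coef[0])  # intercept = monthly alpha
```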

Another interesting and important observation is that most of the alpha comes from the short side of the strategy. The “Q5 Undervalued” portfolio did not produce significant alpha in any of the specifications while the “Q1 Overvalued” portfolio did so for all of them. This suggests that it is more profitable to short the most expensive stocks than to buy the cheapest ones. Note, however, that these results are before transaction costs, and implementation challenges might explain some of these higher profits.

Conclusions and Further Ideas

As I mentioned in the introduction, I really like the paper because it provides a different perspective and tries to systematically model what a fundamental analyst does. In addition to that, I also believe it is encouraging to see robust empirical evidence that even very simple forms of fundamental analysis “worked” historically. Nevertheless, there are some open issues remaining.

The first one, of course, is the methodology. The authors explicitly try to make as few assumptions as possible but there are some things you can’t change. For example, linear regression is very sensitive to outliers and even after winsorizing, the data remains messy.5 Another issue is the choice of input variables. It is of course perfectly reasonable to just include all fundamental variables, but on the other hand, we know that momentum is also a desirable characteristic of stocks. So why not include it in the regression as well?

The second big issue, and you know this from my other posts, is the lack of an out-of-sample test. So far, the authors have only analyzed the US market. To test whether agnostic fundamental analysis indeed “works”, we need to find similar results in international markets.

The good news is that the authors and some others already addressed both issues in two follow-up papers. In Part 2 of this series, we will continue with international out-of-sample tests and look at the exact same analysis within global equity markets (>25,000 stocks in 36 countries). In Part 3, we will address the methodology and look at a paper that replaces the simple linear regression with state-of-the-art machine learning models.

This content is for educational and informational purposes only and no substitute for professional or financial advice. The use of any information on this website is solely at your own risk and I do not take responsibility or liability for any damages that may occur. The views expressed on this website are solely my own and do not necessarily reflect the views of any organisation I am associated with. Income- or benefit-generating links are marked with a star (*). All content that is not my intellectual property is marked as such. If you own the intellectual property displayed on this website and do not agree with my use of it, please send me an e-mail and I will remedy the situation immediately. Please also read the Disclaimer.


1 It is generally very difficult (if not impossible) to test the efficient market hypothesis because we never know the “fair price” of a security with certainty.
2 For example, banks don’t have operating income and, by their nature, have much higher debt ratios.
3 For the geeks, the regression suffers from multicollinearity.
4 The authors add momentum, short-term reversal, and long-term reversal to the Fama-French 5-factor model. So this is a quite comprehensive test.
5 Think for example about Apple’s two trillion market cap. This is 10,000x the value of a $200m small cap.