AgPa #32: Agnostic Fundamental Analysis (3/3)

Boosting agnostic fundamental analysis: Using machine learning to identify mispricing in European stock markets (2022)
Matthias X. Hanauer, Marina Kononova, Marc Steffen Rapp
Finance Research Letters 48, URL/SSRN

The third and final post about agnostic fundamental analysis. After establishing the idea with US data in the first week and testing it internationally in the second, this week’s AGNOSTIC Paper challenges the methodology and introduces vastly improved valuation models.

  • Part 1: Agnostic Fundamental Analysis in the US
  • Part 2: Agnostic Fundamental Analysis around the World
  • Part 3: Agnostic Fundamental Analysis with Modern Statistical Tools

Similar to last week, I will not repeat the main idea behind agnostic fundamental analysis but rather focus on the specific contributions of this week's paper. Once again, I therefore recommend reading the series chronologically.

Everything that follows is only my summary of the original paper. So unless indicated otherwise, all tables and charts belong to the authors of the paper and I am just quoting them. The authors deserve full credit for creating this material, so please always cite the original source.

Setup and Idea

One thing I kept mentioning in the last two posts was methodology. Bartram and Grinblatt, the authors of the first two papers, use a very simple regression to explain the market capitalization of firms with more than 20 fundamental variables from the balance sheet, income statement, and cash flow statement. To mitigate the impact of outliers, they also introduce a more robust Theil-Sen regression. But overall, their valuation approach is quite simple and relies solely on linear models.
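To make the baseline concrete, here is a minimal sketch of such a cross-sectional valuation regression. This is only my illustration, not the authors' code: the column names are hypothetical placeholders, and the list of fundamentals is truncated (the papers use more than 20).

```python
# Sketch of a Bartram-Grinblatt-style cross-sectional valuation regression.
# Column names are hypothetical placeholders, not the paper's variables.
import pandas as pd
from sklearn.linear_model import LinearRegression, TheilSenRegressor

FUNDAMENTALS = ["total_assets", "net_income", "common_equity"]  # truncated

def peer_implied_values(cross_section: pd.DataFrame, robust: bool = False) -> pd.Series:
    """Regress market cap on fundamentals within one month's cross-section
    and return the fitted ("fair") values for every firm."""
    X = cross_section[FUNDAMENTALS]
    y = cross_section["market_cap"]
    # Theil-Sen as the outlier-robust alternative to plain OLS.
    model = TheilSenRegressor() if robust else LinearRegression()
    model.fit(X, y)
    return pd.Series(model.predict(X), index=cross_section.index)
```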

There is of course no evident reason why the relation between market capitalization and fundamentals should be linear. In fact, the relation is almost certainly more complicated in practice.¹ I also mentioned that I don't want to criticize the authors for their methodology. Their goal is to publish an empirical fact based on a reasonable, transparent, and replicable analysis. Bartram and Grinblatt even acknowledge that there is potential for better methodology but deliberately leave that open for future researchers and practitioners. The nice thing about a competitive field like finance is that it doesn't take long for others to accept the challenge.

Data and Methodology

The key idea of this third paper on agnostic fundamental analysis is to replace the simple linear regression with more sophisticated machine learning models. Specifically, the authors use LASSO, random forests, gradient boosting, and an ensemble of the latter two (LASSO, RF, GBRT, and Combi, respectively). These models are more powerful than simple linear regression because they capture non-linear relations and deal better with noisy inputs, both of which are relevant for company fundamentals. There are of course even more sophisticated machine learning models. However, the authors picked these deliberately because they remain relatively simple and easy to use.
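In scikit-learn terms, the four model families could look roughly like the sketch below. The hyperparameters are my placeholders, not the authors' choices, and I assume Combi is an equal-weighted average of the RF and GBRT predictions.

```python
# Sketch of the paper's four model families; hyperparameters are placeholders.
from sklearn.linear_model import LassoCV
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor

def fit_models(X_train, y_train):
    models = {
        "LASSO": LassoCV(cv=5),
        "RF": RandomForestRegressor(n_estimators=500, random_state=0),
        "GBRT": GradientBoostingRegressor(n_estimators=500, random_state=0),
    }
    for model in models.values():
        model.fit(X_train, y_train)
    return models

def predict_combi(models, X):
    """Combi: equal-weighted average of the RF and GBRT predictions (my
    assumption for the ensembling; the paper combines these two models)."""
    return (models["RF"].predict(X) + models["GBRT"].predict(X)) / 2
```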

The authors apply those models to a sample of 8,121 European stocks over the period from 1987 to 2019. They also apply the same filters as in the previous papers, removing the Financial Services sector and firms without data on all required fundamental variables. In addition, they exclude companies with a market capitalization below $10m (micro caps).

Why do the authors limit their analyses to a European sample? From reading the paper, I see two reasons. First, they want a non-US setting, as such studies are considerably underrepresented in the literature.² Second, they want to examine the results of last week's international paper in more detail. In that paper, Bartram and Grinblatt actually find negative alpha in their European sub-sample. Once again, it is a nice feature of a competitive field like finance that people observe such results and dig deeper.

Except for the valuation model(s), however, the methodology stays fairly close to the two originals. The authors use their models to estimate peer-implied fundamental values and calculate mispricing signals as the percentage deviation between the current market capitalization and the estimated "fair" value. They also construct a trading strategy that bets on the convergence of price and "fair" value and goes long (short) the most undervalued (overvalued) stocks in each month.
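In code, the signal and portfolio construction could look like this minimal sketch. The sign convention (positive signal = undervalued) and the quintile breakpoints are my assumptions for illustration; the column names are hypothetical.

```python
# Sketch of the mispricing signal and quintile portfolio formation.
# `df` is assumed to hold one row per firm with its current `market_cap`
# and the model-implied `fair_value`.
import pandas as pd

def mispricing_signal(df: pd.DataFrame) -> pd.Series:
    # Percentage deviation of "fair" value from price: positive = undervalued.
    return (df["fair_value"] - df["market_cap"]) / df["market_cap"]

def long_short_legs(df: pd.DataFrame) -> tuple[pd.DataFrame, pd.DataFrame]:
    quintile = pd.qcut(mispricing_signal(df), 5, labels=False)  # 0 (Q1) .. 4 (Q5)
    long_leg = df[quintile == 4]   # most undervalued stocks (Q5)
    short_leg = df[quintile == 0]  # most overvalued stocks (Q1)
    return long_leg, short_leg
```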

Table 1 of Hanauer et al. (2022).

The table above shows some characteristics of quintile portfolios for the different methodologies. Panel B also reports pairwise correlations. Before coming to the results, I want to highlight one important detail. To train their machine learning models, the authors use not only the most recent fundamentals but also 48 months of historical data. The machine learning models therefore "see" more data than the linear regression approach of Bartram and Grinblatt (LR(BG)). For a fair comparison, the authors thus also estimate a linear regression with 48 months of historical data (LR(pooled)).
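To illustrate the difference, here is a sketch of the pooled training window. Whether the current month itself is part of the training sample is an implementation detail I am guessing at here.

```python
# Sketch of building a pooled training set from 48 months of firm-month data.
# `panel` is assumed to be a DataFrame with a monthly PeriodIndex level "month".
import pandas as pd

def pooled_training_set(panel: pd.DataFrame, t: pd.Period, window: int = 48) -> pd.DataFrame:
    """Stack the firm-month observations of the `window` months up to and
    including month t into one training sample (vs. month t alone for LR(BG))."""
    months = panel.index.get_level_values("month")
    return panel[(months > t - window) & (months <= t)]
```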

Apart from that, the results are quite similar to the original papers. The absolute values of the mispricing signals remain meaningless, although the machine learning models fit the data better and produce less extreme estimates. Also similar to the two original papers, the most undervalued stocks (Q5) tend to be smaller and score worse on momentum.

Important Results and Takeaways

More sophisticated valuation models yielded better performance

Table 2 of Hanauer et al. (2022).

Panel A of this table shows monthly industry-adjusted returns of the quintile portfolios. Panel B shows alphas of the long-short portfolio for the different valuation methodologies. The numbers clearly show that monthly return spreads and alphas increase with more sophisticated valuation models. For example, while the average monthly industry-adjusted return for the original LR(BG) approach is "just" 0.36%, it rises to 0.6% for the Combi of the two machine learning models. The pattern is even more pronounced for alphas. The most robust alpha estimate for the LR(BG) methodology is an insignificant 0.11% per month. For the Combi of machine learning models, this turns into a highly significant 0.49%.

So overall, the results clearly suggest that (at least in this sample) more sophisticated valuation models were rewarded with higher risk-adjusted returns. In my opinion, this is completely reasonable and consistent with my expectations. Apart from that, the results also contradict last week's negative alpha for European stocks. The authors don't comment on this specifically, but the difference could be due to the different sample periods (1987-2019 versus 1993-2016). Also note that the returns in the table are before trading costs, which may explain some of the difference as well.

Different models emphasize different fundamental variables

Figure 1 of Hanauer et al. (2022).

The authors also borrow some machine learning tools to interpret their models and attempt to identify the most relevant fundamental variables. The charts above summarize the results. I will not go into details, but as a general rule, the variables with the highest SHAP values contribute the most to the predictions. Note, however, that this approach does not establish any causality.
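For the tree-based models, such importance rankings can be computed with the `shap` package. A minimal sketch, assuming a fitted RF or GBRT model and a feature matrix `X` of fundamentals:

```python
# Sketch of a global importance ranking via mean absolute SHAP values.
import numpy as np
import shap

def shap_importance(model, X) -> np.ndarray:
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)  # shape: (n_firms, n_fundamentals)
    # Average absolute contribution of each fundamental across all firms.
    return np.abs(shap_values).mean(axis=0)
```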

Looking at the bar charts, the most relevant variables differ considerably across the valuation models. For LR(BG) and LR(pooled), Net Income and Total Assets seem to be the most important. The machine learning models, in contrast, draw on many more variables. Net Income and Total Assets are still among the Top 10, but other variables like Pre-Tax Income, Common Equity, and Total Dividends are equally or even more important.

In my opinion, these results are quite interesting. The identified variables are mostly in line with traditional fundamental ideas like P/E multiples or dividend-discount models. In addition, the different rankings across methodologies suggest that there is no single "right" valuation model. This is again quite similar to human fundamental analysts, who also use different model specifications and inputs for their analyses. While this may sound plausible, please keep in mind that the analysis doesn't establish any causality.

Conclusions and Further Ideas

Given that we are now at the end of this series, it is time to answer the question why I picked these papers. The main reason is the, in my opinion, novel perspective. In the conclusion of their 2018 paper, Bartram and Grinblatt write that their study "[…] is not another anomaly paper because our approach differs […]". And this is true. Anomaly papers typically focus on one or a few variables that somehow explained stock returns. There is nothing wrong with that, and such research has tremendously advanced our knowledge of financial markets.³ But it is still somewhat inconsistent with the approach of most fundamental investors.

In theory, investors should gather all available information about a stock, process it into an estimate of the "fair" price, and trade on their insights.⁴ The (in my opinion) cool thing about the agnostic fundamental analysis papers is that they provide a transparent framework to simulate this behavior. Even more importantly, they allow us to test empirically whether fundamental analysis is actually worth the effort (given the alphas from the price-value convergence strategy, it is!).

In some sense, this is the fundamental counterpart to my post on technical analysis. The authors of that AGNOSTIC Paper present an image-recognition model that "looks" at price charts to examine technical analysis. A different approach, but the same underlying idea: simulate what human investors actually do and test whether it makes sense.⁵ I really like this development toward more holistic approaches in finance research.

With respect to fundamental analysis, there are (in my opinion) three important takeaways. First, it actually worked. While no serious fundamental investor probably ever doubted that, I believe it is still nice to see it in the data. Second, fundamental analysis worked even better in less efficient areas of the market, for example small caps or emerging markets. Third and finally, smarter methodology seems to be rewarded. In my opinion, all of this is promising news for both discretionary and systematic investors.




Endnotes

¹ Think about debt, for example. A little debt is usually no problem, but after a certain point, things can get very bad quickly.
² This way, the paper is not only an out-of-sample test with different methodology but also with different data.
³ It is hard to criticize the anomaly literature. Researchers received Nobel prizes and several factor investors became very rich. So don't get me wrong, there is a lot of substance there.
⁴ Note that the idea of efficient markets also works if not all investors look at all information. If one investor just looks at historic prices and another one just at fundamentals, the aggregate price still reflects all information.
⁵ A combination of the two approaches could be very interesting. Trading on price signals of undervalued stocks…