A member recently asked me if a screening strategy with fewer criteria performs better than one with many criteria. As irony would have it, a few days after I was asked this question, Wesley Gray and his colleagues at Alpha Architect published a paper on SSRN comparing several of the value-oriented AAII Stock Screens to a simple valuation model. The study’s results are not an apples-to-apples comparison to the way we track the performance of the screens (I’ll discuss the differences momentarily), but it did find that only our Piotroski High F-Score screen fared as well as a screen that simply seeks non-financial stocks with low ratios of TEV (total enterprise value) to EBITDA (earnings before interest, taxes, depreciation and amortization) — equivalently, stocks with high EBITDA/TEV yields.
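The valuation measure behind that simple screen can be sketched in a few lines. This is a minimal illustration, not the study’s actual methodology: the tickers and dollar figures are hypothetical, and TEV is simplified to market capitalization plus debt minus cash.

```python
# Sketch: ranking stocks by EBITDA/TEV. Higher values indicate cheaper stocks,
# so a value screen keeps the top of this ranking. All data is hypothetical.

def ebitda_to_tev(ebitda, market_cap, debt, cash):
    """EBITDA divided by a simplified total enterprise value."""
    tev = market_cap + debt - cash  # simplified TEV: equity + debt - cash
    return ebitda / tev

companies = [
    # (ticker, ebitda, market_cap, debt, cash) -- all in $ millions, made up
    ("AAA", 120, 900, 300, 100),
    ("BBB", 80, 1500, 200, 250),
    ("CCC", 200, 1100, 500, 150),
]

# Sort so the cheapest stocks (highest EBITDA/TEV) come first.
ranked = sorted(
    companies,
    key=lambda c: ebitda_to_tev(c[1], c[2], c[3], c[4]),
    reverse=True,
)

for ticker, *financials in ranked:
    print(ticker, round(ebitda_to_tev(*financials), 3))
```

A real implementation would pull these inputs from financial statements for the whole stock universe; the ranking step itself is this simple.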
The challenge with any strategy is making it investable. It is quite common for an analysis of indicators to divide the results into deciles, or 10 evenly split groups ranked from lowest to highest. Even if the universe of stocks studied for the analysis is narrowed in some fashion, each decile may still contain far more stocks than the average individual investor is willing to hold or can cost-effectively hold. (In Gray’s study, the EBITDA/TEV screen identified an average of 96 stocks.) There is also a behavioral aspect to consider: How willing are you to hold stocks that are otherwise unattractive?
This is where additional criteria can be beneficial. Adding criteria to a screen narrows the list of passing stocks down to a manageable level. More importantly, undesirable traits can be weeded out. Gray’s ValueShares US Quantitative Value ETF (QVAL) overlays economic moats and financial strength on top of valuation measures.
An element of human interaction is helpful as well. A good screen only knows what it is told to look for. It knows nothing beyond its criteria. As such, a screen can have historical or backtested results and still identify undesirable stocks. After all, companies can experience surprises (both good and bad) that are beyond the scope of the screen. To get around this unsystematic risk, you need to build a large enough portfolio (e.g., 15 or more stocks).
So, what’s the maximum number of criteria you should use in a stock screen or a stock selection strategy? The answer partially depends on what you count as being a criterion. Technically, the Piotroski High F-Score screen only looks for stocks with a minimum F-Score that are not excluded by three other restricting criteria. The F-Score itself, however, is based on nine different parameters. Joel Greenblatt’s Magic Formula screens for stocks with return on capital greater than 25% and selects the 30 with the highest earnings yield. This sounds like just two criteria, but return on capital for this screen requires calculating tangible capital from five balance sheet items, and earnings yield is calculated by dividing earnings before interest and taxes by enterprise value (as opposed to merely the inverse of the price-earnings ratio).
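The distinction between Greenblatt’s earnings yield and the inverse price-earnings ratio can be made concrete. The figures below are hypothetical, and enterprise value is simplified to market capitalization plus debt minus cash; the point is that the two measures can disagree for a leveraged company.

```python
# Sketch contrasting Greenblatt-style earnings yield (EBIT / enterprise value)
# with the simple inverse of the price-earnings ratio. All figures hypothetical.

def earnings_yield_ebit_ev(ebit, market_cap, debt, cash):
    """EBIT divided by a simplified enterprise value (equity + debt - cash)."""
    return ebit / (market_cap + debt - cash)

def inverse_pe(net_income, market_cap):
    """Simple E/P: earnings divided by market capitalization."""
    return net_income / market_cap

# A company carrying significant debt, in $ millions (made-up numbers):
ebit, net_income = 100.0, 60.0
market_cap, debt, cash = 500.0, 400.0, 50.0

# The debt raises enterprise value, lowering the EBIT/EV yield relative
# to what the equity-only E/P measure suggests.
print(round(earnings_yield_ebit_ev(ebit, market_cap, debt, cash), 3))
print(round(inverse_pe(net_income, market_cap), 3))
```

Because EBIT/EV accounts for debt in the denominator, it avoids flagging heavily leveraged companies as cheap purely on the basis of their equity price.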
The answer further depends on what you want to exclude from your results. Greenblatt’s seemingly simple strategy was initially based on a database containing only exchange-listed stocks; replicating it with a broader database therefore requires a criterion to exclude over-the-counter stocks. Our Model Shadow Stock Portfolio screen has restricting criteria to omit ADRs, financial stocks, Chinese stocks, limited partnerships and stocks with share prices below $4.
There is a point at which a screen will fail to identify a sufficient number of stocks, or any stocks at all, because it is so restrictive. This can occur when too many criteria are used. A balance can be found by selecting the key traits you want in a stock (e.g., low valuation, earnings growth, price momentum, dividends, etc.) and overlaying additional criteria to omit stocks you want to absolutely avoid (e.g., over-the-counter stocks). Don’t obsess over the number of criteria used in your screen, but rather focus on the general characteristics you desire in a stock. In other words, there isn’t a magic number of criteria you should target.
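The balance described above — a few key-trait criteria plus a short list of exclusions — can be sketched as a simple filter. The data, thresholds, and field names here are hypothetical, chosen only to mirror the examples in the text (low valuation, earnings growth, no OTC stocks, no sub-$4 share prices).

```python
# Minimal screening sketch: key-trait criteria plus exclusion criteria.
# Tickers, financial data, and thresholds are all hypothetical.

stocks = [
    {"ticker": "AAA", "pe": 9,  "eps_growth": 0.12, "exchange": "NYSE",   "price": 25.0},
    {"ticker": "BBB", "pe": 7,  "eps_growth": 0.20, "exchange": "OTC",    "price": 12.0},
    {"ticker": "CCC", "pe": 30, "eps_growth": 0.25, "exchange": "NASDAQ", "price": 80.0},
    {"ticker": "DDD", "pe": 11, "eps_growth": 0.08, "exchange": "NYSE",   "price": 3.0},
]

def passes(stock):
    # Key traits: low valuation and meaningful earnings growth.
    if stock["pe"] >= 15 or stock["eps_growth"] <= 0.05:
        return False
    # Exclusions: no over-the-counter stocks, no sub-$4 share prices.
    if stock["exchange"] == "OTC" or stock["price"] < 4.0:
        return False
    return True

survivors = [s["ticker"] for s in stocks if passes(s)]
print(survivors)  # ['AAA']
```

Each added condition shrinks the passing list, which is exactly why piling on criteria eventually leaves nothing: the exclusions should target only stocks you absolutely want to avoid.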
I looked at the AAII screens to see if there was any noticeable trend in the number of criteria used, as I sensed that some of you would feel unsatisfied without an actual number. Out of the 10 screens with the best performance from inception, seven used either eight or nine criteria based on what is listed on AAII.com. I view this more as coincidence than anything else, especially given the diversity of how those screens approach stock selection.
As far as the results published in the study mentioned above are concerned, Gray and his colleagues restricted the analysis to stocks with market capitalizations ranking in the largest 60% of all NYSE-listed stocks. When no stocks were identified, the portfolio balance was allocated to a universe of mid- and large-cap stocks. The stock screen results we show on AAII.com do not have market capitalization restrictions, unless specifically part of a given screen. When no stocks are identified, we treat the portfolio balance as being allocated to cash. Furthermore, the Gray study looked at returns for the period of 1963 through 2013, whereas the results on AAII.com are calculated from the beginning of 1998 through the most recently completed calendar month. (The authors do caution in their study that their returns may differ drastically from what appears on AAII.com.)

The views and opinions expressed herein are the author's own, and do not necessarily reflect those of EconMatters.
About The Author - Charles Rotblut, CFA is the VP and Editor for American Association of Individual Investors (AAII). Charles is also the author of Better Good than Lucky. (EconMatters author archive here)
© EconMatters All Rights Reserved