
Interviews for Intelligent Investors

A Conversation with Quantitative Value Investors: Toby Carlisle & Wes Gray.
Miguel A Barbosa 2013

Table of Contents
Part 1: Introduction
    The Historical Case For Quantitative Value Investing
    Models vs Experts
Part 2: The Quantitative Value Model
    The Checklist Approach To Quantitative Value Investing
    Cheapness is Everything
    Obstacles
    Protecting Capital
    Criticisms & Misconceptions
Part 3: Applying The Quantitative Value Model
    Rebalancing, Holding Cash, & Managing Taxes
    International Investing
Part 4: Closing Thoughts
Appendix: Toby & Wes on Joel Greenblatt's Magic Formula

Part 1: Introduction
Miguel: My name is Miguel Barbosa. With me are my friends Toby and Wes. They've written a book called Quantitative Value: A Practitioner's Guide to Automating Intelligent Investment and Eliminating Behavioral Errors. Thanks for joining me.

Toby: Pleasure.

Wes: Thank you for having us. Thanks.

Miguel: What question does your book answer?

Wes: The book is about answering the question: what is the most efficient and effective way to screen for stocks? The initial answer is that it probably involves a value approach. The philosophy of trying to buy stocks that are cheap works, and there's a lineage of research and evidence supporting this approach.

Miguel: So tell me more about the research on why value works.

Toby: Value strategies outperform, but not because cheap stocks are more risky. It's more likely that value strategies exploit the naive strategies that other investors employ. A naive investor might not fully appreciate the phenomenon of mean reversion: an undervalued stock tends to revert toward the average valuation. Value investing seeks to exploit the behavioral and suboptimal errors that other investors make.

Miguel: So why did you decide to use a quantitative approach along with a value approach?

Wes: Although we wanted to screen for cheap stocks, screening for cheapness without accounting for risk concerned us. We were also worried about our own behavioral biases. So we decided to use a quantitative model to remain objective and disciplined. We married the two approaches, value investing and quantitative model trading, and call it quantitative value. We argue that quantitative value investing is the most efficient and effective way to screen for stocks.

The Historical Case For Quantitative Value Investing


Miguel: So you are combining value investing (a la Graham/Buffett) with the quantitative approach (from Ed Thorp). Tell me more about the historical basis for using the quantitative value approach.

Wes: There's certainly a precedent, starting with Ben Graham. In much of his writing he outlines quantitative value strategies. Graham's strategies are not as sophisticated as ours, not because he didn't know about this stuff, but because he didn't have computers or the ability to mine databases. Graham's principles, focusing on simple value indicators and maintaining discipline, still apply. Of course, another value approach that developed from Graham's was Warren Buffett's, which evolved to analyzing the whole spectrum of value. Buffett realized that although value investing could mean purchasing a manufacturing firm at five times earnings, it could also mean buying Coca-Cola at 15-20 times earnings, because the business may have some sort of economic moat that can generate enormous returns on capital. What we are doing is taking Ben Graham's and Buffett's principles to the next level.

Models vs Experts
Miguel: Tell me about your argument for quantitative models beating experts.

Toby: We already mentioned that value investing tends to outperform because of behavioral mistakes made by investors. Unfortunately, these errors apply to both inexperienced and experienced investors (i.e., experts). It turns out that there's a lot of research on experts making errors. Most of this research exists outside of finance, but it applies to financial decision-making. One research study along these lines involves a simple test given to patients as they were admitted to hospitals, to determine whether they were psychotic or depressed. Apparently the symptoms are quite similar. In this study, the results of a simple test were compared against the diagnoses of inexperienced and experienced doctors. They found that experienced doctors make the correct diagnosis more often than inexperienced doctors. But the simple model, a few questions administered by an inexperienced person, performed even better than the most experienced doctor. The really interesting thing is that in a repeat study, experienced and inexperienced doctors were provided with a copy of the test and asked to administer it. What they found is that although the doctors' ability to diagnose improved, the simple test still outperformed both the experienced and the inexperienced doctors. The conclusion was that simple models outperform even the best experts.

Wes: In general, experts or people doing forecasting get evidence and set up their prediction. Forecasters then collect additional information, but, on average, this extra information doesn't improve the accuracy of their forecast. It does, however, increase their confidence in the forecast. In the end, the forecaster's new prediction is no more accurate than the original, but they are much more confident (often overconfident) in it. This can lead to erroneous decision-making. An example is a study that we discuss in the book, which looks at horserace handicapping. This study is relevant because, like investors, handicappers have to make predictions under uncertainty. The experiment goes as follows: researchers asked handicappers for 20 factors that would determine the outcome of a race. Researchers then broke these 20 factors into five groups of four and randomized them, so each handicapper would get a different group of variables. Each handicapper was given pieces of information about a horse race and was required to make a prediction about the outcome. They were subsequently provided with another group of factors, asked whether they wanted to change their prediction, and asked how confident they felt. What the researchers found is that accuracy (predictions versus actual outcomes) didn't improve when handicappers were provided with additional blocks of information, but their confidence in the accuracy of their predictions did increase.

Toby: The reason this occurs is that as we collect additional information, we filter it on the basis of the information we already understand. We create a story, and as we encounter new information we accept it or reject it based on whether it fits our initial story. So two experts could be sitting next to each other with different perspectives generated from different pieces of initial information, and they wouldn't change their minds even if the next piece of information one received was the very piece the guy sitting beside him had received.

Wes: What Toby is describing happened to me when I was a stock picker. I remember a stock I invested in called Combimatrix. I was like, "Oh man, this is great." I did my research, visited the company, met the CEO, and so on. I uncovered everything you could ever want to know about the company. Well, I lost 95% of my investment, and it was because of what Toby is describing. Every single time I collected more information I was more and more convinced that it was a great investment. Unfortunately, all my additional research was reinforcing my original hypothesis and making me more confident. As the stock dropped, I bought more. How could I lose? I think this phenomenon, more information translating into higher confidence but no corresponding increase in accuracy, represents a huge danger for bottom-up investors. These investors often think they are conducting value-added research when in fact they are just convincing themselves of the soundness of an ill-formed idea.

Miguel: Yeah, my ex-boss used to say, "Within the first 30 minutes to an hour I know if it's the type of thing I want to buy or not, and then I'm just looking for details to disprove my thesis." And he used to tell me all the time, "Be careful, because you can know too much." This happened to me often. I would go into such detail that I lost all sense of perspective. I couldn't zoom out and weigh the factors that really mattered.

Wes: Yeah.

Part 2: The Quantitative Value Model


Miguel: Let's talk about your quantitative model. What data sets did you use, and what universe did you cover?

Wes: The data sets we use are pretty much the standards in academic and practitioner research. For fundamentals data we use Compustat, and for return data as well as delisting data we use CRSP, the Center for Research in Security Prices. As for the universe of stocks, we know that you can make the returns look very favorable (i.e., 20%-25% CAGRs) when you include microcaps, small caps, and so on. Our research process didn't involve data mining for rare artifacts. We looked for robust strategies that could work in liquid markets. This way we could develop strategies that were believable and usable for 99% of people (and institutions). To do this, we focused our analysis and strategies on a market cap breakpoint at the 40th percentile. This means that every year on June 30th (before rebalancing) we look at the entire universe of NYSE stocks at that point in time. We then percentile-rank stocks based on their market caps and use the 40th percentile rank as the minimum cutoff for our universe. As of December 31, 2012, the cutoff was around $1.5 billion, so the model includes stocks that are pretty large. It's really important to dynamically adjust the market cap cutoff over time, because everything is relative to a point in time. $1.5 billion is not huge today, but 50 years ago it might have represented the largest firm in the market.

Miguel: You mention that people misunderstand the dynamic nature of your model. What do you mean?

Wes: Let's use an example. Say that after performing some sort of liquidity cutoff we have 1,000 stocks in our universe. First, we kick out firms that have a potential for permanent loss of capital. This removes approximately 10% of firms in our sample, so now we have 900 firms. Next, we filter for cheapness by decile and go to the cheapest decile, which leaves only 90 firms. We then calculate quality measures for every one of those 90 firms and rank them. Finally, we buy the top half of the 90 cheapest firms, so our portfolio for the year would hold 45 names. If you start with a different universe at the initial step, say 500 stocks, the portfolio may only contain 22-23 firms. That's what we mean by the model being dynamic: the construction of portfolios depends on the initial sample as well as the number of firms that get eliminated. Historically, our portfolios hold between 30 and 40 names.
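To make the funnel concrete, here is a minimal pandas sketch of the cutoff and the filtering steps Wes describes. All column names ('market_cap', 'risk_flag', 'ebit_ev', 'quality') are illustrative assumptions, and for simplicity the breakpoint is taken from the passed universe rather than from NYSE names only, as the authors actually do:

```python
import pandas as pd

def build_portfolio(universe: pd.DataFrame) -> pd.DataFrame:
    """Sketch of the dynamic funnel: one row per stock, with assumed
    columns 'market_cap', 'risk_flag' (True if a fraud/manipulation/
    distress model flags the firm), 'ebit_ev' (EBIT / enterprise
    value), and 'quality' (higher = better)."""
    # Step 0: liquidity cutoff -- keep stocks at or above the 40th
    # percentile of market caps (recomputed at every rebalance).
    cutoff = universe["market_cap"].quantile(0.40)
    stocks = universe[universe["market_cap"] >= cutoff]

    # Step 1: remove firms with a potential for permanent loss of
    # capital (historically roughly 10% of the sample).
    stocks = stocks[~stocks["risk_flag"]]

    # Step 2: keep only the cheapest decile on EBIT/EV
    # (a higher yield means a cheaper stock).
    cheap = stocks[stocks["ebit_ev"] >= stocks["ebit_ev"].quantile(0.90)]

    # Step 3: rank the survivors on quality and buy the top half.
    return cheap.nlargest(len(cheap) // 2, "quality")
```

Note how the final portfolio size falls out of the initial universe size, which is exactly the dynamic behavior Wes describes.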

The Checklist Approach To Quantitative Value Investing


Miguel: You present your quantitative model as a mechanized checklist. The checklist contains factors for screening stocks (i.e., measures for cheapness, risk, quality, etc.). Tell us about the checklist.

Toby: We built our model around a checklist because that's how most value investors operate. So we start with cheapness measures. Next we look for measures that identify risky stocks, that is, stocks where you can never be confident that they have any intrinsic value; there might be earnings manipulation or fraud. We also wanted to consider quality, and by quality we mean not return on equity but other quality measures like margin strength and margin consistency.

Miguel: Tell me more about the individual measures included in the checklist.

Toby: Many of these measures already exist in academia. In some instances we improve upon them. For example, we rearrange the Piotroski F-Score and change several of the variables.

Wes: One of the most important findings in developing the screening algorithm is the step in our checklist that relates to avoiding permanent loss of capital. Most value investors complete this step at the end. We do it at the beginning, because there are a lot of models in academic research shown to predict, with statistical significance, future frauds, earnings manipulations, and potential bankruptcies.

Miguel: Tell me about some of these financial risk measures.

Wes: A lot of people know about a few of the manipulation measures we use, such as accruals. Broadly speaking, firms that use a lot of accruals are prone to earnings manipulation; most folks know that. So let's talk about some that your readers will be less familiar with. One tool for detecting manipulation is a statistical model called the PROBM model. This model is based on research that uses a sample of firms that engaged in past manipulation to identify a set of predictive variables that can identify manipulators out of sample. The model mines the relationship between a set of firms with a history of manipulation and these predictive variables, and then calculates the probabilities of future manipulation. There's a second model that can be used to predict financial distress or bankruptcy. It comes from a paper titled "In Search of Distress Risk." This is different from measures that people typically use, like the Altman Z. Before elaborating on this model, I want to take a second to talk about the Altman Z and why it doesn't work anymore. The Altman Z became popular because Ed Altman published a paper in 1968 and became a consultant to everyone. Well, it turns out that customers buying the consulting work haven't been paying attention to the last 40 years of research, because about ten years after Altman wrote that paper, another paper came out and said Altman's paper is not correct because he had look-ahead bias in his data. Fast-forward 34 years, and people say Altman's stuff doesn't even work out of sample. We like to use the latest in financial technology, so we ditched the Altman Z approach and focused on this improved financial distress model. The framework for the model is exactly the same as for the PROBM model. You start by identifying firms that actually went bankrupt and/or had some sort of serious financial distress problem. Then you find variables that are hypothesized to matter for predicting distress: recent high variability in stock prices, recent underperformance in the stock price, high debt relative to assets, no income, and so on. Then the authors fit the model to the sample to mine the relationship between actual bankrupt firms and these hypothesized factors. When you apply this model out of sample, it works; it increases prediction ability by about 15 to 16 percent. It's not perfect, and there are certainly a lot of instances where you're going to get false positives, but in the context of value investing, especially for long-only investors, the key is to eliminate the left tail.
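The pattern Wes describes, fit a model on firms that actually failed and then score today's universe out of sample, can be sketched in a few lines. This is not the authors' fitted specification (the published PROBM and "In Search of Distress Risk" models use different, precisely defined variables); the feature and column names below are illustrative assumptions:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical predictors echoing the variables Wes lists: price
# variability, recent underperformance, leverage, profitability.
FEATURES = ["return_volatility", "trailing_return",
            "debt_to_assets", "net_income_to_assets"]

def fit_distress_model(history: pd.DataFrame) -> LogisticRegression:
    """Fit on a labeled historical sample where the assumed column
    'distressed' marks firms that actually went bankrupt/failed."""
    model = LogisticRegression()
    model.fit(history[FEATURES], history["distressed"])
    return model

def drop_left_tail(model: LogisticRegression,
                   today: pd.DataFrame,
                   keep_below: float = 0.90) -> pd.DataFrame:
    """Score the current universe out of sample and drop the firms
    whose predicted distress probability sits in the top decile."""
    probs = model.predict_proba(today[FEATURES])[:, 1]
    return today[probs <= np.quantile(probs, keep_below)]
```

The design point is the one Wes makes: the goal is not pinpoint prediction but cutting off the left tail before any valuation work happens.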

Miguel: What other measures do you find to be extremely important?

Toby: The valuation measures are the price ratios. We studied a number of price ratios: free cash flow to enterprise value, the enterprise multiple (that is, EBITDA to enterprise value), and the EBIT version. Then we looked at their performance over a period and found that the EBIT to enterprise value ratio performed the best. That finding is something I thought was likely before we conducted the testing. But what I found really surprising was what happened when we tested permutations of those price ratios, such as multi-year averages (multi-year averages of P/E, EV/EBITDA, and so on). Those multi-year averages didn't perform as well as the single-year slice of EBIT to total enterprise value, which is interesting. The simple version gives better performance. The other thing that we did was construct averages of all of those price ratios to produce composites. I've seen various research reports where composites have performed very well, and those were my expectations. When we did the testing, it just didn't work out that way. I think we examined eight years of data, and it doesn't improve returns. We also looked at combinations of price ratios: free cash flow to enterprise value, EBIT to total enterprise value, price to book, price to earnings. I've seen other places where those combinations have worked and improved returns, but we didn't find that. We found that they still underperformed the best-performing price ratio. As unlikely as it sounds, the EBIT to total enterprise value ratio is the best measure for value, beating out multi-year averages and composites.

Miguel: Wow.

Toby: It's completely unexpected.

Wes: It's all about robustness. It's not that you can't find certain time periods where one ratio works better than another. But the real trick in performing simulations is looking at performance over a whole cycle. On average, the more complicated the formula, the more the performance lagged.

Miguel: Let's keep talking about checklist measures. Just to reiterate, your model starts by eliminating high-risk companies (i.e., protecting the downside) out of a universe of cheap stocks. So what are the next factors that the model analyzes?

Toby: Once we've ranked the cheapest companies by decile, we look for high-quality stocks. We do it by examining two categories of factors: operational factors and financial strength factors. I'll take a second to talk about the financial strength factors, and Wes will talk about the operational factors. The financial strength score is similar to the Piotroski F-Score. In fact, it's almost exactly the same, but we change Piotroski's equity issuance gate to a net equity issuance test and rearrange the sum of the other factors into something a value investor would recognize. We call this the financial strength score. The higher the financial strength score, the higher the quality of the company.

Wes: Let's talk about the operational factors that we use to find quality companies. Warren Buffett talks about a durable competitive advantage as a proxy for quality and calls it an economic moat or franchise. It's an indicator that a firm can earn returns on capital that are higher than what it should earn. Our goal is to find factors that signal the presence of a durable competitive advantage. To do this, we look at eight years of returns on capital, returns on assets, and margins. Let me give you an example of why it's important to take a long-term view. Say you have a firm that had a 100% return on capital last year. That's not necessarily a quality firm, because maybe the prior year it had a -50% return on capital. Maybe it's an oil producer, or maybe the business is just responding to a commodities cycle. However, if you could identify firms that had, say, 15-20% returns on capital every year for the past eight years, that would be an indication that the firm might be special. So we have fine-tuned measures to detect the historical presence of an economic franchise.
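A minimal sketch of the eight-year consistency screen Wes describes, assuming a simple firm-by-year table of return on capital (the book's actual measures also blend returns on assets and margins, and the 15% floor here comes from his example):

```python
import pandas as pd

def franchise_signal(roc_history: pd.DataFrame,
                     floor: float = 0.15) -> pd.Series:
    """roc_history: one row per firm, one column per fiscal year,
    values = return on capital (an assumed layout)."""
    # Keep the most recent eight years of returns on capital.
    last_eight = roc_history.iloc[:, -8:]
    # True only if the firm cleared the floor in every single year,
    # which filters out one-off spikes from commodity cycles.
    return (last_eight >= floor).all(axis=1)
```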

Miguel: So, in conclusion, what have you found about most of the factors and measures you have tested and included in your checklist model?

Toby: We found that using simple measures results in good returns, but adding well-known quality-type dimensions doesn't provide much of a boost. At best you receive a tiny marginal return on top of the return provided by the simple measures.

Wes: That's right. One of our board members is an accounting professor at Columbia. They have a new method for capitalizing R&D, which results in more accurate calculations of book to market. We researched this more complicated and nuanced method, and it simply doesn't add much marginal value. Most variable fine-tuning beyond the basics doesn't work. Complication and sophistication don't equate to better results, merely overconfidence in the model. The key insight is to capture the movements of cheap stocks by using basic metrics. If you keep things simple, you'll capture 95% of the result and filter out the noise. In the end, a simple checklist of simple variables adds the most value.

Cheapness is Everything
Toby: It also goes back to the fact that the main driver of returns is value.

Wes: Cheapness is everything. That's the biggest takeaway anybody can take from any of the research we have performed. Quality matters at the margin, but the minute you move out of cheap is the minute you shoot yourself in the foot.


Toby: We've done a whole heap of backtesting to essentially prove what Benjamin Graham said 50-60 years ago. This reminds me of a 1976 article where Graham concludes that all that effort in security analysis might not add a great deal, because you can outperform by using cheapness and quality measures. His measure of cheapness was a P/E of less than 10, and his measure of quality was a debt-to-asset ratio of less than 50%. He said this based on 25 years of data that he had collected. We researched the same factors and found the same results, except that we had much more data.

Wes: Yeah. It's incredibly volatile, but if you want a 15% return, this is the way you do it. But you'd better be ready for a wild ride.

Obstacles
Miguel: What are the biggest obstacles in applying your model?

Wes: I think the biggest obstacle is tracking error. With any value strategy you're going to get a lot of tracking error; that's just the nature of the beast. If you want to be tightly bound to an index, these strategies simply aren't going to do it for you. They've got huge swings, and you're probably going to lose your job if you're marked to market on a monthly basis. So I think that's really the biggest challenge for an institutional money manager.

Miguel: Toby, what's your take on obstacles to quantitative value investing?

Toby: I think the biggest obstacles are behavioral. It's extremely difficult to buy some of the stocks on the list. Joel Greenblatt describes a situation where he compared accounts using the magic formula. There were two types of accounts: self-directed and automatic. The automatic accounts bought everything in the screen, and the self-directed ones systematically avoided the biggest winners. This happened because those stocks don't look like they're going to generate a good return, which is exactly what makes them such deep value in the first place. An example would be buying BlackBerry. I had conversations with people in June of 2012, and at the time they said, "There are a variety of reasons why you shouldn't be buying the stock. It's a technology firm, and it's losing market share to Apple and Samsung." But Wes bought it anyway.

Wes: Yeah, that's right, we bought RIMM in June of 2012. I really didn't want to do it, but I follow models. Jim Simons says, "If you're going to use models, slavishly follow your model." Another classic example is GameStop. A very well-known short seller, Jim Chanos, touted GameStop as a short. He is so well known that if he says something bad about your stock, you know everybody hates it, even value investors. But GameStop showed up in the screen in June 2012 and has doubled since then. So I'm of the opinion that this quant value system is contrarian not only to what the market says, but also to most value investors. It picks the safest trash that nobody would ever want to touch, even value investors!

Protecting Capital
Miguel: If I wanted to apply this strategy, one of the things I'd be worried about is protecting my capital during market-wide downturns. Tell us about how your approach protects capital in a variety of market climates.

Wes: The question of how well our model protects capital is an empirical one. If you compare risk measures such as maximum drawdowns, our model outperforms standard mechanical value strategies and value managers. Even in the most turbulent times, a long-only fund using our strategy had a 37% drawdown during the financial crisis, versus the S&P 500, which endured a 50.21% maximum drawdown. The real question is why we outperform. The answer is that downside protection is built into the model, starting with the first step of avoiding permanent loss of capital (i.e., frauds and the like). The model cuts off the left tail of potential -100% investments. Even if you have a diversified portfolio of, say, 30-40 names, a negative 100 percent position hurts your returns. So that is one reason we outperform from a risk-management perspective. Next, the model protects the downside by focusing on cheapness. We already know from Ben Graham that cheapness provides a margin of safety, which prevents permanent losses of capital. But we go even further. After looking for a margin of safety, we look at quality measures, which correlate with economic moats. Now, I have to clarify that having a moat isn't a surefire way to protect capital, because a business can have an economic moat but be unable to make a debt payment, in which case it's going to go bankrupt. But we do look at economic moats, because cheap firms with moats can compete effectively (and outperform) during downturns. Finally, after taking all these steps, we incorporate what I call a preflight checklist, which is literally a ten-point checklist akin to a Piotroski F-Score but with slight improvements. This checklist looks at factors like:


1. Debt levels
2. Working capital
3. Operations improving
4. Capital allocation / net stock issuance
5. Profitability

So when markets tank, our process minimizes capital loss. This is what we think drives the empirical evidence for our outperformance.
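As a rough illustration of how such a preflight tally can be mechanized, here is a toy five-check version built from the factors listed above. The field names and pass/fail definitions are illustrative stand-ins, not the book's actual ten tests:

```python
def preflight_score(firm: dict) -> int:
    """One point per passed check; a higher score means a stronger
    firm. Keys are hypothetical, not the authors' definitions."""
    checks = [
        firm["debt_to_assets"] < firm["prior_debt_to_assets"],  # 1. debt levels falling
        firm["current_ratio"] > firm["prior_current_ratio"],    # 2. working capital improving
        firm["roa"] > firm["prior_roa"],                        # 3. operations improving
        firm["net_shares_issued"] <= 0,                         # 4. no net stock issuance
        firm["cash_flow_from_ops"] > 0,                         # 5. profitable on a cash basis
    ]
    return sum(checks)  # booleans sum to an integer score
```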

Criticisms & Misconceptions


Miguel: What are the biggest criticisms and misconceptions of your quantitative value model?

Wes: The biggest criticism is that people don't believe simple models can beat experts. It's very difficult for an expert who's invested 30 years of their life to hear people say that a model can beat their instincts. So the biggest criticism we get is that it just can't be true and won't work: "There's no way a model can beat me, because I've put all this time and effort into learning about finance." My only defense against these critiques is the data. It's an empirical argument, so I point naysayers to the data. What makes our system work is the presence of people who think they can rely on gut instinct to outperform. Some will outperform, but of course most won't. So I would say we have a different philosophical view, and our view is based on data.

Toby: Many people argue that the people using models aren't experts. But we find that even when experts apply models, we get similar results: the models win. Another criticism I get is from value investors. They tell me that it's the undiscovered footnote in the financial statements that makes the difference between something being a buy, a sell, or a pass. They claim that we're not going to be able to find these issues because we're not performing a granular analysis of the footnotes. We addressed this earlier with the horserace handicapper experiments, when we mentioned that there is research showing that collecting more information when making a prediction doesn't increase the accuracy of the prediction; it only increases confidence in the accuracy of the prediction. As investors get more and more granular information (i.e., footnotes), they run the risk of becoming wedded to a position and filtering everything through the same lens. In this case, even though they think more information will increase their returns, it only increases their confidence in their returns.

Wes: I also want to stress that Toby and I have not been quants our whole lives. I was raised on the religion of value investing and picking individual stocks. It took me 15 years to get over the hurdle of using models instead of picking stocks. It's also not the case that either of us is a quant geek with zero market experience. People often say, "All these PhDs, they just don't understand." Well, in our case that's wrong, because we do understand. We've been there. I've managed a fund as a stock picker. It was terribly stressful and made my hair turn gray. I think we both came to the philosophy of quantitative value after being stock pickers. We both tried many different value approaches, but eventually both asked, "Am I really that smart? Am I smart enough to outsmart myself?" I came to the realization that I'm just not that smart.

Miguel: So what other criticisms are you receiving?

Toby: One criticism is that there's some disappointment that the price ratio we ended up using was EBIT to total enterprise value. But we tested long-term averages and we tested composites in the book, and we didn't find anything that beat EBIT to total enterprise value, which I think is surprising, because Greenblatt uses the same ratio and found that it massively outperforms the market.


Part 3: Applying The Quantitative Value Model


Miguel: How can the average intelligent investor apply your quantitative model?

Wes: There are multiple ways to go about it. We've built some free tools, and you can see them at TurnkeyAnalyst.com. On that site we run the models each month and provide names for investors to investigate. If you're a do-it-yourself type, go to Yahoo! Finance. Run screens using EBIT to enterprise value, and then screen for some of our measures from the checklist. For example, after screening on EV/EBIT, use gross profits to total assets. If you do this, you're going to capture 90-95 percent of the benefits of being systematic. Maybe you won't get the extra 100-200 bps, but you'll get the majority of our results. That said, you have to remember that with systematic investing you have to follow the models, otherwise the strategy won't work. So, if the model told you to buy RIMM, would you actually do it? If you're the type of person who's going to tinker with the model, picking the stocks you like and throwing out the ones you don't, you're running a serious risk.
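For readers who export screener output into a spreadsheet or DataFrame, the two-step screen Wes outlines might look like the following sketch. The column names are assumptions (Yahoo! Finance does not expose this as an API), and the decile and portfolio-size choices are illustrative:

```python
import pandas as pd

def diy_screen(stocks: pd.DataFrame, n: int = 30) -> pd.DataFrame:
    """Cheapness first (EBIT/EV), then quality (gross profits to
    total assets) within the cheap bucket."""
    stocks = stocks.assign(
        ebit_ev=stocks["ebit"] / stocks["enterprise_value"],
        gpa=stocks["gross_profit"] / stocks["total_assets"],
    )
    # Cheapest decile on EBIT/EV (a higher yield = a cheaper stock),
    # but never fewer names than the target portfolio size.
    cheap = stocks.nlargest(max(len(stocks) // 10, n), "ebit_ev")
    # Then the highest-quality names within the cheap bucket.
    return cheap.nlargest(n, "gpa")
```

The ordering mirrors the point made throughout the interview: get to cheap first, then sort on quality within cheap.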

Rebalancing, Holding Cash, & Managing Taxes


Miguel: Let's say I'm a do-it-yourself investor interested in applying this strategy. How often do you rebalance your portfolio, and what do you think about holding cash?

Wes: Effectively, we have tax-managed versions of the model and non-tax-managed versions. There is better performance with more frequent rebalancing, but it comes with a huge tax burden. For the tax-managed version, we rebalance annually and perform tax-loss harvesting at the end of the year. As for which month to rebalance: if you don't care about taxes, then each month you add the latest names that are highest ranked and sell your lowest-ranked holdings. During this time you would also rebalance to equal weight per position for risk management. Cash management can be tricky. A typical example involves mergers and acquisitions: a lot of the time you have portfolio companies being bought out, because you're buying a bunch of cheap companies that private equity funds eventually take out. You need a system to deal with these situations. This is how we handle cash: if at the end of the month there's over 1% cash, we rerun the model and add a new name. Otherwise, if cash is below one percent, we won't add anything. This will result in a slight drag, but the tradeoff between performance and the costs of trading and taxes offsets it. That's just our internal rule. People could do it differently, but the execution is tricky.
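The month-end cash rule reduces to a one-line check. The 1% threshold and the follow-up action are exactly as Wes states them; everything else below is illustrative:

```python
def should_add_name(cash: float, nav: float) -> bool:
    """True if month-end cash exceeds 1% of portfolio value, in which
    case the rule says to rerun the model and buy the next-ranked
    name not already held."""
    return cash / nav > 0.01

# Example: a $1M portfolio left holding $25k cash after a buyout
# triggers a re-run of the ranking.
assert should_add_name(25_000, 1_000_000) is True
```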

International Investing
Miguel: One quick follow-up. How does the strategy work with international investments?

Wes: We don't do it yet. We're currently developing the systematic models to invest internationally. The problem with international companies is data. Our model requires a massive amount of data, with a history of at least eight years. In many international markets the data is just not available (except for very large companies). In those cases you have to simplify the model and screen for a few factors. Toby can tell you more about his approach, because he focuses globally.

Toby: That's exactly what I do. I use a simplified version of the model that doesn't require data that doesn't exist. It allows me to buy stocks in developed markets around the world. Over the last year, the best place to invest has been the U.S.; to the extent that you weren't exposed to the U.S., you slightly underperformed. I've been rebalancing toward other countries as they've become cheaper and away from the U.S. I just have too short a period of data to give you a definitive answer. I think it needs a full cycle to be comparable to the research we have done in the U.S.


Part 4: Closing Thoughts


Miguel: What would be the best way for us to follow your work?

Toby: Well, you can see what I write on Greenbackd.com. I post research, both my own and academic research from others. I also run a Registered Investment Advisor called Eyquem.

Miguel: Wes, what about you?

Wes: We have two venues. The first is TurnkeyAnalyst, a website designed to democratize quant strategies and release research for retail investors. I also run an SEC Registered Investment Advisor called Empiritrage LLC. At Empiritrage we provide research that's tailored to institutions and full-time professionals.

Miguel: Thank you guys for taking the time to answer our questions.

Toby: Thank you very much, Miguel.

Wes: Thank you, Miguel.


Appendix: Toby & Wes on Joel Greenblatt's Magic Formula


Miguel: The value community is obviously very keen on the magic formula. What have you guys learned about the magic formula?

Wes: Let's start with some background. Ben Graham's quantitative model was designed around buying only cheap stuff. Buffett's approach is to look for value across the spectrum, where value is not just what you pay but also what you get. Buffett's strategy was very successful and also seems to be a very sexy idea to quantify and automate. The magic formula is a quantitative Warren Buffett strategy; it quantifies the Buffett criteria. Say you have a universe of 1,000 firms. The magic formula ranks them on cheapness (using EBIT to total enterprise value) and on quality (using EBIT over capital) from one to 1,000. So you have two columns: a cheapness column (ordered from cheapest to most expensive) and a quality column (ordered from highest quality to lowest). Next, the magic formula takes the average of those two ranks and re-ranks all the firms. The idea is that you now have a quantitative Buffett system: it's not focused just on cheap stuff, it's focused on all value stuff. This means that in the magic formula, a firm of high quality, even if it's expensive on a price-ratio basis, might still rank above a really cheap, low-quality firm. So the magic formula is attractive because it quantifies Buffett's philosophy, and anyone who looks at Buffett's track record says, "Wow, that's something we should probably think about." Fundamentally, the question is an empirical one, because Buffett is an anecdote, not a robust study of the data. We like to do robust studies, because there's always going to be some sort of outlier. When we performed the tests, what we found is that it's essential for a framework to start with cheapness and then focus on quality. The order in which you look for things matters. The minute you step away from the bargain bin is the minute that, from an on-average perspective, you are shooting yourself in the foot. We talk a lot about this in the book and demonstrate empirically that the magic formula underperforms our quantitative value system, because it sometimes buys firms that on a straight price basis are expensive. Even though it may feel like a good deal to pay 20 times earnings for Coke, on average that's actually a bad bet. It's critical that your framework gets to cheap first and then looks at quality within the cheapest stuff. That's going to maximize the probability of capturing high risk-adjusted returns. So the summary is that, from a quantitative perspective, Warren Buffett was wrong and his teacher, Ben Graham, was right.
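Since Wes fully specifies the ranking procedure, it can be sketched in a few lines of pandas. The column names are assumptions, not Greenblatt's or the authors' actual data fields:

```python
import pandas as pd

def magic_formula_rank(stocks: pd.DataFrame) -> pd.Series:
    """Greenblatt-style combined rank as described above: rank every
    firm on cheapness and on quality, then re-rank on the average of
    the two ranks (1 = best combined score)."""
    # Higher EBIT/EV yield = cheaper, so the cheapest firm gets rank 1.
    cheap_rank = stocks["ebit_ev"].rank(ascending=False)
    # Higher EBIT/capital = higher quality, so the best gets rank 1.
    quality_rank = stocks["ebit_on_capital"].rank(ascending=False)
    return ((cheap_rank + quality_rank) / 2).rank()
```

The averaging step is precisely what lets an expensive high-quality firm outrank a cheap low-quality one, which is the behavior the authors' cheapness-first funnel is designed to avoid.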
