In this episode of The Acquirer’s Podcast, Tobias chats with Partha Mohanram, Professor of Accounting and the John H. Watson Chair in Value Investing at the University of Toronto. During the interview Partha provided some great insights into:
- Picking Winners From Losers Using The G-Score
- The TJ Maxx School Of Investing
- What Is The Beta Of Twitter?
- Combining The Search For Quality And Value
- The Efficacy Of Fundamental Analysis Has Declined Over The Last Decade
- The Accrual Anomaly
- Share-Based Compensation Leads To Lower Returns
- Removing Predictable Analyst Forecast Errors To Improve Implied Cost Of Equity Estimates
- Pro-Forma Earnings – (EBBS) Earnings Before Bad Stuff
You can find out more about Tobias’ podcast here – The Acquirers Podcast. You can also listen to the podcast on your favorite podcast platform.
Full Transcript
Tobias: Hi, I’m Tobias Carlisle. This is The Acquirers Podcast. My special guest today is Partha Mohanram. He is the value investing chair at the University of Toronto. I’ll get him to give you the full title when we come back. He’s got some extraordinarily interesting research on fundamental analysis, share-based compensation, and the discount rate. I’ll be talking to him right after this.
[intro]

Tobias: Hi Partha, how are you?
Partha: I’m doing great, Toby. Thanks a lot for having me on your podcast. I just sort of stumbled on your podcast a month ago. This whole thing started off with me just making an offhand YouTube comment, and well, I’m glad you replied and I’m glad to be here.
Picking Winners From Losers Using The G-Score
Tobias: Well, your name was brought to my attention by the Practical Quant, Jack Forehand– he’s a partner at Validea– in a podcast that we did. And he pointed out that the best-performing strategy they track, I think last year or the year before, was your G-score– they track a variety of strategies. And I thought it was absolutely fascinating. And I think I said it was funny. I didn’t mean to offend you, but I thought it was funny in the sense that it’s explicitly looking at the most expensive stocks.
Partha: Exactly. So, if I can just sort of give you a little bit of history on that. By the way, I think I have met this gentleman, or maybe one of his partners, in like the mid-2000s, because if I’m not mistaken, they are based in Connecticut somewhere north of New York City.
Tobias: That’s right.
Partha: And they came down to Columbia to talk with me. The thing about Validea is they don’t really have formal relationships with these professors who publish papers. They just take it and interpret it as they want to. So, for example, my G-score paper is actually a long-short idea, but they are just focusing on the long side, and the idea has done phenomenally. To be fair, the reason why the idea’s done really well is not just because it’s a great idea. It is, but it’s also because I think, in general, value has not done well while growth has done well in the last decade. That’s kind of helped, so that rising tide has lifted this boat as well.
So, the basic idea of that paper is, everybody knows Piotroski, and not just academics. I didn’t start off as a valuation guy. I got my PhD from Harvard. My thesis was in the area of disclosure, more about how firms communicate and how they improve their information environment– that was my area of research. But I ended up teaching FSA, financial statement analysis, and ratio analysis. And I got interested in valuation from a very practical perspective, and I came across Piotroski’s paper. And I just loved that paper. By the way, I know Joe really well, so I’m not just saying this because I know him. I liked that paper because it was a very simple practical idea of applying stuff that people do.
And he basically said, “Let’s test fundamental analysis. But let’s test it in a setting where we know it should work.” These are value stocks. Nobody looks at them. So, it’s quite likely that there’s information in the public domain, in financial statements, that people just haven’t bothered to look at and therefore it’s not been impounded into prices. If you just look at the F-score, it’s basically DuPont Analysis. It’s like, are you profitable? Is your profitability growing? Is your asset turnover improving? Is your profit margin improving? And then some things pertinent to value stocks: is your liquidity getting better? Is your solvency getting better? Are you not doing stuff like issuing equity, which is a sign of weakness? Those are all the signals. And then, he shows that the stuff really works. It’s a simple test of fundamental analysis in a setting where you think it ought to work.
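To make that construction concrete, here is a minimal sketch of an F-score-style composite, assuming a pandas DataFrame with hypothetical column names (roa, cfo_to_assets, gross_margin, and so on). The exact nine signals are defined in Piotroski’s paper, so treat this as an approximation of the idea rather than the published recipe.

```python
import pandas as pd

def f_score_like(df: pd.DataFrame) -> pd.Series:
    """Rough F-score-style composite: the sum of nine binary signals.

    Assumes one row per firm with current- and prior-year ratios under
    hypothetical column names; the exact definitions are in Piotroski (2000).
    """
    s = pd.DataFrame(index=df.index)
    # Profitability
    s["roa_pos"] = (df["roa"] > 0).astype(int)
    s["cfo_pos"] = (df["cfo_to_assets"] > 0).astype(int)
    s["roa_up"] = (df["roa"] > df["roa_prev"]).astype(int)
    s["accrual"] = (df["cfo_to_assets"] > df["roa"]).astype(int)  # cash flow backs up earnings
    # Operating efficiency: DuPont components improving
    s["margin_up"] = (df["gross_margin"] > df["gross_margin_prev"]).astype(int)
    s["turnover_up"] = (df["asset_turnover"] > df["asset_turnover_prev"]).astype(int)
    # Liquidity, leverage, and financing
    s["liquidity_up"] = (df["current_ratio"] > df["current_ratio_prev"]).astype(int)
    s["leverage_down"] = (df["leverage"] < df["leverage_prev"]).astype(int)
    s["no_issuance"] = (df["shares_out"] <= df["shares_out_prev"]).astype(int)
    return s.sum(axis=1)
```

Within the value (high book to market) universe, the high scorers become the longs and the low scorers the shorts.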
So, when I saw his paper, I had this thought experiment. I said, “This is awesome. But what if we think about the opposite quadrant? What if we look at a setting where we think fundamental analysis should not work?” If it works in that setting, it basically shows that there is value in fundamental analysis. Showing it works in a setting where it ought to work is great, but that’s like setting a low bar. So, I was trying to set a really high bar and see if it could work there.
So, the other thing I also noticed was that the F-score doesn’t work that well in growth stocks. So, I said, “Let’s try to see if I can tailor fundamental analysis for the purposes of growth stocks.” Obviously, this notion that these firms are ignored and nobody’s looking at them can no longer be true, because these are firms that are in the public domain and people are looking at them. They have a fair amount of analyst following, following in the business press, institutional investors, and so on.
Tobias: And high prices.
Partha: And high prices too. Probably, those things are related. But just because everybody is looking doesn’t mean everybody is looking the right way. So, maybe it’s a case of fools rush in– everybody has fallen for the hype. So, can we still apply the basics of fundamental analysis to separate the solid growth firms from the hype firms? If you think about how Piotroski, or how most people, used to sort firms into value and growth, the most common ratio is the market to book or book to market ratio. Let’s use the term book to market.
Piotroski uses high book to market firms– firms which have low market values relative to their book values– and calls them value. So, I look at low book to market firms. And I said, within the low book to market group, let’s try to separate the guys who are there because there are accounting reasons why their book value is low from the ones who are there because their market value is high, i.e., overvalued. The ones who deserve to be there for accounting reasons– accounting depresses book values in certain cases, like when you have lots of R&D and lots of advertising. These are things which create assets, but you’re forced to expense them, and therefore these assets don’t show up on your balance sheet or in your book value.
Many of my signals were tailored for some of these accounting issues. And the second set of signals I introduced, which are unique to my paper, was based on this notion of naive extrapolation. When firms do well, people assume that it’s going to stay that way forever. So if you have two firms, both of which have strong current performance, but one of them has had steady performance in the past and the other one’s performance has been variable, the odds are the firm with variable performance just got lucky and had a strong realization right now, and you’re going to see some reversal in the future. So, I also built some signals based on how stable your profitability and your growth have been, because with these firms– if you look at a ratio like the PEG ratio, you talk about earnings and you talk about growth– you want both the earnings and the growth to have quality. So, I was trying to build signals on that.
So, I came up with this index called G-score. And I basically showed that the G-score strategy, just like Piotroski’s F-score strategy, works pretty well if you backtest it. The paper was written in 2005. I think the analysis goes with data up to 2002 or something. But the one difference between Piotroski and my paper is, obviously, Piotroski is looking at value stocks, which on average outperform the market. If you break his– let’s say the average value stock beats the market by 5%, he breaks up that 5% into a 15% or 20% and a negative 5%. And he gets a long-short on that. But the short is not that crucial. You’re getting a lot of action from the longs.
On my side, we know that, at least at that point in time, the average growth stock underperforms the market by 5%. I’m breaking up that minus 5% into a plus 5% and a minus 15%. So most of the action’s going to come from the short side. To get the maximum bang for buck– at least, that was the idea then– you need to short. Now, as things have gone on, we know that this decade– the first one and a half decades of the new millennium– has been very different: growth stocks have actually done very well. And that’s probably helped the performance of something like G-score, as the folks at Validea have shown. It’s done really well on a long-only approach. But even there, if you had gone long on just growth stocks, based on the book to market ratio, you wouldn’t have done as well as if you had gone long on the book to market ratio conditioned on G-score, which says, “You know what? Let’s focus on growth stocks which deserve their valuations.” If you will just indulge me, I love using these corny analogies.
What Piotroski does is find diamonds in the rough. These firms have rough valuations; he finds the diamonds among them. What I do is separate out the real diamonds. These firms all have diamond valuations; I tell you these are the real diamonds and the rest of them are cubic zirconia. That’s my strategy. The real diamonds are the ones who deserve diamond valuations. The rest of the firms have diamond valuations, but they are cubic zirconias. That’s the analogy I use to talk about the difference between the F-score and the G-score approach.
Tobias: It’s a fascinating line of inquiry, because it’s reasonably well known that the reason people invest in the glamour end of the market is that these stocks tend to have lottery ticket properties– all of the very best companies over time never really get cheap enough to fall into the value bucket. They tend to stay in that glamour bucket, and you can think of examples like Walmart or Microsoft– many of those sorts of companies never get cheap on fundamentals. And so that’s why folks behaviorally tend to traffic in the glamour stocks, even though they know that, as a cohort, they underperform. If you can separate out the diamonds from the cubic zirconia, as you call it, from that group– that really is the Holy Grail of that end of the market. What does your G-score do? How does it differ from the F-score?
Partha: There are two fundamental differences. The F-score does a deep dive into ratio analysis, because many firms in the value bucket have been around for a long time, and they’re more likely to be in sectors like manufacturing and those kinds of things. So, your conventional DuPont-based ratio analysis actually works very well. And also, he uses a time series approach. He’s comparing the firm to itself because he’s trying to look for signs of recovery. This is a firm which has a depressed valuation, but maybe it’s different from the rest of the firms because it’s actually showing positive momentum. So it’s a time series approach and a full DuPont.
My approach– I don’t use a full DuPont Analysis because, at least when I was looking at the paper 15 or 16 years ago when I was working on it (I haven’t done a deep dive into it recently, so I don’t know if it’s true right now), many of these firms are very often in the early stages of their life. Not everything’s a Walmart. If you look at many of them, they’re firms which have gone IPO in the last five years. So you have much younger firms, and the operations by nature are extremely unstable. So, year by year comparisons are actually deceptive. You could have a firm whose losses are worsening, but it’s actually doing well, because it’s trying to build market share or something like that. Doing time series comparisons is a little fraught. So what I did was within-industry cross-sectional comparisons. That is, among the low book to market firms in this industry– I think I used the SIC code or something– which firms are doing better on the signal and which firms are doing worse on that signal. That’s the first thing I did.
Second thing is, because of the nascent nature of the operations, I didn’t do a deep dive into ratios. I just looked at, “Are you profitable? How are you doing on earnings? How are you doing on cash flow?” This is just very, very basic profitability. And then, I had these signals related to naive extrapolation. How stable is your earnings growth? Or how stable is your earnings, your ROA, or something. If you’re high on stability, you get a 1. I think I considered making it continuous, but I don’t really remember the exact details– no, if you’re above the industry median, you get a 1; if you’re below the industry median, you get a 0, or something like that is how I did it. So actually, the G-score is a little more computationally intensive. The F-score is really easy to do; the G-score is not difficult, but it’s a little more computationally intensive.
And the last thing is, I introduced these accounting-based signals. Are you investing in R&D? Are you investing in advertising? Are you investing in CAPEX? For two reasons. Number one, in the case of advertising and R&D, it’s clear there’s an accounting reason why it depresses your book to market ratio, so among the low book to market firms, those are the ones you want. In addition, for something like CAPEX, these are firms which are being valued for growth. You want firms which are investing in growth. So, things like R&D, advertising, and CAPEX mean you’re a firm which is doing stuff to ensure that the future is going to be bigger and brighter and more profitable than the present. Even if your present is depressed, you know that this is a harbinger of good things to come. That’s the justification for those three kinds of signals.
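Here is a comparable sketch of a G-score-style composite, meant to be run within the low book to market (growth) subset: each signal scores 1 when a firm beats its industry peers. The column names and precise signal definitions are assumptions that only approximate the eight signals in the published paper.

```python
import pandas as pd

def g_score_like(df: pd.DataFrame) -> pd.Series:
    """Illustrative G-score-style composite for low book-to-market (growth) firms.

    Each signal is 1 if the firm beats its industry peers, 0 otherwise.
    Column names (roa, cfo_to_assets, roa_vol, ...) are assumed, and the eight
    signals only approximate the construction in the published paper.
    """
    cols = ["roa", "cfo_to_assets", "roa_vol", "growth_vol",
            "rnd_to_assets", "capex_to_assets", "adv_to_assets"]
    med = df.groupby("industry")[cols].transform("median")  # per-industry medians

    s = pd.DataFrame(index=df.index)
    # Basic profitability, earnings- and cash-flow-based
    s["roa"] = (df["roa"] > med["roa"]).astype(int)
    s["cfo"] = (df["cfo_to_assets"] > med["cfo_to_assets"]).astype(int)
    s["accrual"] = (df["cfo_to_assets"] > df["roa"]).astype(int)  # cash backs up earnings
    # Naive extrapolation: stable earnings and stable growth are rewarded
    s["earnings_stability"] = (df["roa_vol"] < med["roa_vol"]).astype(int)
    s["growth_stability"] = (df["growth_vol"] < med["growth_vol"]).astype(int)
    # Accounting conservatism / investing in the future: R&D, capex, advertising
    s["rnd"] = (df["rnd_to_assets"] > med["rnd_to_assets"]).astype(int)
    s["capex"] = (df["capex_to_assets"] > med["capex_to_assets"]).astype(int)
    s["adv"] = (df["adv_to_assets"] > med["adv_to_assets"]).astype(int)
    return s.sum(axis=1)
```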
I think Piotroski had nine signals. I believe the G-score has eight signals, but fundamentally the construction is very similar. Basically, in your backtesting, you want it to look like the skyline of Manhattan. You want to see a bunch of upward bars with very few negative bars. So, if you look at 20 years, you want the strategy to rarely have massive negative returns, because when you have massive negative returns and massive positive returns, it’s very difficult to say that this is mispricing. It’s probably just risk. You take on more risk, you’re gonna get more return.
So we try to rule out a risk-based explanation for the mispricing, not just by doing asset pricing tests, but by looking at the prevalence of losses in your strategy– and the losses hardly ever happen. That’s the first thing.
The second thing is, most of the returns are concentrated around future earnings announcements. If you look at the performance over the next year, almost 30% or 40% of the returns come around the three-day trading windows of the next four quarterly announcements, which means that consistently these firms are surprising positively– at least your longs are surprising positively around future earnings announcements, while your shorts are surprising negatively– which tells you that it’s not risk. It’s something to do with fundamentals, which the market has not impounded, but your strategy in a sense has.
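As a rough illustration of that diagnostic, here is a minimal sketch that measures what share of a stock’s return over the holding period falls inside three-day windows (one trading day either side) around its earnings announcements. The inputs and the window size are assumptions, not the paper’s exact event-study design.

```python
import numpy as np
import pandas as pd

def share_around_announcements(daily_ret: pd.Series,
                               announce_dates: pd.DatetimeIndex,
                               window: int = 1) -> float:
    """Share of the cumulative log return earned within +/- `window` trading
    days of each earnings announcement. `daily_ret` is a date-indexed series
    of daily returns; all inputs are illustrative assumptions."""
    log_ret = np.log1p(daily_ret)
    in_window = pd.Series(False, index=daily_ret.index)
    for date in announce_dates:
        pos = daily_ret.index.get_indexer([date], method="nearest")[0]
        in_window.iloc[max(pos - window, 0): pos + window + 1] = True
    return float(log_ret[in_window].sum() / log_ret.sum())
```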
Tobias: How are you assessing the stability of the earnings?
Partha: I think, if I’m not mistaken– so this is a bit of a challenge, because you need to have a time series to do that. If memory serves me right– and I wrote the paper 16 years ago, and unlike other people, I have no coauthor to blame, it’s just me– I looked at the standard deviation of quarterly earnings for eight quarters. For example, I would look at something like earnings divided by assets, and just calculate a simple standard deviation of that across eight quarters. And again, people might say, “How can you compare that across such different firms?” Remember, I’m doing the comparison across the industry. You certainly can compare a ratio like that, saying that among all firms in this particular SIC code, this firm had above-median variability, and therefore that’s a bad thing; this firm had below-median variability, and therefore it’s a good thing. That’s the way I code it.
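A sketch of that stability signal as described: the standard deviation of quarterly ROA over the trailing eight quarters, coded 1 or 0 against the industry median. The column names are assumptions.

```python
import pandas as pd

def earnings_stability_signal(q: pd.DataFrame) -> pd.Series:
    """1 if a firm's eight-quarter earnings variability is below its industry
    median, else 0. Expects quarterly data with columns
    ['firm', 'industry', 'quarter', 'earnings', 'assets'] (assumed names)."""
    q = q.sort_values(["firm", "quarter"]).copy()
    q["roa"] = q["earnings"] / q["assets"]
    # Standard deviation of quarterly ROA over the trailing eight quarters
    vol = (q.groupby("firm")["roa"]
             .rolling(8).std()
             .groupby(level="firm").last()   # latest available 8-quarter volatility
             .rename("roa_vol"))
    firms = q.groupby("firm").last()[["industry"]].join(vol)
    industry_median = firms.groupby("industry")["roa_vol"].transform("median")
    return (firms["roa_vol"] < industry_median).astype(int)
```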
Tobias: Are you applying the strategy in– is it being practically applied by anyone?
Partha: Well, I know for a fact– okay, the first thing is, people ask me, do you apply it yourself? The answer is, I don’t. I’ll tell you why I don’t: because I just don’t have the time to do this kind of stuff and, frankly, I don’t have the money to do this kind of stuff. Professors are well paid, but not that well paid. But more importantly, if you invest a lot in individual stocks, like something like this, you need to be monitoring this thing on a pretty active basis, and I just don’t have the time and the bandwidth to do that. So, I just buy index funds or whatever lines up with this, whenever I have to make decisions on my retirement accounts and so on and so forth.
I know for a fact that, in addition to– so somebody highlighted this Validea thing to me a few years ago, saying, “Hey, Partha, I found that you are listed as a guru among some really, really big names.” And I said, “Come on, you’re joking.” I found it really cool that these guys put me in a list along with the Buffetts and some really, really big names, and I was very kicked to see that. But the other thing is, when I look at many reports– and I come across some reports from buy side investors– they do mention this strategy from time to time. So, I do know that this stuff is being used, but the thing is, it’s in the public domain. People can use it and I don’t need to– it has nothing to do with me.
The other question people ask me is, “If your strategy is so good, why aren’t you running a fund? Why are you working as a professor?” My answer is different people get motivated by different things. I really like my job. I like teaching, I like doing my research, and I like highlighting these things. And I’m not that motivated by the actual financial aspect of it. But I do believe that this stuff actually works because when we think about fundamental analysis, to me, it’s this very strange alchemy or amalgam of market efficiency and market inefficiency.
It relies on market inefficiency because you say that firms do get out of whack, they move out of position, but it also relies on market efficiency because you assume that they’re going to come back to their real value. So, it’s a belief in long-run market efficiency, but short-run market inefficiency. In some cases, it’s not gonna work. There are many people who say that Amazon is incredibly overvalued, or Uber or Tesla are incredibly overvalued. But some valuations are likely to be stuck like that for whatever reason. Again, I have an analogy here. You cannot take things too literally.
Suppose you’re a chemist and you’ve studied chemistry– I’m gonna go back to diamonds. You have 10 grams of diamonds in your right hand. And somebody says, “I have 10 grams of coal right here. It weighs the same. It’s the identical chemical composition. It’s really inefficient that the market is valuing these 10 grams of diamonds at a million dollars, and these 10 grams of coal at 10 cents. So, I’m going to go long coal and I’m going to go short diamonds.” That’s not going to work because that inefficiency is baked in. And if you don’t agree with that, just try giving your significant other a coal ring instead of a diamond ring. It’s not gonna work.
Leaving out situations like that, fundamental analysis believes that there is something which causes stock price to deviate from value, but eventually they find that value. And if you can find that deviation systematically, earlier and better than other people, you can make some money on it. And I believe in that. Certainly, I’m not one of those University of Chicago guys– I’m a Harvard guy, by the way. You hear this joke about a Harvard MBA and a Chicago MBA who are walking on the street, and they find a $100 bill. And the Chicago guy says it’s not possible, because markets [unintelligible 00:20:08]. The Harvard guy says, “Okay, I’ll make the markets efficient,” and he picks up the $100 bill. So, that’s my approach basically.
***
Tobias: Let’s talk about your background a little bit. I didn’t give you the full title, but you’re the value chair at the University of Toronto, but what’s the full title there?
Partha: Okay. It’s called the John H. Watson Chair in Value Investing. John Watson is a large fund manager here in Canada. Universities have chaired professorships and they have endowments. This chair just naturally fit my research. They have a value investing program here. I’m actually not that involved in the teaching of value investing, although I teach a course on business analysis and valuation. That’s why this chair came to me. But my actual background is, I’m a computer scientist. For my undergraduate, I did a degree in Computer Science from IIT in India. And then, I got my MBA from this place called IIM Ahmedabad. These are the premier engineering and MBA institutes in India. And then, I came to Harvard for my PhD. My PhD was in a thing called Business Economics, which is a joint degree between economics and the business school. But within the business school, my area– my interests–
Tobias: Was that under Michael Jensen?
Partha: Well, Michael Jensen was certainly involved. But when I was there, he was already towards the end. Among the other well-known people from this program is Michael Porter, for example– he’s a graduate of the PhD in Business Economics– and lots of people. A guy called Tarun Khanna, who is very well known at Harvard. And there are many other people very successful at all the universities. So, within the business school, my area shifted more into accounting as opposed to finance, because I found that if you’re interested in doing firm-specific analysis, the accountants actually do it better than the finance people. And I’ll tell you why: because finance people treat accounting as a black box.
If you just look at the whole DCF valuation approach: “Oh, I don’t trust accounting. Let’s get rid of every single accrual. Let’s add back depreciation. Let’s adjust for working capital.” Even though a lot of research has shown that earnings are a much better predictor of value than cash flows, you still have this thing where everything has to be done in terms of cash flows. So, I ended up going into accounting because I found that, especially at the firm level– finance people are very interested in the markets as a whole, but if you’re interested in understanding what’s happening at the firm level, you cannot be agnostic or, even worse, completely ignorant about accounting matters.
I had a finance prof who actually came to me and said, “Partha, is depreciation an asset or a liability?” I said, “Depreciation is not even on the balance sheet. It’s on the income statement. And there’s something called accumulated depreciation, which happens to be a contra asset.” That’s the extent to which sometimes– these are really, really top people in finance and they say, “Oh, I don’t care. I just do everything in terms of cash flow.” But that can lead to bad decisions, because understanding the accounting is very important; otherwise you do things very mechanically. You take the number as given and you just blindly apply a multiple to it, whether it’s an EBITDA multiple or a P/E multiple, without worrying about how that E came into being.
***
Combining The Search For Quality And Value
Tobias: One of your other papers that I read and really enjoyed– combining value and quality– which is something that I try to do as well. I sometimes find it difficult to believe that the two are separated, because I don’t know how you get the value without the quality. But perhaps you could take us through that paper?
Partha: Yeah, absolutely. If you look at fundamental analysis– and I’m taking more of an academic approach here, because I’m sure practitioners do both– if you look at papers like my G-score or Piotroski’s F-score, these are papers which are looking for quality. They’re looking for signals of fundamental strength or weakness. But then there are other papers– Frankel and Lee had this paper on the V to P ratio, where they try to estimate the intrinsic value of the firm using some sort of valuation model and analyst forecasts. And they create this ratio called the V to P ratio, value to price. If the V to P ratio is high, that’s an undervalued firm, because the value is much higher than the price.
So, those papers focus on value, but they’re just focusing on some very basic summary statistics, like earnings forecasts and some model of extrapolation. They’re not focusing on quality. Papers like mine and Piotroski’s focus on quality, but don’t focus on valuation, other than just looking at the book to market ratio as a signal of overall valuation. You’re not looking at how highly valued you are relative to your forecasts and prospects. You’re just saying, on average, the market is not getting it, so I think there are going to be some undervalued firms here. So, you would think that these two signals are highly positively correlated. It turns out the correlation is negative. And the answer is actually very intuitive if you think about it. Quality is not cheap. If you want to get a Tesla, you have to pay Tesla prices for it. If you want to get high-quality products, you go to Neiman Marcus or Nordstrom– I know one of them went bankrupt, but at least you used to– not to Walmart. The problem is quality is expensive. You either have to pay a price for quality or you’ve got to put up with crap, lower quality stuff.
***
The TJ Maxx School Of Investing
Partha: So, I’m going to give you another analogy again, I’m sorry I have this really–
Tobias: No, please go ahead.
Partha: –bad habit of using analogies. I call this the TJ Maxx School of Investing. Why do you shop at TJ Maxx? You shop at TJ Maxx because you want to get the sort of stuff you used to get at Neiman Marcus or Nordstrom, but it’s sitting on a rack– it’s the brand name product, but it’s 60% off. So, you want to go and say, “I snagged this Hugo Boss jacket outlet shopping which would have cost me 300 bucks, and I paid 70 bucks for it,” and stuff. Because on average, the Hugo Boss jacket is going to be very expensive, because it is quality. The problem with quality is that when you get high-quality stocks, there’s not too much alpha there, because it’s already priced in. And the problem with low-priced stocks is there’s a reason why they’re low priced– most of them are crap.
So what you want is the stock which is high quality but has a reasonable valuation as your long. So, your on-sale merchandise at TJ Maxx should be your long. And I’m sure you can find some really overpriced, useless stuff at full price at Nordstrom. That should be your short.
Tobias: Full of moth holes.
Partha: Yes– fully valued, but it’s really not a very good stock; that should be your short. So, we try to combine these two. And by the way, I’m not the first one to do it. If you look at some of these– Greenblatt’s magic formula or even the fundamentals of Graham and Dodd– many of the signals try to incorporate these two things. They do it sequentially: let’s do this, let’s do that. But if you look at the correlation of the signals, the correlation is negative, because these two things fight against each other. When we did this paper– a paper with my coauthor, Kevin, which was published a few years ago– I was trying to combine these–
We did two signals of quality. One was the Piotroski F-score; the second was my G-score. And we did two signals of cheapness or valuation. One was this V to P ratio, which, like I said, is a little technical. The second was something very simple like the PEG ratio. The PEG ratio is something people use. It’s a heuristic, it’s not perfect, but it actually does a reasonably good job, especially if you’re doing it in portfolios and stuff. It’s the price to earnings ratio divided by the growth rate. If your PEG ratio is high, it basically means that you are paying a lot for whatever growth you’re getting. If your PEG ratio is low– because remember, the numerator is a P/E ratio, it’s how expensive something is, how much you are paying for $1 of earnings, and the denominator is growth. Why are you paying so much? Because there’s growth, and therefore, there’s going to be more earnings in the future. We said, “Let’s look at that– either this V to P ratio or the PEG ratio– as a determinant of valuation or cheapness.”
And what we basically found was, firstly, these things are negatively correlated. High F-score firms and high G-score firms– these are high-quality firms– tend to have higher PEG ratios or lower V to P ratios; that is, they are more highly valued. Conversely for the lower F-score firms. Now, both these strategies individually work. The F-score strategy works long-short, the G-score strategy works long-short, and the V to P and PEG strategies also work long-short, but they are working against each other. They’re doing something very, very different. So, the question is, given that they are negatively correlated, can we make them work with each other? So, obviously, look at the subsets. Let’s go long, not just the high F-score firms, but the high F-score firms which also have moderate valuations. And conversely, let’s go short, not just the low-quality stocks, but the low-quality stocks which are also highly valued. And what we found was– obviously, the sample size gets smaller, because like I said, these things are negatively correlated, so there are not that many high F-score firms which also have low valuations. But let’s say you find those, and you also find the other group– we find that those hedge returns increase by a factor of two or three, so instead of a 10% hedge return, you get 25% as a hedge return.
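A minimal sketch of that combination, assuming you already have a quality score (an F-score or G-score) and a PEG ratio per firm; the percentile cutoffs and column names are illustrative assumptions, not the thresholds used in the paper.

```python
import pandas as pd

def quality_value_buckets(df: pd.DataFrame) -> pd.DataFrame:
    """Tag candidate longs and shorts by combining a quality score with a
    cheapness measure. Expects a 'quality' column (e.g. an F-score or G-score)
    and a 'peg' column (P/E divided by expected growth)."""
    out = df.copy()
    high_quality = out["quality"] >= out["quality"].quantile(0.7)
    low_quality = out["quality"] <= out["quality"].quantile(0.3)
    cheap = out["peg"] <= out["peg"].quantile(0.3)        # paying little for the growth
    expensive = out["peg"] >= out["peg"].quantile(0.7)    # paying a lot for the growth
    out["long"] = high_quality & cheap       # the TJ Maxx rack: quality at a discount
    out["short"] = low_quality & expensive   # full price for not much quality
    return out
```

Because quality and cheapness are negatively correlated, both buckets end up small, which is the trade-off Partha notes above.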
And one thing I must say is, many of these returns have actually weakened in the last 5 or 10 years, but this is a mystery in finance. If you look at all of what they call return-generating processes, they’ve become much more random recently. But it still doesn’t produce negative returns. It’s just that instead of the skyline of Manhattan, you probably have the skyline of Mumbai right now.
[laughter]

***
The Efficacy Of Fundamental Analysis Has Declined Over The Last Decade
Tobias: That was one of the things that I wanted to talk to you about. I think it came from the quality and value paper. You said that the efficacy of fundamental analysis has declined over the last decade. Do you have any thoughts about why that might be? More people hunting, better computer power, something like that?
Partha: I think all of those things are true. More people hunting, better computer power. There have been some transformational changes in disclosure. For example, just think back to the 1980s: if you needed some financial statements, or you needed to analyze 1,500 companies together, you’d have to go download this– forget download, you’d have to write to the company and get their annual report and stuff. Now, you have something like EDGAR, and you have this thing called XBRL, which allows you to search inside EDGAR. You have ways in which you can do machine learning, you can do all these kinds of things. So, data extraction and data analysis have gotten better, I think.
Also, I think many of these anomalies have weakened, by the way– like the accrual anomaly, which Richard Sloan is famous for, has essentially disappeared. The quality of work by financial intermediaries has also improved. Actually, I have a paper on the accrual anomaly where we show that it has essentially disappeared. Some researchers before me showed that one reason is people actually investing in it, and therefore it’s been arbitraged away. But the second explanation, which I come up with in my paper, is that analysts are now issuing cash flow forecasts. Analysts are issuing both earnings forecasts and cash flow forecasts.
Basically, earlier on, if you were not making your earnings, you could kind of manipulate the accruals and get your earnings. But now, if you’re being analyzed on both earnings and cash flows, you can’t do that, because the accrual is the difference of the two. What I think is, all of that has essentially made markets a little more efficient. And obviously, if markets are more efficient, trying to find inefficiencies and trying to find alphas is a little more difficult.
***
The Accrual Anomaly
Tobias: Just for the listeners, part of the accruals anomaly is where your earnings overstate your cash flow and there needs to be [crosstalk] asset created, and it’s accrued as an asset.
Partha: Essentially, the accrual anomaly– it’s a very, very seminal paper written by Richard Sloan, a professor now at the University of Southern California, in 1995 or 1996, I forget when. He looked at this thing called accruals: simply earnings minus cash flows, divided by assets. And then, he sorted firms into high accruals and low accruals and looked at future returns. And he found that consistently, high accrual firms had negative returns and low accrual firms had positive returns. And what he showed was, this is because earnings have two components. They have a cash flow component and an accrual component. The cash flow component is likely to persist. That’s the real economic story. The accrual component is likely to reverse.
There are some periods where you have lots of receivables; the next period, you’ll have fewer receivables. In some period, you have lots of inventory because you’re gearing up for an expansion; the next period, you’ll have less inventory, and stuff. But the markets don’t understand this differential persistence. He showed that you could earn pretty consistent hedge returns, and you could up to the mid-2000s. But then, like I said, people started investing in it heavily. You had many of these quant funds– it’s something so easy to set up– many of these quant funds investing in it. So, if you look at the AUM going into these kinds of quantitative strategies, it kind of rose exponentially as the accrual strategy’s returns declined. And then, you have this other explanation, like the one I came up with, which is that the reason why firms might get misvalued and investors get misled has gone away, because now people are actually paying attention to these [crosstalk]
Tobias: The additional scrutiny keeps the managers a little more honest.
Partha: Absolutely.
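For readers who want the mechanics, here is a minimal sketch of the Sloan-style sort described above, with assumed column names; the original paper’s accrual definition is more detailed.

```python
import pandas as pd

def accrual_deciles(df: pd.DataFrame) -> pd.DataFrame:
    """Sloan-style accrual sort: accruals = (earnings - cash flow) / assets,
    then long the lowest-accrual decile and short the highest. The columns
    'earnings', 'cfo', and 'assets' are assumed names for annual firm data."""
    out = df.copy()
    out["accruals"] = (out["earnings"] - out["cfo"]) / out["assets"]
    out["decile"] = pd.qcut(out["accruals"], 10, labels=False) + 1   # 1 = lowest accruals
    out["long"] = out["decile"] == 1
    out["short"] = out["decile"] == 10
    return out
```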
***
Share-Based Compensation Leads To Lower Returns
Tobias: One of your papers– I discovered it after you got in contact with me, but I’m glad that I did, because it’s an issue that I think is particularly important right now– and that’s share-based compensation. Basically, you show that higher share-based compensation leads to lower returns, and the way that you get there is via higher valuations. Perhaps it’d be better if you describe it.
Partha: No, I think it’s a pretty fair explanation. Let me give you the genesis of how this paper came into being. I teach business analysis and valuation, which basically is a financial statement analysis course. One of my students here from Toronto, his name is Wuyang Zhao– he’s now a professor at UT Austin at the McCombs School of Business, and he teaches the same thing– said, “Hey, Partha, I was looking at this free cash flow calculation, and I found that different books do it differently. Some people start with cash flow from operations– they use the cash flow from operations number, and then they simply subtract out CAPEX and probably add back interest or something, and you get free cash flow. Other people start with net income, add back depreciation, and then do the changes in working capital and stuff, and then the CAPEX and all. And the two are different, because how they treat these non-cash expenses is different. The second approach starts with net income and therefore it includes these non-cash expenses like stock-based compensation– it’s the largest of them, by the way. The approach which starts from cash flows– if you look at your cash flow statement, you always add back things like stock-based compensation because it’s a non-cash expense– systematically makes the free cash flow higher. The question is, which is the right one?” And there’s no clear answer to that, because yes, stock-based compensation is not a cash expense and therefore it is right to add it back in the calculation of cash flow. But is it right to exclude it in the calculation of free cash flow? I’m not so sure, because what is free cash flow?
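A small numerical illustration of the two routes being contrasted, with made-up figures; the point is that the route starting from cash flow from operations ends up excluding stock-based compensation, while the route starting from net income keeps it in.

```python
# Two common free cash flow routes, with made-up numbers, to show the gap
# described above. SBC = stock-based compensation (a non-cash expense that
# has already been deducted in arriving at net income).
net_income = 100.0
sbc = 40.0
depreciation = 30.0
working_capital_change = -10.0   # cash absorbed by working capital
capex = 50.0

# Route 1: start from cash flow from operations, which adds SBC back
cfo = net_income + depreciation + sbc + working_capital_change
fcf_from_cfo = cfo - capex                 # SBC is effectively excluded

# Route 2: start from net income and add back only depreciation
fcf_from_net_income = net_income + depreciation + working_capital_change - capex

print(fcf_from_cfo)          # 110.0
print(fcf_from_net_income)   # 70.0 -> the two differ by exactly the SBC add-back
```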
Free cash flow means the amount which is truly left over for the shareholders after you’ve taken into account all the things you need for your future growth. Think about stock-based compensation. If you’re giving a lot of stock-based compensation to your employees– now, we had this big controversy, but since 2005, it’s an expense which you have to recognize on your income statement. It is an expense because you are giving up something of value at a lower price to your employees. Now, when you exclude stock-based compensation, you’re excluding the impact of the stock-based compensation on the current shareholders.
The impact can be twofold. Number one, there will be future dilution. If a company does absolutely nothing and allows all these options to get exercised as and when they get exercised, your current shareholders will get diluted, because they’ve got to share the wealth of the company with these new shareholders, who happen to be the employees of the company. The second thing which is likely to happen is, to prevent this dilution– if you look at many of these companies in tech and all, they are constantly repurchasing shares to prevent the dilution. How do you repurchase shares? You repurchase shares by using cash. So that cash is not free– it’s being used to essentially service this thing. The only difference is, it makes sense because, instead of paying the guy– let’s say you have the choice of paying somebody $100,000 in cash, or you pay them $60,000 plus $40,000 worth of options. Yes, you save $40,000 in cash right now, but you’re on the hook for it later on when you start repurchasing shares to prevent this guy from diluting your equity. It’s not a free cash flow.
So, we said, “If that’s the case, it must be the case that firms–” Let’s say markets ignore this, they have no clue. They take these numbers mechanically, and they apply multiples and so on and so forth. Then firms which have a lot of stock-based compensation must be systematically overvalued. So, we asked the question: is it the case that the market is systematically overvaluing these companies? And so, we looked at traditional measures of valuation– your P/E ratios, your price to book ratios, price to sales ratios and stuff. And we found that, yes, these ratios are higher. Now, the obvious question: “Yeah, but that’s obvious. These firms are in tech, and these guys are in industries which have higher valuation ratios. So, you’re just showing that.” So, we control for that. We control for industry, we control for growth and stuff, and we show that even controlling for all of that, when you have stock-based compensation, you systematically have higher valuation ratios.
And the next question: if that’s the case, and if it’s overvaluation, you should see the results in future returns. And we find that, systematically, for these firms– if you look at the next one or two years, the returns tend to be a little lower, because the markets eventually find out, or they see the actions the companies take that dilute their shareholders, or the repurchases or whatever. We don’t know the exact mechanism and we can’t pinpoint exactly when the market sees the light. But if you look at a large sample, that’s what we systematically find.
And again, this goes back to the fact that if you really want to do fundamental analysis properly, you need three skills, in my opinion, in terms of knowledge areas of business. You need to understand business– that is, strategy and, fundamentally, the economics of the industry and so on and so forth. You obviously need to understand finance. But the third thing, which I think many people don’t appreciate, is you need to understand accounting. You don’t need to be a CPA, but you need to understand the basics of what an income statement is, what a balance sheet is, what a cash flow statement is, and how these things articulate with each other. Because if you don’t get that understanding, you are going to be the person who uses accounting numbers as a black box, and you’re systematically going to get misled.
Tobias: Yeah, I couldn’t agree more. And just on that point, when I wrote a book in 2012 called Quantitative Value with a coauthor, one of the adjustments that we made to Piotroski was to include– in addition to share repurchases, we included share issuance, so we made it net share issuance. Just a tiny little change which slightly improves the performance.
Partha: It improves, okay.
Tobias: And it leads to a more intuitive output– the companies that come out. Otherwise, you’re favoring these companies that tend to buy back a lot of stock, but often they’re buying back just to tidy up–
Partha: Exactly.
Tobias: The option issuance.
Partha: Although, to be fair, in Piotroski’s subset– value stocks– there probably aren’t that many tech companies and stuff. It’s probably a bigger problem in the G-score subsample, if you look at that. But certainly, if you’re applying Piotroski across the board to all firms, I think that’s certainly a valid and probably very good adjustment to make.
Tobias: It was one of the things that I have observed too. The Piotroski score does quite well outside of that book to market decile. It does reasonably well across the entire universe. If anything, it’s held back a little bit by that cheapest book to market decile, because those seem to be such– they’re very small firms, and they’re not great companies in the first instance.
Partha: Right. But actually, if you think about it, that was part of Piotroski’s idea. You want to invest in these firms. Going back to my analogy, you need the rough to find the diamonds. In fact, the reason why some of those things continue to stay misvalued is simply because people are scared to invest there– they don’t meet the cut-offs of size or stock price or liquidity for you to want to go and invest.
***
Removing Predictable Analyst Forecast Errors To Improve Implied Cost Of Equity Estimates
Tobias: One of your other areas of research is the implied cost of capital. So, let’s just talk about that a little bit. What’s the area of research?
Partha: So, the implied cost of capital is basically an idea which came about– again, at the intersection of finance and accounting– I would say around 20-odd years ago. One of the, let’s say, well-known people whom some of your listeners may have heard of is Charles Lee– he’s now at Stanford– who was one of the early guys there. The basic idea is, it’s like an IRR from a firm’s perspective. Let’s say you have a firm which has a certain value, which is the stock price, on the left-hand side. And on the right-hand side are its future cash flows, whatever. You can think of it in terms of future earnings or future cash flows– it doesn’t really matter.
So, what discount rate justifies this current stream of earnings? For example, let’s say I have a firm which has EPS of $1.50, and I expect the EPS to grow at like 8% for the next five years. And then I have some projections about what’s going to happen for the next however many years. Why is it worth whatever it’s worth in terms of the stock price? What this implied cost of capital does is, it uses some sort of a model and says, “Okay, this is what I think the current forecasts are for the next few years”– because typically, you get forecasts for three or four years. You’ve got to make some assumptions about what happens from year five to year infinity, because that’s what the valuation horizon is. So, you’ve got to make some sort of terminal value assumption. And then say, given this stream of cash flows, what must the discount rate be?
So, in a sense, if you think about valuation, it’s inverting the valuation process. Normally, the way valuation works is, you’re given the cash flows, you’re given the discount rate, you come up with a value, and you compare value to price and say this is undervalued or overvalued. This one is saying, “No, no, I just want to know: if this valuation model is, you think, reasonably representative, and if these forecasts are right, what must the implied cost of capital be?”
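Here is a minimal sketch of that inversion: given a price, a few years of per-share forecasts, and a terminal-value assumption, solve for the discount rate by bisection. The growing-perpetuity terminal value, the growth rates, and the numbers are illustrative assumptions, not the specification of Claus and Thomas or any other published ICC model.

```python
def implied_cost_of_capital(price, flows, terminal_growth=0.03,
                            lo=0.04, hi=0.50, tol=1e-6):
    """Solve for the discount rate r that makes the present value of the
    explicit forecasts plus a growing-perpetuity terminal value equal the
    current price, using bisection. `flows` are per-share amounts for the
    explicit forecast years; everything here is an illustrative assumption."""
    def pv(r):
        explicit = sum(f / (1 + r) ** (t + 1) for t, f in enumerate(flows))
        terminal = flows[-1] * (1 + terminal_growth) / (r - terminal_growth)
        return explicit + terminal / (1 + r) ** len(flows)

    for _ in range(200):
        mid = (lo + hi) / 2
        if pv(mid) > price:     # present value too high -> the market must be using a larger r
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

# Example: $2.00 of per-share flow growing ~8% a year for five years, stock price of $40
flows = [2.00 * 1.08 ** t for t in range(1, 6)]
print(round(implied_cost_of_capital(40.0, flows), 4))   # about 0.094 with these made-up inputs
```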
Initially, this was a very, very influential paper by authors called Claus and Thomas. This is Professor Jacob Thomas, who’s a very well-known professor of accounting, now at Yale, formerly at Columbia. If you did an MBA before the 2000s, you know the CAPM– they would tell you to use a market premium of 7% to 8%. That was extremely common. But if you look at it recently, the market premium has come down to more like 4%. So, this paper essentially says that, for whatever reason, the market seems to be discounting the average firm’s earnings at like 4% over the risk-free rate– that seems to be the market premium. This is the initial genesis of this implied cost of capital idea.
The cool thing about this particular idea is– I’ve done some work on it. I don’t want to get too technical here, but there are different ways of estimating the implied cost of capital, and there are some challenges as well. It’s only as good as the quality of the inputs: if your forecasts are bad, if your model is bad, the implied cost of capital is going to be bad. So, I’ve done a lot of research which tries to refine this model and tries to say, okay, what’s the best way of calculating the implied cost of capital? But the reason why I find this thing very interesting is, I use what I call a triangulation approach. I use this in my classroom, by the way– it’s not research. I actually have a session on this implied cost of equity in my MBA class. And the students love it.
By the way, I also talk to them about the F-score and the G-score and all these things. Out of my 12 sessions, I try to spend at least two of them on things related to research, because I think many of us in academia have this ivory tower view where we do our research from 9:00 to 1:00, and then we go teach from 2:00 to 4:00, and never the twain shall meet. And I think that’s terrible, because the reason why students come to do an MBA, spending all the money that they do, is not just to learn stuff– they could have gotten that from the books. They should try to get some insights from your research as well. And that’s what I try to do.
So, what I do in my class is, I introduce this thing called a triangulation approach, where I say, let’s calculate the implied cost of capital. I use a very simple formula for the implied cost of capital– if you take the PEG ratio, okay, it’s a little technical, but not much: the square root of the reciprocal of the PEG ratio. So basically, the square root of growth divided by the P/E ratio. And by growth, I mean growth for the next 5 or 10 years. If you go look at Yahoo Finance, they’ll have this five-year growth– long-term growth is what analysts call it. You take the growth, divide it by the P/E ratio, and take the square root of that; that gives you a heuristic for the implied cost of capital. If the ratio is too low or the ratio is too high, what does that mean?
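The heuristic itself is a one-liner; a quick sketch, assuming the long-term growth rate is expressed as a decimal:

```python
import math

def icc_peg_heuristic(pe_ratio: float, lt_growth: float) -> float:
    """Implied cost of equity ~ sqrt(long-term growth / P/E), i.e. the square
    root of the reciprocal of the PEG ratio, as described above. `lt_growth`
    is the analysts' long-term growth rate as a decimal (an assumed unit)."""
    return math.sqrt(lt_growth / pe_ratio)

# Made-up example: a P/E of 25 and 10% long-term growth
print(round(icc_peg_heuristic(25.0, 0.10), 3))   # 0.063 -> roughly a 6.3% implied cost of equity
```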
So, I have this discussion with my students and we do this triangulation. If this ratio is too high, for instance– that is, if the implied cost of equity is like 12% for a firm– it means the market is discounting this firm’s earnings extremely heavily. So, it means one of a few things. It means the market thinks this firm is much more risky than you think it is– so maybe, for whatever reason, your beta estimates are wrong or something; the market thinks the firm is much more risky. Or it means the market is wrong– the firm is potentially mispriced.
This is an undervalued firm, it’s being over-discounted. Or the forecasts are wrong– the analysts are being overly optimistic, and they’re saying this firm is gonna grow at 40% for the next five years, and the market is saying, “No, I think this is all BS, it’s not gonna grow that fast. And therefore, I’m not buying your forecasts, I’m gonna discount it, and I’ll give you a lower price.”
***
What Is The Beta Of Twitter?
So, I try to set up this triangulation framework and I have this very interesting exercise– usually we look at these highly valued firms in the tech sector. So, every year we look at Google and Twitter and Microsoft and Apple and say, what does this ICC tell us? For example, Twitter– if I were to ask you, what do you think the beta of Twitter is?
Tobias: I’d say it’s much higher than one, but I don’t know.
Partha: Okay. The beta of Twitter is like .6 or .7. Can you believe that?
Tobias: No. [chuckles]
Partha: Suppose you do that mechanically. Let’s say it’s .6 as a beta, or .7 as a beta, and apply a market premium of 6%. That’s 4.2%. And currently, the risk-free rate is 1%. You’re telling me that Twitter’s cost of equity is 5.2%? That’s absolutely meaningless. But if you calculate this implied cost of equity for Twitter, the ICC will come out to be something like 8% or 9%. The market’s telling you, I don’t believe this .7 beta– I think Twitter is much riskier. It’s exactly your insight. If you think the beta is 1.2 or 1.3, with a market premium of like 5% or 6%, that gets you to 8.5%. So, the market’s telling you that your beta is wrong. The implied cost of equity is basically a sanity check. You can use it to say, I think something is wrong. Either the beta is wrong, or– the other alternative is that the market is undervaluing Twitter. I don’t know. You need to invest your time and effort to figure out which of these explanations holds.
Tobias: Well, there’s some suggestion that that’s the case because, as we’ve seen, Elliott Management has taken a big position and they’ve raised capital from outside. So, it’s entirely possible that’s the case.
Partha: This ICC at least helps you organize your thoughts. And it gives you one of three possibilities, and then you can figure out which of the three is the most applicable in the situation you happen to be in.
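A quick sanity-check calculation along the lines of the Twitter example above, using the approximate numbers from the conversation; the 1% risk-free rate and 6% market premium are the assumptions used in the discussion.

```python
def capm_cost_of_equity(beta: float, risk_free: float = 0.01,
                        mkt_premium: float = 0.06) -> float:
    """CAPM cost of equity from a reported beta."""
    return risk_free + beta * mkt_premium

def beta_implied_by_icc(icc: float, risk_free: float = 0.01,
                        mkt_premium: float = 0.06) -> float:
    """Back out the beta that a CAPM discount rate would need to match the ICC."""
    return (icc - risk_free) / mkt_premium

# Approximate numbers from the conversation: a reported beta near 0.7, an ICC near 8.5%
print(round(capm_cost_of_equity(0.7), 3))    # 0.052 -> the implausibly low 5.2% cost of equity
print(round(beta_implied_by_icc(0.085), 2))  # 1.25  -> closer to the beta the market seems to be using
```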
***
Pro-Forma Earnings – (EBBS) Earnings Before Bad Stuff
Tobias: That’s very interesting. And I appreciate the example there. Final question, you have some comments on pro-forma earnings.
Partha: My comment is: don’t believe them. Plain and simple. Firstly, I’m not a big fan– okay, this is my bias. I’m a professor of accounting; with all its flaws, I like GAAP accounting numbers. I don’t like pro-forma earnings. There’s an expression, EBBS– earnings before bad stuff. The way companies present it is, “Guys, purely out of the goodness of my heart, just to make your life a little easier, I have stripped out these unnecessary and boring accounting items that these guys at the SEC and the FASB force us to report. And I have given you the reality of our company.” But in most cases, it presents an alternative reality where they’re stripping out all sorts of things which are valid expenses. So, pro-formas really bother me.
The second thing which bothers me is these companies with recurring “nonrecurring” expenses. For example, let’s say you have a company which is being valued for its growth, but the growth is coming inorganically– it’s coming through acquisitions. That’s not a growth company. If I am 10, and you are 10, and tomorrow we are 20, that’s not growth. But unfortunately, there is a lot of research which shows that markets don’t understand the difference between organic and inorganic growth. And part of it is, firms are, let’s say, growing inorganically, and every year there’s some merger taking place, and that merger will have some expenses which get taken out in the pro-forma. So, each time, if you ignore those expenses and yet value the firm for the growth which is coming because of those expenses, you’re going to systematically overvalue these firms. And in systematically overvaluing these firms, you’re also giving them free currency, which they’re going to use to make the next acquisition. Because the other thing which is a well-known fact is that the most popular negative-NPV activity out there is mergers and acquisitions– people do it because the valuation is completely out of whack and nobody holds them to it. And part of it is because the accounting is so deliberately opaque– and pro-forma plays a role in it– that nobody calls them out and says, “Partha, you said this, and this is what the reality was.” By then, the world has moved on, there have been two more acquisitions. So, there’s no way you can even tell what actually happened.
Tobias: Just to put in your collection of anomalous things that have occurred over the last decade, one of them is that mergers and acquisitions seem to have become positive NPV somewhere over the last 5 or 10 years.
Partha: Well, like I said– I need to go look at the data. I’m not certain if that’s—
Tobias: That’s a Michael Mauboussin comment. There are a lot of things where the market is defying logic in very many ways. And that’s another one, but I agree with you that it should be. They tend to overpay.
Partha: Part of it could also be because nowadays– let’s say, in general, if you take out the recent, let’s say, three or four months of perturbations– we’ve been in a bull market for a long time, which means that you have cheap currency and it continues to be cheap. So in a sense, while the economy benefits from quantitative easing and cheap dollars, firms benefit from their own cheap currency in terms of the ability to do M&A. So that’s probably– [crosstalk]
Tobias: I think that’s the reason. I agree. Absolutely, Partha. If folks want to follow along with your research, how do they do that?
Partha: Firstly, two things. This is not just my research– any professor, right? Go to Google Scholar and type the professor’s name, and invariably you will find all their papers. In my case, you can find my papers in three places. You can find them on Google Scholar, and you can find them on this website called SSRN, the Social Science Research Network. Most of us post our yet-to-be-officially-published working papers there.
Tobias: Yeah, very grateful for you doing that.
Partha: SSRN. And the third thing is, you can just go look me up on my website. Just Google my name, Partha Mohanram, at the University of Toronto. Almost all my papers are on my website, so you can download them, and feel free. And if you need to chat with me about my papers and stuff, I’ll be happy to do that. Like any person, the first hour is free, and after that, you can hire me.
Tobias: [laughs] There are so many researchers who only put their papers up inside paywalled journals, so I was very happy to find that you have the working papers on the Social Science Research Network.
Partha: Absolutely. Many junior professors are scared to do that, because they’re afraid of the review process and stuff. I’ve reached a stage in my career where the odd rejection of a paper doesn’t really bother me anymore. So, it’s like, I don’t care, so it’s okay.
Tobias: I also find that the earlier versions– the very early versions of the paper– are the better versions of the paper, and they get twisted a little bit as they get closer to publication.
Partha: Yeah, sometimes the refereeing process forces you to spend so much time trying to rule out every other alternative explanation. It makes the paper a little more academically rigorous, but from a practitioner’s perspective, very often the early version can be quite–
Tobias: The signal gets lost a little bit along the way.
Partha: Probably, yeah.
Tobias: Partha Mohanram, thank you very much.
Partha: Oh, it’s been a pleasure and look forward to seeing this podcast.