In this episode of The Acquirers Podcast, Tobias chats with David Trainer, CEO of New Constructs. During the interview David provided some great insights into:
- Truth Stocks
- Earnings Distortion Delivers Alpha
- Using AI To Decipher Footnotes
- What Are Persistent Earnings?
- Wall Street’s High-Priests Incentivized To Complicate
- Why Do Companies Understate Profits?
- ESG – G Gets E & S Right
- AI Cannot Become Lawnmower Man
- Common Hidden Distortion
- Companies Exchange Words With Graphics To Trick AI
- Which Companies Have The Most Complex Financial Statements?
- Berkshire’s Unrealized Portfolio Gains & Losses
- Ignorance Of Fundamentals
Links in this podcast: Earnings Distortion: The New Value Factor
You can find out more about Tobias’ podcast here – The Acquirers Podcast. You can also listen to the podcast on your favorite podcast platforms here:
Tobias: Hi, I’m Tobias Carlisle. This is The Acquirers Podcast. My special guest today is David Trainer of New Constructs. He’s got a brand-new paper that talks about his core earnings methodology. We’re going to be talking to him right after this.
How are you, David?
David: I’m doing great, Toby. Nice to see you.
Earnings Distortion Delivers Alpha
Tobias: There’s a nice paper that’s just come out about New Constructs and its core earnings methodology. The line that really stood out to me was that core earnings provide a more accurate and persistent measure of a firm’s profitability than traditional metrics. There are two ideas in there that I want to understand. The first one is, let’s understand, what are core earnings?
David: Core earnings are effectively just a better way to measure profits. Some professors started to study this, I think I mentioned it maybe the last time we talked. A couple of guys out of Harvard Business School and MIT Sloan learned about New Constructs and were interested in this work we were doing, and really the big question that came out of it was, “All right, all this extra work that New Constructs does in the footnotes, does it matter? Does it give you incrementally better data, enough better to generate idiosyncratic alpha, or alpha that can’t be explained by other factors like traditional profitability, or value factors, or momentum, or sector weightings, or tilts, or size?” The paper came away with: yes, absolutely it does, and there is nothing else like it in the marketplace. Of course, in order for that paper to get published in a peer-reviewed journal, they had to prove these things to be true, which they did. The paper was published in the Journal of Financial Economics.
The next step for us was to take that insight and say, “Okay, how do we deliver alpha? How do we make it easy for our clients, whether you’re a fundamental manager or a quant manager, to make money on that insight?” It effectively says core earnings is a better measure of profits. So, we can then look at the difference between core earnings and accounting earnings, or street earnings, or consensus earnings. The difference between core earnings and the other traditional measures of earnings is what we call earnings distortion. That’s what we’re calling the new value factor, because that distortion number has been shown, on a standalone basis, to produce idiosyncratic alpha. The paper that you mentioned, Toby, from ExtractAlpha, showed a 10.3% compound annual return over the last five years in a long-short portfolio, and I think something like 9.2% of that is pure idiosyncratic alpha. Yeah, it’s the earnings distortion number, and earnings distortion is the difference between core earnings and traditional measures of earnings.
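For readers who want the mechanics, here is a minimal sketch in Python of the distortion idea David describes. The function and all figures are hypothetical illustrations, not New Constructs’ actual calculation:

```python
# Earnings distortion, as described above: the gap between a traditional
# earnings measure (GAAP, street, or consensus) and core earnings.
# All figures here are invented for illustration.

def earnings_distortion(traditional_earnings: float, core_earnings: float) -> float:
    """Positive => earnings are overstated; negative => understated."""
    return traditional_earnings - core_earnings

# A company reporting $120M of net income whose core earnings are $100M
# (say, a one-time gain buried in "other income") is overstating by $20M.
overstating = earnings_distortion(120.0, 100.0)
understating = earnings_distortion(80.0, 100.0)
```

The sign of the number is what carries the signal: positive distortion flags overstatement, negative distortion flags understatement.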
Tobias: So, I should mention I’m a client of New Constructs, and David very kindly provides it gratis, and it’s an integral part of my investment process. I use it because there are often very important pieces of information hidden in the footnotes. My concern is always that some of this information isn’t going to be caught, for example, some convertible note or something that does make a material difference to the earnings, and it’s just not something that’s otherwise picked up by traditional measures, or it’s not something that you can read in the statements. So, can you take us through the process? An earnings release from a company comes in, and then what do you do? Is there a machine learning, AI process to that? Does that identify the footnote, or how does that all work?
Using AI To Decipher Footnotes
David: Yeah, let’s get nerdy here, because the reason we came to be in touch with the folks at Harvard Business School is that the professor, Charles Wang, said, “David, I have a problem.” This is going back a few years now; it’s not a problem anymore. But the problem was, “Students don’t want to take my class. They don’t see any reason to go through these filings and dig through the footnotes to find these items. So, there’s no reason to teach financial statement analysis, because they don’t think it matters, or it’s not enough of an edge to justify the time it takes to do that work.” I said, “Well, what if there was technology that made getting that extra information costless, or very inexpensive, so you could operate with the advantage of doing that work across thousands of companies without having to take the time to do it?” Charles said, “That’s really interesting. I’d love to know more about that.” That’s what led to a case study, and then a paper, and then more papers.
The key is, how does this Robo-Analyst technology, as we call it, work? How does it make going through filings better, faster, and cheaper? The first part of the answer, Toby, is that it’s not that exciting. We started building this machine learning, natural language processing capability back in 2002. We were drawing up the plans then, and we completed it in 2003, and it’s really just a fancy mechanism for copying what an expert would do in going through a filing. There’s been no fancy algorithm that we created that can automatically figure out how to deal with the data. We have found that there’s no substitute for experts engaging with the data at the source. We just do that with technology that allows us to teach machines to do what experts do.
The first thing that happens is that the filing comes in, and it’s not press releases, actually, Toby, because they don’t contain adequate disclosures. You don’t get the details you need until you get to the Qs, and oftentimes, until you get to the Ks. For example, sometimes a Q has partial disclosures, and you don’t know the real answer until the annual report. But to answer your question specifically, yes, the filing comes in and it goes through a process where the machines auto-parse, identify, and tag all kinds of stuff, and that gets reviewed by a human. Then that gets run through a model. That result gets again reviewed by a human. All the while, it’s parsing and running data checks that compare against what’s been done in the past, so that we can always take advantage of the intelligence that we’ve accumulated in our system based on what the experts have done over the last 18 years.
Tobias: It always interested me to know, because by its very nature, each note is going to be a little bit idiosyncratic. It’s some exception to what is ordinarily reported in the filings themselves, or in the financial statements themselves. So, how do you take these things that are idiosyncratic, each one of them potentially unique, and then work them back into the financial statements?
David: That’s a great question, and it was one that I came to over a lot of time when I was doing this work manually back in my Wall Street days, pre-tech bubble. I went through a lot of filings. In the beginning, I didn’t think it would be possible; it’d never have occurred to me. But as I got into the super-nerd range of looking at thousands of filings, I did see that there was pretty consistently a pattern. There are a lot of differences, a lot of idiosyncratic reporting, but there’s enough consistency that you can get a machine to do a lot of the work, and that’s how our system works. We only let the machine do the work that it’s 100% sure of, and when we see something that doesn’t fit the pattern, it’s kicked up to the analysts to review as an exception. So, we’ve been able to churn through many, many, many, many exceptions. We see that there’s actually a pattern within the exceptions.
Look, you can get 50% of it automated the first time through, and then of the remaining 50%, you knock out 25% of that, and then 25% of that. So, we’re constantly chopping down over time how much we require a human expert to go through the filing. Every time the expert says, “Oh no, here’s how you deal with that,” that gets added to the machine’s memory bank of how to handle a certain situation, so that if that exception ever comes up again, the machine can handle it.
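A toy Python sketch of the exception loop David describes may help picture it. Everything here, the line items and the rules, is invented, and the real Robo-Analyst system is of course far more involved:

```python
# Toy version of the exception loop: the machine handles patterns it has
# seen before; anything new is escalated to an expert, and the expert's
# answer is added to the machine's "memory bank" for future filings.

def classify_items(line_items, memory, expert_rules):
    """Classify footnote line items, learning from expert decisions."""
    results, escalated = {}, []
    for item in line_items:
        if item in memory:                 # machine is sure: handled automatically
            results[item] = memory[item]
        else:                              # exception: kicked up to an analyst
            decision = expert_rules[item]
            memory[item] = decision        # remembered for next time
            results[item] = decision
            escalated.append(item)
    return results, escalated

memory = {"restructuring charge": "unusual expense"}
expert_rules = {"pension settlement gain": "unusual income"}

# First filing: one known item, one exception for the expert.
_, escalated1 = classify_items(
    ["restructuring charge", "pension settlement gain"], memory, expert_rules)
# Second filing: the same exception is now handled automatically.
_, escalated2 = classify_items(["pension settlement gain"], memory, expert_rules)
```

The point of the sketch is the shrinking escalation list: the expert is consulted once per novel pattern, not once per filing.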
Tobias: One of the interesting things in the paper is that it talks about the numbers being more accurate. That makes complete sense to me: if you combine what’s in the footnotes with what’s in the financial statements, it becomes more accurate. But it also talks about the earnings being more persistent. What does that mean, and why is that the case?
David: Well, the accuracy and the persistency are really the same thing, because of what we ultimately take out to arrive at core earnings. We’re not talking about the balance sheet here, though we do a lot of work there, too. When we’re talking about core earnings, it’s like what S&P tries to do with operating earnings, a more normal version of profits. Ultimately, what we are excluding in our measure are the unusual gains and losses that cause the unadjusted, the legacy, the traditional data to be more volatile and less persistent. Our number is just more normal. It’s more persistent in that it doesn’t fluctuate as much, and it’s effectively more reliable.
We see this play out at the individual company level as well as at a macro level. We just recently published a bunch of our macro trends reports, where we look at the fundamentals of the S&P 500 and the top 2000 stocks in the market. We see that whether you’re looking at S&P’s operating earnings measure of profits for the S&P 500 or at accounting earnings, those numbers are much more volatile. Especially over the last couple of years with COVID, where analysts at S&P said earnings were going down way more than they actually were, and now, of course, they’re bouncing back much faster.
I’ve seen this phenomenon since the tech bubble. I remember Mike [unintelligible 00:12:01] calling it the kitchen sink effect. When things are bad, we’re going to write down everything we possibly can, make the numbers look as bad as we can, because sentiment is terrible, everything’s bad, and even if we had good numbers, it wouldn’t matter anyway. Then, the benefit is that on the way out, we can show even better earnings growth and stronger comps, because we’ve depressed the numbers here, we’ve taken all these charges and write-downs, and now the numbers look even better coming out, when markets are much more positive and you get rewarded more for growth. So, companies kind of game the system that way during times of crisis. But the persistency thing is that we’re getting rid of those unusual gains and losses.
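The persistence point can be illustrated numerically. Below is a small Python sketch, with invented earnings series, showing that a series stripped of one-off charges has higher year-over-year (lag-1) autocorrelation than the same underlying trend with kitchen-sink noise in it:

```python
# Persistence illustrated: lag-1 autocorrelation of annual earnings.
# Both series and all figures are invented for illustration.

def lag1_autocorr(series):
    """Lag-1 autocorrelation of a list of annual earnings figures."""
    n = len(series)
    mean = sum(series) / n
    num = sum((series[i] - mean) * (series[i + 1] - mean) for i in range(n - 1))
    den = sum((x - mean) ** 2 for x in series)
    return num / den

core = [100, 104, 108, 112, 116, 120]       # smooth "core" earnings trend
reported = [100, 104, 60, 112, 170, 120]    # same trend plus kitchen-sink noise

# The smooth series tracks itself year to year far better, i.e. it is
# more persistent: lag1_autocorr(core) > lag1_autocorr(reported).
```

This is the same intuition behind the autocorrelation comparison in the academic paper, reduced to a toy example.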
Why Do Companies Understate Profits?
Tobias: That makes complete sense. One of the parts of the paper talks about autocorrelation, which is just how close one year is to the next year or to the preceding year, and it’s noticeably higher for core earnings than it is for net income. The way that you’re achieving that is by knocking out some of these charges, or reserves, or whatever they’re taking, that are probably unwarranted, where they’re using that kitchen sink effect that perhaps GE was famous for, having those reserves, and you’re able to identify those. So, it’s interesting that you’re also identifying them when they’re being used on the downswing: you can say that they’ve over-reserved, or overcharged, or they’ve made some change here that’s probably for the purposes of future growth.
David: We see that all the time. People say, “Why would a company understate their profits? Why would they knock them down?” It’s the setup for better comps. You have to keep in mind that they’ve also got better information about what’s probably going to happen, and that can potentially affect when they issue options. There are a lot of things that go into why management might choose to take expenses or gains at certain times. There’s also a lot of discretion as to where these things get reported. So, we’re seeing the most signal in that earnings distortion number, the difference between core earnings and net income, and we’re seeing the strongest, sharpest signal in the hidden portion of earnings distortion, because there are two parts: the hidden and reported portions of earnings distortion.
Reported earnings distortion is what you can find on the income statement. A lot of times, companies report unusual gains and losses on the income statement. Merger and acquisition related costs, gains on sale, those can show up on the income statement, or sometimes they don’t, and a lot of the time they don’t. Those that don’t, we call hidden, and you have to dig through the MD&A and the footnotes, sometimes the cash flow statement, but mostly the footnotes and the MD&A. That’s where we really add the most value, because that’s what’s getting missed the most.
Common Hidden Distortion
Tobias: What’s an example of a common hidden distortion?
David: There’s a ton. We actually have 13 categories of them in the Harvard paper and in our data set, so that we could look at which of those individual distortion items tend to have the largest effect. It turns out there’s not always enough data in any one individual distortion type, unless you break it down between hidden and reported. But they can be related to gains and losses on derivatives, or to merger and acquisition costs. Those are probably the most common. Foreign currency translation is also a big one that’s pretty common. Companies that engage in business in a lot of parts of the world will see that as a fairly common item that they often don’t report, and it can have a big impact on earnings.
There’s a lot of stuff related to pensions. Some of the stuff in pensions happens so often that we’ve decided to leave it in core earnings, but then there are a lot of unrealized gains or losses, or actual-return gains and losses, that can happen and have a big impact as well. It’s hard to say, but there’s a lot. I think that’s the trick, Toby: you don’t know unless you look. We’d all like to believe that if we just checked this one item, if we could just automatically search for this one string of text, then we would find all these things. Companies are too smart for that.
Tobias: They just change the description every time so they can hide it. If you’re looking for a specific string of text, they’re wise to that, and they change it so that it’s stated in a slightly different way, and that’s enough to fool you if you’re searching for that specific string of text. It’s diabolical, really.
David: It really is, right? I think it’s important to pause here and press on this point a little bit more. I’ve been doing this, again, since 1996. It was clear to me after a couple of years that companies could make their disclosures a lot easier to read if they wanted to. The best example is the fact that there’s a different set of terminology on the income statement in a press release versus a 10-Q versus a 10-K. You see different descriptions for line items across all three of those, significantly different. A lot of times, there are even different numbers of disclosures: the number of items that they display for an income statement in a press release can be half of what you see in a Q, which can be half of what you see in the K. They’re using, again, different descriptions, and the same is true in the footnotes.
Look, as a former auditor, I’m thinking to myself, “Why are they making more work for themselves? Why not just have the same template?”
Because a lot of times, I know that as an auditor, you start with the prior year’s working papers and documentation, and you just build on that for the next year, and yet things change. I think this is part of why XBRL doesn’t work, because it makes sense that it would be easier, and probably even cheaper, for companies to keep things the same every year, but they don’t. The number of tags that they’re using in XBRL is growing significantly every year; I think we’re up to 40,000. Companies don’t always follow the convention, and I remember when I was working with FASB and some of the accounting groups, I was not really very sanguine on the potential success of XBRL, because I’d seen that if companies had wanted to be more transparent and make their filings easier to read, they’d clearly chosen not to. So, as long as the success of XBRL depended on the willingness of corporations to be more transparent in their reporting, it wasn’t going to happen. That’s been the case.
Companies Exchange Words With Graphics To Trick AI
Tobias: One of the things that I have seen too is that, in the PDF release, they exchange a word for a graphic. So, the word appears to a human reading it and is indistinguishable from the rest of the text. But a computer reading it encounters a graphic that doesn’t contain the word as a piece of text. I don’t want to name anybody because I can’t remember exactly who it was, but in recent times, in the last three to six months, I saw that happen.
David: Yeah, that’s Tesla. I think Michelle Leder at footnoted.org does tremendous work in this area too. She was one of the first to point that out, if not the first. A lot of times, we’ve got to convert stuff to text, because the PDFs and these other documents just get too difficult to read. But yes, exactly: they could just use the text in the PDF, but they took the time and trouble to take a picture and insert that into the PDF instead of leaving the table. So, they’re doing extra work to make the filing harder to read. Yeah, Toby, that’s a perfect example. A perfect example of how companies are going out of their way to obfuscate financial performance, to make it more difficult for folks like us to do our job, and for everyone to do this work.
Tobias: So, David, you do the core earnings calculation, but then how are you deriving a trading signal from that? What is the nature of that trading signal? Is it saying these companies are more distorted and these companies are less distorted, and is that the signal to go long or short? Or is there a value component to that? How do you think about that?
David: Yeah, great question. There are three buckets here. There are companies with high levels of distortion; they are overstating their earnings. Those are the ones you want to short. There are companies with low or negative levels of distortion; they’re understating their earnings. That’s where you want to go long. There’s a third bucket that we’ve discovered that also outperforms at both a large cap and small cap level. That’s what we call the truth stocks, the companies where there’s zero distortion, or really small amounts of distortion. This was actually a surprise to us, and it isn’t covered in the Harvard and MIT paper at all. But it was something that was identified by one of our data partners, Exabel.
When we first looked at this, we didn’t see as strong an outperformance as we do in some of the long-only or long-short portfolios, but we do see outperformance. We see way lower volatility and even better Sharpe ratios. When it was first presented, I was surprised, and then I thought to myself, well, of course the market would appreciate the companies that are telling the truth about earnings. They would see less volatility. It’s almost like we’re thinking about this as a new ESG factor: the companies that tell the truth about profits, as a way to reflect on the quality and the integrity of the management team.
Tobias: Yes, potentially, a governance factor, the truth in reporting or something like that?
David: Agreed. Yeah, it’s an interesting thing, but you hit the nail on the head in terms of how you use it for trading. It’s looking at that distortion number, and then you normalize it, or divide it, by total assets, that’s just reported assets, or by market value. Depending on the timeframe, and depending on whether or not you’re looking at analyst distortion, analyst distortion being the difference between core earnings and analyst consensus numbers, you can normalize by total assets, market value, or the absolute value of core earnings. As they’ve done these tests with the machine learning, they look at all three, and in different timeframes, different normalizers work a little better.
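A minimal Python sketch of the bucketing and normalization David describes might look like the following. The 2% threshold and all figures are hypothetical, not New Constructs’ actual cutoffs, and only the total-assets normalizer is shown:

```python
# Three buckets from normalized earnings distortion, as described above:
# overstaters (short), understaters (long), and low-distortion "truth" stocks.
# Threshold and figures are invented for illustration.

def distortion_bucket(core, reported, total_assets, threshold=0.02):
    """Bucket a stock by earnings distortion scaled by total assets."""
    scaled = (reported - core) / total_assets
    if scaled > threshold:
        return "short"   # overstating earnings
    if scaled < -threshold:
        return "long"    # understating earnings
    return "truth"       # little or no distortion

# Hypothetical companies (core earnings, reported earnings, total assets):
b1 = distortion_bucket(core=100, reported=130, total_assets=1000)  # overstater
b2 = distortion_bucket(core=100, reported=75, total_assets=1000)   # understater
b3 = distortion_bucket(core=100, reported=101, total_assets=1000)  # truth stock
```

Swapping `total_assets` for market value or `abs(core)` gives the other normalizers mentioned; which works best varies by timeframe, per the interview.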
Tobias: Given that you’ve been doing this for 25-plus years, 26 years, have you noticed a change over time in the amount of distortion? I ask because the late 1990s were famous for there being quite a lot of distortion, and there were some attempts made to rein that back in. Is it the case that as the bull market gets older, you see more and more distortion, or is it just that there’s more distortion over time?
David: The Harvard and MIT professors pointed out that there is more distortion over time. There’s more distortion in the underreported expenses, so companies understating earnings, and we think that’s simply because of the nature of the world. There are a lot more ways to spend money than there are to make money. That’s a trend.
Honestly, Toby, you’ve actually hit on something that we’re just now scratching the surface on, because it’s a lot of work to do this. We’ve been looking at the difference between core earnings and S&P Global operating earnings, and GAAP earnings, for a long time. We see a meaningful difference over time. But what we lose in looking at just the absolute difference is the amount of under- or overstatement, because a lot of that cancels out. If you get one company that’s understating and one company that’s overstating, they’ll cancel out. So, we’ve started to track the percentage of companies that are overstating by 10% or more and understating by 10% or more.
We’ve seen, in just the last couple of years, because I’ve only done it for about eight quarters, a pretty marked increase. It’s a better way to look at it than adding together how much companies are overstating and how much they’re understating, because we’ve seen a big flip-flop in the last eight quarters.
During the market meltdown at the start of COVID-19, we saw a lot more companies understating. Now, we’re seeing a whole lot more overstating. But in aggregate, that number is going up, and that’s consistent with what the Harvard and MIT professors tracked back to 1998. We want to do it in a more granular way, where we look at the number of companies overstating and understating by more than 10% compared to operating or consensus earnings, or analyst earnings.
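The tracking David describes, counting companies misstating by more than 10% rather than netting the amounts, can be sketched in Python. The sample figures are invented:

```python
# Toy version of the macro tracking described above: the share of companies
# overstating or understating earnings by more than 10% versus core earnings.
# Netting the dollar amounts would let over- and understaters cancel out;
# counting each side separately avoids that.

def misstatement_shares(companies, cutoff=0.10):
    """Return (share overstating, share understating) by more than `cutoff`.

    `companies` is a list of (core_earnings, reported_earnings) pairs.
    """
    over = sum(1 for core, reported in companies
               if (reported - core) / abs(core) > cutoff)
    under = sum(1 for core, reported in companies
                if (reported - core) / abs(core) < -cutoff)
    n = len(companies)
    return over / n, under / n

# Five hypothetical companies, each with core earnings of 100:
sample = [(100, 125), (100, 112), (100, 102), (100, 85), (100, 99)]
# 40% overstate by more than 10%; 20% understate by more than 10%.
```

Netting the same sample would show a much smaller aggregate gap, which is exactly the cancellation problem the per-company counts are meant to avoid.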
Which Companies Have The Most Complex Financial Statements?
Tobias: Do you find there’s more distortion in companies that are just more complex? Conglomerates tend to be more complex; they’ve got more moving parts and more ability to hide things, but also more discretion over how things are recognized. Does the complexity of the business mean that it’s almost inevitable that there is more distortion in that business?
David: Some of them, for sure. Liberty Media, anytime that’s coming down the pipeline, the analysts are like, “Oh, this one’s a killer.” Some of these filings take us a couple of minutes and can go straight through; some of them take 40 minutes. Those are the bad ones. They tend to be some of these conglomerates, and they tend to be companies that have both traditional industrial-type operations, like a General Motors, and a financing operation. Deere is a good example of that, because you have to treat the debt and earnings for the financing business differently than you treat the debt for an industrial business, just to be consistent in the return on capital calculation. Those can take longer.
But in terms of monkeying with the numbers, the more complex the business, the harder it is to find the stuff, but the pattern in terms of sectors is not really clear. I think it really is more idiosyncratic to the management team, except for the financial sector. We do not see a lot of distortion there. My guess, Toby, is that it’s because the financials are complicated enough. They’re so different from the financials for a traditional company that there’s enough of a barrier that people just don’t think they need to do it as much, or they don’t care to do it. It’s interesting.
It was funny, when I was at Credit Suisse and I was developing this methodology for all sectors, the financial guys would tell me, “Hey, you’d better take a look at the securitization process, because we think there’s a lot of room for companies to monkey with the numbers in the securitization of receivables, especially credit card receivables.” I was surprised to find that the accounting for the securitization of receivables very accurately reflected the underlying economics. There wasn’t any monkeying with the numbers there. Again, I don’t know if it’s because it’s a complex enough activity that they’d have to worry about people trying to unwind it, or what. But the answer to your question is, across all sectors except for financials, we don’t see a lot of monkeying with the numbers in any sector in particular. It tends to be more idiosyncratic by company.
Tobias: Yeah, that’s very surprising, because that was exactly where I was going to go next, and I would have thought that financials was the obvious area for that. What about an insurer? It seems to me that’s just ripe for distortion.
David: We’re not finding that to be the case so much, Toby. It might be that the statutory reporting is so detailed that that’s where folks can get to the nitty-gritty. But it’s also likely due to the fact that, when there are unusual gains and losses, they’re related so much to the core of the insurance business, like unexpected losses, or unusual gains or losses related to changes in actuarial assumptions, that they’re going to be on the income statement most of the time. The same is true for changes to deferred acquisition costs and the amortization of those over time. Those tend to be reported, and they’re not something that companies can get away with burying.
So, one of the things the Harvard and MIT professors also looked into is, how many of the unusual gains and losses are reported in the MD&A, which means they were likely talked about on a conference call, versus the footnotes? What they found is that most of them are still in the footnotes. And sometimes, even what’s in the MD&A doesn’t reflect what was said on the call. But it’s pretty clear that the unusual gains and losses that are mentioned on earnings calls tend to be incorporated, whereas the stuff that’s only in the filing is not. I think when it comes to insurance companies, because the unrealized gains or losses are so core to the business and affect how investors are going to think about how managers are running the business, those tend to be announced or discussed on calls, and so, therefore, we’re not able to get a lot of signal from them.
Berkshire’s Unrealized Portfolio Gains & Losses
Tobias: Then there’s Berkshire Hathaway, which is an insurer and an investment company, has complex financial statements, and does seem to be quite distorted. Is that a function of the reporting for an insurance company that’s a conglomerate, where the unrealized gains on their holdings have to go through the income statement? What drives the distortion in something like Berkshire Hathaway?
David: Yeah, that’s a great question. Honestly, that’s a place where FASB really messed up, in allowing these unusual, I’m sorry, unrealized gains and losses to flow through operating earnings. They didn’t used to do that. They changed that rule. It makes no sense. It reflects how FASB gets a lot of pressure from corporate America to make changes. That’s why it’s important. They say, call your congressperson, right? Call your FASB representatives, or comment, so that they can hear from investors. Otherwise, the overwhelming amount of feedback comes from corporations, and of course, they’re conflicted.
Yeah, with Berkshire Hathaway, we actually throw them out. Whenever we’re doing our analysis and talking about the companies with the most distorted earnings, we just exclude Berkshire Hathaway, because so much of that distortion simply comes from the fact that they’re having to report unrealized gains and losses on their investment portfolio in their accounting earnings. It’s like, “Nope.” Anyone who’s following them knows you need to ignore it. So, we wouldn’t really be adding any value by pointing out, “Oh, Berkshire’s numbers are overstated by X or Y, or understated by X or Y, because of the unrealized gains or losses.” But it is a big item, it’s a big number, and it’s super impactful on a company like that.
The Ignorance Of Fundamentals
Tobias: Do you have any suggestions for a better way to deal with it? Is there a simple fix, where you just get rid of it? But then, how was it accounted for otherwise? What was the prior treatment of it? The change is only in the last five years or so, something like that.
David: I think it’s less than that.
Tobias: Even less than that.
David: I don’t remember. There are so many rules that come through, and we constantly track them. That’s another resource on our website that’s free, the FASB tracker. We’re looking at every rule, and we document how it’s going to affect reported earnings, or how it affects our models. To be honest, you may be right, Toby. It’s been in the last few years. The honest truth is that, when it comes to accounting, there just really aren’t any shortcuts to getting to the answers. I think that’s the problem in many ways with the markets these days: Wall Street wants you to believe it’s easy, that you can just rely on them. Now, investors, especially meme stock investors, have figured out, “Well, that’s not true.” They’ve just rejected all fundamental analysis altogether, because they say, “Well, we can’t trust Wall Street. We’re not going to do the work ourselves. We can’t trust Corporate America, because they’re not going to make it simple enough. So, guess what? We’re just going to go ahead and ignore fundamentals.”
I think that’s a big part of what’s driving markets these days: an intentional disrespect and ignorance of fundamentals, because, if they all join together, they can move stocks in a way that is completely disconnected from fundamentals.
So, they’re proving by their own efforts that fundamentals don’t matter, at least in the short term, because they don’t have the time or the expertise to understand them. Candidly, Toby, I think we talked about this on our last call. That’s part of my mission, to make high-quality, reliable fundamental research available to everybody, to ultimately make the markets more efficient, because I think it’s in our best interest. I think our society needs more discerning researchers to give people an objective, single version of the truth, at least about something that’s quantitative, like profits. Ultimately, it enriches us all, because less capital goes to waste.
But your question is an excellent one, because it speaks to the heart of the matter. There is no easy answer. In the absence of good information, I think people are reacting rather strongly and saying, we’re just going to be done with it all. We’re going to trade like apes and just push stocks around to make the Wall Street guys pay for not being more transparent, for not sharing the information, or for misinforming us.
Wall Street’s High-Priests Incentivized To Complicate
Tobias: There is a little conspiracy going on. Well, there’s an incentive for companies, too. Of course, there’s an incentive for companies to overstate what they’re earning, and then Wall Street, as the high priests interpreting those entrails, is also incentivized to make that as complicated as possible. It must be working, because the function of your signal is to identify things that are overvalued by virtue of the fact that they have all these distortions. So then, you want to be short those ones, and then you’re long the ones that are telling the truth, because presumably, they’re undervalued and they’re going to provide some excess returns in the future. It’s great that you can exploit it. It’s great that it’s exploitable. But it does also indicate that all of this chicanery works, doesn’t it?
David: Oh, yeah. It works. You’re absolutely right. Gosh, I love the high priests analogy. Yeah, it works. That’s part of why what we do works. If there weren’t distortion, we wouldn’t have this new value factor. But it works. It works great, and it has worked for a long time. It’s dumbfounding when companies like DiDi Global can go public at valuations that probably cost public US markets $40 billion in market cap in a short amount of time, when the economics of the business are terrible, when we know regulation that could kill the business, or significantly hurt it, is pending. The fact that Wall Street can get these things sold when they are objectively such poor investments, absolutely, it speaks to the power of the high priests, absolutely.
ESG – G Gets E & S Right
Tobias: Do you have any examples of the very best actors, the truth tellers?
David: In terms of research providers?
Tobias: Oh, in terms of just companies that are reporting and they’re doing a good job of faithfully reflecting the economic basis of the company.
David: I don’t. That’s a great question. I should have brought that to this call. It’s a fairly recent discovery that these truth stocks actually make money as well. We do have it built into our screener, where you can actually look these things up. I won’t bore you guys waiting for me to do that work and report back right now, but it is absolutely baked into our tools. That tends to be the higher-level subscriptions, institutional only. But yeah, it’s there, and we are definitely looking forward to creating an ESG product, because like you said before, I feel like the G in ESG is actually the most important. No matter how good you are at E and S, if you’re not actually able to build a sustainable enterprise, then whatever good you’re doing on E and S is not going to persist. So, G, in terms of governance and capital allocation, good capital stewardship, in my opinion, is most important. We’re excited that we can finally have an ESG product at New Constructs. It will focus on that, and it will talk about those companies. It’s a great question. I should have been prepared, and I apologize.
Tobias: No problem at all. I couldn’t agree more that the G is more important than the E and the S, because the G gets the E and the S right for the investors in that company. Whereas if the governance is weak, then whatever they’re doing in the E and S space is largely irrelevant, because it’s probably not enforced in the company and not appropriately considered for the nature of that business.
Tobias: How dependent on you is New Constructs or the algorithm? Does everything ultimately filter up to you, or is that something that a team can handle without you?
AI Cannot Become Lawnmower Man
David: The answer is yes. From the beginning, Toby, I knew that for New Constructs, for the idea to ever be successful, I had to be a really good communicator. It was all about institutionalizing into code and into machines the ideas that were in my head. I think ultimately, what we were doing was made way easier by the fact that there was one person making a lot of decisions about how to correctly categorize all these adjustments and items. Fortunately, we’ve had Harvard and MIT validate that those categorizations are best in class, best in market.
Because I think doing it by committee in one of these big organizations, which is what you’ve got to do if you’re at a Bloomberg, or S&P, or [unintelligible 00:41:13], Cap IQ, all these guys, is going to be impossible, because you don’t always have enough people who understand accounting and finance, or who understand both plus the technology.
I think it has been great for one person to be able to charge through with this. There’s no question that we would never have achieved the scale we’ve achieved today if not for our ability to, what I would call, institutionalize it. By that, I mean scrupulously documenting what we do and why in our internal wiki, and ultimately feeding that into our code base. The analogy I like a lot of times for what we’re doing is that we’re like Robocop. It’s a machine that’s been endowed with as much human intelligence as possible.
When we’re having trouble automatically parsing stuff, what I’m always reminding our team is that the problem is that this machine does not bring as much context to the problem as you do. You have to constantly step outside yourself and think, “Okay, how is it that I’m figuring out that this item in this area of the filing should be categorized in this way?”
Well, it’s because you know that there’s no way this item could be a balance sheet item, because it’s too small; or you know there’s no way it could be an income statement item, because it’s larger than total expenses; or because of what you’ve seen in the income statement, you know that this is not part of SG&A. There’s so much context that humans bring that they don’t always realize they’re using, and I think what makes us good at New Constructs is that we’re willing to, and good at, figuring out how to teach the machine that.
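The context rules David describes can be thought of as elimination heuristics. Here is a minimal, purely illustrative sketch in Python (not New Constructs’ actual code; the thresholds and category names are invented) showing how a human’s “it’s too small to be a balance sheet item, too large to be an expense line” reasoning might be taught to a machine:

```python
def classify_item(item_value, total_assets, total_expenses):
    """Return the categories a parsed filing item could plausibly
    belong to, after applying simple human-style context rules.

    This is a hypothetical illustration: real rules would be far
    more numerous and tuned per filing section.
    """
    candidates = {"balance_sheet", "income_statement", "sga"}

    # Rule 1: an item tiny relative to total assets is unlikely
    # to be a balance sheet line (threshold is made up).
    if abs(item_value) < 0.0001 * total_assets:
        candidates.discard("balance_sheet")

    # Rule 2: an item larger than total expenses cannot be an
    # income-statement expense line, and so cannot be SG&A either.
    if abs(item_value) > total_expenses:
        candidates.discard("income_statement")
        candidates.discard("sga")

    return candidates
```

Each rule narrows the candidate set rather than picking a single answer, which mirrors how an analyst eliminates impossible categorizations before deciding.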
For me to be successful, for New Constructs to be successful, it is all about teaching people and teaching a machine, or teaching the machine directly. At the beginning, it was me teaching the machine directly; more and more over time, it’s teaching people who teach the machine. That’s where our Robo-Analyst technology is special: we’ve had experts who’ve been willing to take the time and trouble, because it is painful to have to teach machines. You are truly dealing with something that’s less sentient than a child. That’s a big misconception around machine learning and AI. It’s like someone can type some algorithm and, poof, the machine is ultimately Lawnmower Man, wakes up one day, and all of a sudden is a genius. Most people won’t get the Lawnmower Man reference, but I guess– [crosstalk]
Tobias: It’s too old. [laughs]
David: Way too old.
Tobias: But maybe not.
David: But it’s extremely difficult. It’s hard. That’s why it’s taken us a long time to seed the machine with a little more context, a little more context, a little more context, over and over and over again. So, that’s a long-winded way of saying no, I don’t think there’s that much key-man risk here, because in order to be successful, I’ve had to communicate to people and ultimately teach people to communicate to the machine, and that’s what makes our technology, I think, so special.
Tobias: David, we’re coming up on time here. Is there anything that you would like to discuss? Is there anything we’ve missed?
David: I don’t think so, Toby. It’s been a lot of fun. Yeah, I think the big takeaway for your listeners and viewers is that we’ve got the first new value factor in 25 years, and we’ve shown that there is material idiosyncratic signal. We’re excited to be getting this to the marketplace, and we’re excited to actually be marketing a product that we know can tap into people’s desire to make money in the short term, as opposed to just being proud of using better analysis. [laughs]
Tobias: Are these papers publicly available? Can I link them up in show notes?
David: Yeah, we’ve got one that’s on our website from ExtractAlpha. The Harvard and MIT paper’s been publicly available. It’s free on SSRN; it’s not free on the Journal of Financial Economics website. Then we’ve got papers imminently coming out from Envisage and CloudQuant that will be looking at different kinds of portfolios. The Envisage paper in particular is going to be looking at truth stocks. The ExtractAlpha paper focuses on long-short; you’re going to see more long-only from CloudQuant, and you’re going to see truth stocks from Envisage. The idea here, just like with Harvard and MIT, is that it’s way better to have someone else say your data is good than it is for us to say it. So, we’re looking at independent third-party validation that will come with trading track records and trading documentation, so that investors can audit that work just as well.
Eventually, we’re going to be offering clients a Jupyter Notebook to replicate the strategies pretty quickly on their own as well.
Tobias: What’s a Jupyter Notebook?
David: A Jupyter Notebook is just a simple way to put together a bunch of Python code that is very user-friendly, if set up in the right way, so that the quants can go in and say, “Oh, so CloudQuant says if you do this with the data, you can generate this kind of return if you look at this universe with these constraints.” This Notebook will be set up so that people can plug and play those kinds of inputs, validate whether or not the paper’s results are accurate, and then tweak them on their own. The whole idea, Toby, is that we wanted to shorten the distance between this better data point and how to make money, so that investors could more easily monetize it.
The analogy I like is, we want to be in the business of selling holes, not drills. So, we’re trying to make it easy for people to make money with the data, because we know we’ve got proof from multiple angles that there’s something unique and of value there. It’s time for people to begin drilling for the oil, pulling it out, refining it into profitable trading strategies.
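The “plug and play” workflow David describes could look something like the following toy sketch. Everything here is hypothetical for illustration: the field names (`distortion`, `ret`), the sample data, and the thresholds are invented, and a real notebook would pull actual New Constructs data rather than a hard-coded list:

```python
# Toy illustration of a parameterized long-short cell a quant might
# tweak in a notebook. Data and thresholds are entirely made up.
stocks = [
    {"ticker": "AAA", "distortion": -0.08, "ret": 0.12},  # understated earnings ("truth stock")
    {"ticker": "BBB", "distortion": 0.01,  "ret": 0.03},  # roughly faithful reporting
    {"ticker": "CCC", "distortion": 0.09,  "ret": -0.05}, # overstated earnings
]

# --- tweakable inputs, the "plug and play" part ---
long_threshold = -0.05   # go long names with understated earnings
short_threshold = 0.05   # short heavily distorted names

longs = [s for s in stocks if s["distortion"] <= long_threshold]
shorts = [s for s in stocks if s["distortion"] >= short_threshold]

long_ret = sum(s["ret"] for s in longs) / len(longs)
short_ret = sum(s["ret"] for s in shorts) / len(shorts)
spread = long_ret - short_ret  # long-short return of the toy portfolio
print(f"long-short spread: {spread:.2%}")
```

Changing the two threshold inputs and re-running the cell is exactly the kind of “tweak it on their own” loop the notebook is meant to enable.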
Tobias: David, if folks want to follow along with what you’re doing or get in contact with you, what’s the best way to do that?
Tobias: David Trainer, New Constructs, thank you for your time.
David: Thank you, Toby. Good to be with you.
For all the latest news and podcasts, join our free newsletter here.
Don’t forget to check out our FREE Large Cap 1000 – Stock Screener, here at The Acquirer’s Multiple: