I received a really interesting e-mail from one of our advisors the other day. He asked me to identify instances when I had been completely wrong about something, why I had made the mistakes, and what I had learned from them. He was looking for ways to better himself when it came to thinking about the future—a goal I totally endorse.
Anyone who has been reading this blog for a while knows that I spend a great deal of time figuring out how to be “less wrong” when thinking about the future. That said, I had not done specifically what was being asked here: look at a couple of my biggest errors and try to specifically identify what I had gotten wrong, why that was, and what I had done about it. Therefore, I thought it was a good idea to take my response and expand it a bit here.
The big ones
In terms of bad calls, the two biggest I've made came after the financial crisis, during the slow but durable recovery that started in 2009. I called for a recession in 2010 and for the collapse of China at about the same time, with consequent damage to the financial markets. Both calls were completely wrong, and I eventually changed my mind. But, at the time, I was quite certain of my conclusions. I've thought a lot about why I was so convinced that those were the right (indeed, inevitable) calls, and I think the failures in my thinking were threefold.
1) I had too limited a picture of what the forces of order (for lack of a better term) could do. I thought that the Fed was out of options and that the Chinese government didn’t have the tools to keep things going. Of course, I was wrong. It was true that, at the time, they lacked those options and tools, respectively. But rather than let things collapse, they kept making new ones up as they went. The first failure in my thinking was underestimating the power of the system to keep all the balls in the air. While this might just be postponing the inevitable, that is, in fact, what life is all about. Events have a momentum, and institutions a strength, far greater than they may seem. I now place much more reliance on the ability of institutions to act as stabilizers, which has been very beneficial in my analysis of, for example, Europe and Japan.
2) I did not have a sound understanding, based on both theory and empirical data, of the real signals for the events I was looking for. I did not understand in enough detail what had to happen to produce, say, a recession or a bear market. Like many people, I was therefore reacting to headlines rather than to deep data, with suboptimal results. Indeed, it was this failure that led to much of the research I have done since then: developing—and testing—theoretically and empirically sound models that give a much more grounded understanding of what might happen and why. Before, I was flying visually. Now, I fly on instruments.
3) I was, quite simply, overconfident. I thought I had a much more complete understanding than I did, failed to account for the known unknowns, and didn’t even consider the unknown unknowns. In any recursive, adaptive system like the economy or the markets, it is simply folly to think you can predict with any certainty what will happen. Experience is the only school in which a fool will learn, and I am trying to learn from mine. Even with all of the work I have done since, I am now markedly less confident in my conclusions—and much more willing, even eager, to look for evidence that I might be wrong.
Here’s what matters
Failures happen. What really matters is what you learn from them. Generally, I allow myself to make any mistake without criticism—but only once. What I have done since making these mistakes is to expand my understanding of politics and policy to address the first failure, develop a deeper theoretical understanding of the economy and markets to address the second, and develop a better sense of the limitations of my own thinking (and how to work around those limitations) to address the third. Humility is on the list as well, but despite real efforts, I still have a lot more work to do on that.
Some books I have found useful in the process include Thinking, Fast and Slow, by Daniel Kahneman (which I’ve previously reviewed); How Not to Be Wrong, by Jordan Ellenberg (which I’ll review tomorrow); and The Checklist Manifesto, by Atul Gawande (which I’ve also reviewed). I plan to start reading Poor Charlie's Almanack, edited by Peter Kaufman, quite soon.
The never-ending quest
This is, indeed, a never-ending quest. Looking back five years from now, I will certainly find that I have made more major errors. What I hope to see, however, is that I was more aware of the risks in those bad calls, that I was quicker to recognize my errors and react to them, and that those errors came mostly from the unknown unknowns, rather than from something of which I should have been aware.
The purpose of research is not to eliminate uncertainty, which is impossible, but to manage and minimize it. By understanding what is known, by being aware of and adjusting to the known unknowns, and by reacting as quickly and thoughtfully as possible to unknown unknowns when they pop up, we can do the best possible job.
At least until the next error, when we will learn to do it even better.