As I mentioned in part 1 of The State of the Market, today I’d like to take a side trip into a different area: artificial intelligence (AI). One of the key themes we see pretty much everywhere these days is that computers, or robots, are taking over the world. This “age of automation,” as I will call it, is often seen as the rise of the machines (think the Terminator movies) and the associated demise of human jobs.
Taking a deeper look, though, that trend doesn’t seem so clear. In fact, the reality (at least so far) looks a lot more like Waze, which has a much cuddlier logo.
Let’s go for a drive
Driving is a metaphor I use a lot, as it captures many aspects of modern life. Here, I think it works particularly well. While the headlines often talk about self-driving cars, the reality is that Waze and similar products are the actual face of AI today. I use Waze every day, and it's not because I don’t know how to get to work on my own. Instead, allowing the program to deal with some of the more tedious aspects of driving gives me the opportunity to spend time thinking about higher-value things. I like to think of this as cognitive outsourcing.
When Waze works, which is most of the time, it’s great. When it doesn’t? Not so much. For example, when a road is closed, Waze can turn from a help into a hindrance, trying to direct me back onto the closed road while I am trying to find a way around.
The nature of AI
The reason it does this is interesting, and it relates back to the nature of AI. At its core, AI is a collection of if-then rules. Its success depends on a consistent set of rules, a consistent set of relationships, and a limited universe. Much of the early work on AI focused on chess for exactly these reasons: chess is a defined universe with defined rules and relationships.
Looking at AI this way gives us insight into how and why Waze works—and fails. Most of the time, Waze is working with a set of known relationships (the roads) and known goals (a start and end point), and it can operate within a universe defined by the maps and the user's goals. The map is the territory here, and it works well. When the territory becomes different from the map, however, the system breaks down.
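To make the idea concrete, here is a minimal sketch of a rule-based router in the spirit of the discussion above. The map, road names, and routing rule are all hypothetical, invented for illustration; this is not Waze's actual algorithm. The point is simply that the rules operate on the map the system was given, so when reality diverges from that map, the rules keep firing on stale data unless someone tells the system otherwise.

```python
# A toy rule-based router (hypothetical example, not Waze's real logic).
# The map: each location maps to a list of (neighbor, travel_minutes).
MAP = {
    "home":    [("main_st", 5), ("back_rd", 12)],
    "main_st": [("office", 5)],
    "back_rd": [("office", 9)],
    "office":  [],
}

def best_route(start, goal, closed=frozenset()):
    """Greedy if-then routing over the known map.

    `closed` models real-world conditions (e.g., a road closure)
    that the map itself does not know about.
    """
    route, node, minutes = [start], start, 0
    while node != goal:
        # Rule: among the neighbors still open, take the fastest one.
        options = [(t, n) for n, t in MAP[node] if n not in closed]
        if not options:
            return None  # the territory diverged from the map; no rule applies
        t, node = min(options)
        minutes += t
        route.append(node)
    return route, minutes

# When the map matches the territory, the rules work well:
print(best_route("home", "office"))
# When a road is closed in reality, the system only adapts if told:
print(best_route("home", "office", closed={"main_st"}))
```

Run without the `closed` argument, the router happily sends you down `main_st`, closure or not, because nothing in its universe says the road is shut. That is the Waze failure mode described above: the system is not wrong by its own rules; its map simply no longer matches the territory.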
Making the jump from Waze to self-driving cars is therefore like making the jump from the map to the road. Unlike the map, which is constant, weather conditions change hourly, roads may change daily, and human drivers do crazy things every second. This is the opposite of a defined universe with constant relationships and, as such, a much harder problem for AI. It is one that, even as progress is made, remains far from a solution.
How does this relate to investors?
As investors, we must decide whether the investing process is more like the map, in which case Waze can offer us a good solution, or like driving. In my experience, it is a bit of both. Just as Waze can offer real assistance during ordinary times but needs help in difficult ones, different investing tools can be very useful—but still require human oversight.
This extends beyond AI, of course, to any rule-based system, of which there are many in the investment world. The most prevalent, at the moment, is passive investing, which is definitely a rule-based decision process and therefore falls under the same kind of AI discussion we have been having. Tomorrow, we will talk about how today's discussion applies to passive investing and what that means for markets.