The Way We Measure Risk Is Wrong

Written by Brad McMillan, CFA®, CFP® | Mar 11, 2015 6:10:00 PM

Discussing returns over the next 10 years the other day, I closed with the thought that averages aren’t the best way to express how portfolios may perform. We will certainly talk about that today, but it’s emblematic of a much bigger problem: how we measure risk.

The trouble with averages

Consider Russian roulette. With five out of six chambers unloaded, on average, this is a winning bet. But the worst-case analysis tells a very different story. I don’t care what the average says when I consider the downside.
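
To put rough numbers on it, here is a minimal sketch in Python (the dollar figures are purely hypothetical, not anything from the post): the average outcome can look like a winning bet even when the worst case is total ruin.

```python
# A minimal sketch of why the average misleads when the downside is ruinous.
# The dollar amounts are hypothetical illustrations.

p_survive = 5 / 6     # five of six chambers are empty
p_lose = 1 / 6        # one chamber is not

prize = 1_000_000     # hypothetical payoff for surviving the bet
ruin = -1_000_000     # losing everything you have, to say nothing of the rest

expected_value = p_survive * prize + p_lose * ruin
worst_case = ruin

print(f"Expected value: {expected_value:+,.0f}")  # about +667,000: a "winning bet"
print(f"Worst case:     {worst_case:+,.0f}")      # the number that actually matters
```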

Similarly, when we consider market returns, we tend to focus on average returns, with a presumption that the downside will take care of itself. Indeed, we explicitly model this in our portfolio design, assuming that stocks will outperform in the long run.

Risk, we are told, is the price of higher returns. When we speak of risk in this context, though, we’re talking about variability, not capital loss. We’re also implicitly considering a long-term perspective, without defining what “long term” actually means or how it might conflict with other factors.

Focusing on the individual investor

Most investment math works like this: it applies simplifying assumptions to the portfolio or the market. The statistics we see are inward-looking and self-referential. They apply to the investments, not to the investor.

There are good reasons for this. First, it’s easier. Second, it provides a consistent basis for comparison. Third, these numbers can indeed be useful in certain contexts.

I would argue, though, that in the most important context—that of the individual investor—the numbers we often use don’t capture the most important aspects of risk.

For a retiree, for example, withdrawing 4 percent per year and facing 3 percent inflation, downside risks are paramount. With a nut of roughly 7 percent to make every year (4 percent for spending plus 3 percent to keep pace with inflation), large drawdowns at any point could cripple his or her financial future. Drawdowns matter.
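
As a back-of-the-envelope illustration, here is a hedged Python sketch (the return paths and dollar amounts are hypothetical, not market data): two portfolios earn the same average return, but the one that takes its big drawdown early, while withdrawals are coming out, ends up in a far worse place.

```python
# A sketch of sequence-of-returns risk for the retiree described above,
# simplified to a flat withdrawal of 7 percent of the starting balance each
# year. Both return paths below have the same average annual return.

def ending_balance(returns, start=1_000_000, withdrawal_rate=0.07):
    """Apply each year's return, then take the fixed withdrawal."""
    balance = start
    withdrawal = start * withdrawal_rate
    for r in returns:
        balance = balance * (1 + r) - withdrawal
        if balance <= 0:          # the portfolio is exhausted
            return 0.0
    return balance

# Same returns, different order: a big drawdown early vs. late.
early_drawdown = [-0.30, 0.12, 0.12, 0.12, 0.12, 0.12, 0.12, 0.12, 0.12, 0.12]
late_drawdown  = [0.12, 0.12, 0.12, 0.12, 0.12, 0.12, 0.12, 0.12, 0.12, -0.30]

print(f"Drawdown first: {ending_balance(early_drawdown):,.0f}")  # roughly 713,000
print(f"Drawdown last:  {ending_balance(late_drawdown):,.0f}")   # roughly 1,147,000
```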

Similarly, looking at average returns over historical time periods is useful, but how about also looking at the minimum returns over rolling periods within that span? For example, if over the past 30 years an asset class never had a 10-year period returning less than 4 percent per year, wouldn’t that be more interesting than a 30-year span with a similar average return but with 10-year periods where you actually lost money?
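
A simple sketch of that comparison, using made-up annual returns rather than actual asset-class history, might look like this: compute the annualized return of every rolling 10-year window and report the worst one alongside the 30-year figure.

```python
# Hypothetical 30 years of annual returns (illustration only, not real data).
annual_returns = [0.08, 0.12, -0.05, 0.15, 0.07, -0.02, 0.10, 0.09, 0.11, -0.15,
                  0.20, 0.06, 0.13, -0.08, 0.09, 0.14, 0.05, -0.03, 0.18, 0.07,
                  0.10, -0.20, 0.16, 0.12, 0.04, 0.09, -0.01, 0.15, 0.08, 0.11]

def annualized(returns):
    """Geometric average annual return over the window."""
    growth = 1.0
    for r in returns:
        growth *= 1 + r
    return growth ** (1 / len(returns)) - 1

window = 10
rolling = [annualized(annual_returns[i:i + window])
           for i in range(len(annual_returns) - window + 1)]

print(f"30-year annualized return: {annualized(annual_returns):.2%}")
print(f"Worst 10-year annualized:  {min(rolling):.2%}")
print(f"Best 10-year annualized:   {max(rolling):.2%}")
```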

How do you define risk?

The standard response to this problem is a risk tolerance questionnaire, which investors use to try to identify how much risk they can take. But the title itself begs the question. How can you measure your tolerance for risk when you haven’t really defined what risk is?

The real question is this: How do we better measure and describe risk, and what does that mean for the investment process and results? We’ll talk about that tomorrow.