Introduction to Portfolio Theory & Risk Parity
In the following text we will introduce some of the common terminology and concepts of portfolio management. Enjoy.
Intuitively, you might prepare to construct your portfolio by collecting data, verifying observed patterns, questioning their repeatability and then testing your predictions. And there are a lot of asset classes to choose from. At Vanilla Equity we focus on an extremely narrow investment universe of US equities. But you can include so much more in a portfolio, such as gold, real estate, bonds, funds, Ethereum, antiques, coins, farm land, cash etc. What are the criteria for finding the right balance of asset classes?
First we have to ask ourselves what our objective is. Before we tackle how to construct the portfolio, we should look at the “why”. During our life span our spending and income vary, and we tend to put money aside to offset the difference. For many it’s a question of reaching financial freedom at a certain age. In that case you can have a clear numerical goal for your portfolio and for the cash flow it should generate.
The volatility of an investment relative to the overall market is measured by Beta. A young individual is more likely looking to maximize returns (with a high Beta), while an older individual’s goal might be to preserve wealth and protect principal (with a low Beta). Regardless of age, one fundamental risk-mitigating action is to apply diversification at multiple levels. You can diversify by outsourcing the management of part of your capital to various decision makers (like we do at Vanilla Equity), you can use multiple asset classes, you can use multiple entry points etc. If you start a fund with your friend, you can decide to apply separate strategies, and together you have a well diversified strategy. The same goes for your spouse and your family savings.
Harry Markowitz‘s Modern Portfolio Theory, now over 70 years old, is all about the variance, or standard deviation, of returns. For simplicity, we can start by defining risk as the standard deviation. A standard deviation (or σ) is a measure of how dispersed the data is in relation to the mean. Low standard deviation means data are clustered around the mean; high standard deviation indicates data are more spread out. For example: cash has no standard deviation, a lottery ticket has an enormous standard deviation, a double-or-nothing coin flip has a standard deviation of 100%, and the standard deviation of an ETF (Exchange-Traded Fund) is lower than that of a single stock. Our measure of success would then be the return versus the standard deviation.
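To make the ETF-versus-single-stock comparison concrete, here is a minimal sketch using Python’s standard library. The return series are made-up illustrative numbers, not market data; the point is only that the more tightly clustered series has the lower standard deviation:

```python
import statistics

# Hypothetical daily returns (in %) for a single stock and a broad ETF.
# The ETF's returns cluster more tightly around the mean, so its
# standard deviation -- our stand-in for risk -- comes out lower.
stock_returns = [2.5, -3.1, 1.8, -2.4, 3.0, -1.9, 2.2, -2.8]
etf_returns   = [0.6, -0.4, 0.5, -0.3, 0.7, -0.5, 0.4, -0.2]

stock_sigma = statistics.stdev(stock_returns)
etf_sigma = statistics.stdev(etf_returns)

print(f"stock sigma: {stock_sigma:.2f}%")
print(f"ETF sigma:   {etf_sigma:.2f}%")
```

The same calculation applied to real price history is the usual first step in estimating the risk of any instrument.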
In modern portfolio theory a so-called rational investor will try to maximize the return while minimizing the standard deviation. At this point it becomes obvious that we should eliminate all the extreme asset classes. We also have to find the correct balance of assets. This is where the efficient frontier comes in: constructing the optimal portfolio that offers the highest expected return for a defined level of risk, or the lowest risk for a given level of expected return. When you combine assets that are less than perfectly correlated, you can achieve the same return with a lower standard deviation.
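The diversification benefit follows directly from the two-asset portfolio variance formula, σ_p² = w₁²σ₁² + w₂²σ₂² + 2w₁w₂ρσ₁σ₂. A minimal sketch (the weights, volatilities and correlations are assumed for illustration):

```python
import math

def portfolio_sigma(w1, sigma1, sigma2, rho):
    """Standard deviation of a two-asset portfolio with weight w1 in asset 1."""
    w2 = 1.0 - w1
    var = (w1 * sigma1) ** 2 + (w2 * sigma2) ** 2 \
        + 2 * w1 * w2 * rho * sigma1 * sigma2
    return math.sqrt(var)

# Two assets with identical risk (15% vol) and identical expected return:
# a 50/50 mix keeps the return, but the volatility falls as correlation drops.
for rho in (1.0, 0.5, 0.0):
    print(f"rho={rho:+.1f} -> portfolio sigma={portfolio_sigma(0.5, 0.15, 0.15, rho):.4f}")
```

Only at perfect correlation (ρ = 1) does the mix carry the full 15% volatility; any ρ below 1 gives the same expected return at lower risk, which is the whole point of the efficient frontier.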
Asset managers used to benchmark simply against a bonds-versus-equity split, such as a 60/40 combination (60% in equities, 40% in bonds), because the two historically offset each other: when stocks went up, bonds went down, and vice versa. Today the correlation between the two asset classes is questionable. But stock volatility is typically higher than bond volatility, and the expected return is also higher. So a 60/40 combination likely falls on the higher-return, higher-standard-deviation part of the efficient frontier.
In this context, risk parity really means equal risk weighting rather than equal market exposure: measured by risk contribution, a 60/40 portfolio is dominated by its equity leg. The value is in the diversification.
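To see why 60/40 is not risk-balanced, here is a minimal sketch with assumed (illustrative, not market) volatilities and correlation. It decomposes total portfolio variance into each asset’s risk contribution:

```python
# Illustrative annualized figures (assumptions, not market estimates):
# equities at 16% vol, bonds at 5% vol, correlation 0.2.
sigma_eq, sigma_bd, rho = 0.16, 0.05, 0.2
w_eq, w_bd = 0.60, 0.40

cov = rho * sigma_eq * sigma_bd
port_var = (w_eq * sigma_eq) ** 2 + (w_bd * sigma_bd) ** 2 + 2 * w_eq * w_bd * cov

# Each asset's share of total portfolio variance (its risk contribution).
rc_eq = (w_eq ** 2 * sigma_eq ** 2 + w_eq * w_bd * cov) / port_var
rc_bd = (w_bd ** 2 * sigma_bd ** 2 + w_eq * w_bd * cov) / port_var

print(f"equity risk share: {rc_eq:.0%}, bond risk share: {rc_bd:.0%}")
```

With these assumed numbers, equities hold 60% of the capital but contribute over 90% of the risk; a risk parity allocation would instead size the positions so the two contributions are equal.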
But what happens if we use leverage? Let’s say we borrow money to increase our position in bonds. Now we actually have a higher expected return for the same standard deviation. So maybe our way of measuring risk is flawed?
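The leverage argument can be sketched with assumed numbers: borrow at the risk-free rate and scale a low-volatility bond position up until it matches equity volatility. If bonds earn more excess return per unit of risk, the levered position beats equities at the same standard deviation:

```python
# Assumed illustrative figures: risk-free 2%; bonds return 4% at 5% vol;
# stocks return 7% at 16% vol. Bonds here earn more per unit of risk.
rf, r_bond, sigma_bond = 0.02, 0.04, 0.05
r_stock, sigma_stock = 0.07, 0.16

# Lever the bond position until its volatility equals stock volatility.
L = sigma_stock / sigma_bond            # leverage factor (3.2x here)
r_levered = rf + L * (r_bond - rf)      # borrow at rf, invest the proceeds in bonds

print(f"levered bonds: return {r_levered:.1%} at vol {L * sigma_bond:.1%}")
print(f"stocks:        return {r_stock:.1%} at vol {sigma_stock:.1%}")
```

In this sketch the levered bond position returns more than stocks at identical volatility, which is exactly the observation behind levered risk-parity portfolios. In practice, borrowing costs, margin constraints, and the levered position’s sensitivity to correlation breakdowns all eat into this advantage.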
What about the commonly used Sharpe ratio? The Sharpe ratio is the average return in excess of the risk-free rate, divided by the standard deviation of returns: a measure of the average excess return earned per unit of standard deviation. Its value should generally increase with increasing diversification, compared to similar portfolios with less diversification. The keyword is there again: diversification.
(Side note: There are two common ways to manipulate the Sharpe ratio: lengthening the measurement interval, which tends to lower the measured volatility, or cherry-picking a data period that yields a flattering Sharpe ratio rather than choosing a neutral period.)
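The definition above translates directly into code. A minimal sketch with hypothetical monthly return series (both portfolios are given the same mean return so only the volatility differs):

```python
import statistics

def sharpe_ratio(returns, rf_per_period):
    """Average excess return per unit of standard deviation of excess returns."""
    excess = [r - rf_per_period for r in returns]
    return statistics.mean(excess) / statistics.stdev(excess)

# Hypothetical monthly returns for two portfolios with the same mean return;
# the more diversified one is less volatile, so its Sharpe ratio is higher.
concentrated = [0.08, -0.05, 0.09, -0.04, 0.07]
diversified  = [0.04,  0.01, 0.05,  0.00, 0.05]

print(f"concentrated: {sharpe_ratio(concentrated, 0.002):.2f}")
print(f"diversified:  {sharpe_ratio(diversified, 0.002):.2f}")
```

Same average return, different risk: the diversified series earns a materially higher Sharpe ratio, which is the diversification effect restated as a ratio.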
Looking at single trades, we have something referred to as the Kelly criterion – a formula that determines the optimal theoretical size for a bet. It is valid when the expected returns are known.
The common single-period analysis is an oversimplification of an investment. When you are investing over multiple periods, the Kelly criterion tells you what fraction of your bankroll to bet at each step.
Additionally, each trade has two variables that should be considered: the potential win versus the potential loss, and the probability of each of those outcomes.
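Those two variables are exactly the inputs to the Kelly formula f* = p − q/b, where p is the win probability, q = 1 − p, and b is the fraction of the stake gained on a win. A minimal sketch (the 55% win rate is a made-up example, and `kelly_fraction` is an illustrative helper, not a library function):

```python
def kelly_fraction(p_win, win_mult, loss_mult=1.0):
    """Kelly-optimal fraction of bankroll to bet.

    p_win:     probability of winning
    win_mult:  fraction of the stake gained on a win (b in f* = p - q/b)
    loss_mult: fraction of the stake lost on a loss (1.0 = lose the whole stake)
    """
    q = 1.0 - p_win
    return p_win / loss_mult - q / win_mult

# A trade that wins 55% of the time, gaining or losing the full stake:
f = kelly_fraction(0.55, 1.0)
print(f"bet {f:.0%} of bankroll")   # 10% for a 55/45 even-odds bet
```

Note the caveat from above: the formula assumes the probabilities and payoffs are known exactly. With estimated inputs, practitioners often bet a fraction of Kelly ("half-Kelly") to soften the penalty for overestimating the edge.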
All of the above can be put in mathematical terms. We could simply program robots to be portfolio managers. But that’s not the case, right? And the reason for it is simple.
When everybody starts to implement the same “optimal” strategy as individuals, the system as a whole is no longer optimized. It’s actually in danger. All portfolio managers are seeking alpha (alpha is the excess return a manager generates on top of the benchmark) and trying to figure out the same thing. They are all connected.
When I was at university, our lecturer illustrated this effect by asking us students to guess a number between zero and one hundred that would also be the average of all our answers.
In the field of finance, and particularly in quantitative finance, problems are not as mechanical and predictable as in physics.
When you participate in the market, you are changing the market. In game theory, you have a pretty well-defined set of rules, and a computer will beat a human. But in the market you have countless participants and no clearly defined rules.
So the best we can do is extrapolate from historical data. We are trying to forecast volatility, forecast return, forecast correlation, all based on the past. It’s like driving by looking in the rear-view mirror.
Portfolio managers are not always as performance-driven as people might assume. A career portfolio manager is likely to play it safe and do just enough to beat a benchmark (generate alpha): an index, the competitors. The optimal strategy for an individual portfolio manager is really to do the same thing as everyone else, and if they get ahead even a bit, to boost risk mitigation so they end the year on a good note and don’t lose their job.
When you participate in Vanilla Equity, you will notice this feature as well. Risk management becomes all about consistency and “quitting while you are ahead” (reaching a quarterly quota rather than risking it all on a lottery ticket).