Algorithmic thinking is a way of getting to a solution through a formulated series of steps called an algorithm. An algorithm is a set of instructions or rules that, if followed, leads us to the solution of a problem. The best way to develop algorithmic thinking is to practice it. Try to find solutions to everyday things: the best way to get to work in the morning, the best way to cook, the best way to shop, and so on. Almost everything has a goal and a method with which to achieve it. For this reason, almost everything can be viewed as an algorithm. Even when solutions are already known, we can still work out the solution ourselves and compare it with the known one.
The power of algorithmic thinking is that it allows us to both automate and optimize our decision processes. By formulating the process of solving a problem, we better understand the assumptions and weaknesses that are built into that process. This understanding encourages us not to see the world as black or white. We begin to realize that any solution is only as good as the model and inputs it was built upon. Most importantly, this way of thinking reduces the chance of us being affected by our cognitive biases.
To further explore this concept, this post uses the ideas presented in the recent Rationally Speaking podcast on “Algorithms to Live By”. We look at the algorithms presented and how they are useful in our everyday lives.
37 percent rule algorithm
You have a sequence of options. You have the chance to make an offer. If you don’t make an offer, you lose it. Really what you have to be doing is to be building up a sense of … how good are your options here? What are the options? At the same time as dealing with the cost of losing some of those options as you’re gathering that information.
The right thing to do in that situation is you take 37 percent of the pool of options, or 37 percent of the time you are going to spend looking. If you’re, say, looking for an apartment for 30 days, that’s 11 days, you spend that time just gathering information. You leave your checkbook at home. You’re just getting calibrated. Then after that, you make an offer on the first place you see that’s better than any place you’ve seen so far.
If you’re in that precise situation, that’s exactly the right thing to do.
To understand the intuition behind the 37 percent rule, we need to understand what it is trying to solve. In situations such as deciding which house to buy, we face a trade-off between gathering more information and being able to act on that information. For example, we could look at every house in our city, but by the time we reach the last house we will have missed the opportunity to bid on the first. In that case, our extra information wouldn't be actionable. At the other extreme, if we simply bid on the first house we see, what are the chances that we bid on the best house? Well, if there are only 5 houses for sale in our city, the odds are 20%.
The idea behind the 37 percent rule is that the optimal strategy lies somewhere between these two extremes. We know that we can get a 20% chance simply by choosing a house at random, but now we want a strategy that does better than random.
Say we try a new strategy, where we look at the first place but don't make an offer on it. We then look at the second place, and if it is better than the first we make an offer on it; otherwise we continue on to the third and repeat the process. With this strategy, the odds of choosing the best of our 5 houses rise to about 42%. More generally, for n houses the optimal rule is to skip the first n/e of them (about 37%) and then take the first one that beats everything seen so far; the probability of selecting the best house then converges to 1/e, or about 37%.
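The look-then-leap rule is easy to check with a quick simulation. The sketch below (the function name and parameters are my own, not from the book or podcast) skips the first 37 percent of a shuffled pool of candidates, then takes the first candidate better than everything seen so far, and counts how often that picks the overall best:

```python
import random

def best_choice_rate(n, cutoff_frac, trials=20000, seed=0):
    """Simulate the look-then-leap rule: skip the first
    cutoff_frac * n candidates, then take the first candidate
    better than everything seen so far. Returns the fraction of
    trials in which we ended up with the single best candidate."""
    rng = random.Random(seed)
    cutoff = int(n * cutoff_frac)
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))        # 0 is worst, n - 1 is best
        rng.shuffle(ranks)
        best_seen = max(ranks[:cutoff], default=-1)
        pick = ranks[-1]              # forced to take the last option
        for r in ranks[cutoff:]:
            if r > best_seen:
                pick = r
                break
        wins += (pick == n - 1)
    return wins / trials

# 30-day apartment hunt: spend the first ~11 days just looking.
print(best_choice_rate(n=30, cutoff_frac=0.37))
```

With a 37 percent cutoff the success rate hovers around 0.37, while skipping only the first candidate does noticeably worse, which matches the reasoning above.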
The 37 percent rule is a solution to a subset of optimal stopping problems. Of course, this solution is only as good as the assumptions built into it. For example, what if we can go back to a previously rejected house and make a bid? What if we use Zillow rankings to decide? What if there is a large psychological cost to not getting the situation sorted out quickly? All of this would change the calculus of our decision algorithm. This brings up another important insight: just being aware of the parameters involved in an algorithm can help us better understand and solve problems.
The upper confidence bound algorithm
In Computer Science, the explore/exploit trade‐off is the idea of “How do you balance between spending your time and energy getting information, and using your time and energy, leveraging the information that you have to get some good outcome or some payout?”
In concrete human terms, it’s like if you first move to a city, you should spend the first month or more just relentlessly trying new things. The first restaurant you go to when you move to Berkeley is literally guaranteed to be the greatest restaurant you have ever been to in Berkeley. The second place you try has a 50/50 chance of being the greatest place you’ve ever been to in Berkeley. The chance of making a great new discovery is greatest at the beginning. Moreover, the value of making a discovery is highest when you have the most time left to enjoy it.
The key insight here is that if you try something new and it fails, it only fails once, but if you try something new and it succeeds, you can keep going back to it. The upper confidence bound algorithm is one solution for these explore/exploit problems. It says that instead of evaluating something based on our expected value, we should be evaluating it based on the upper bound of our expected value. This is an optimistic approach that favors things you have very little information about.
For example, a person might not want to try something new on a menu. This person's expected value of trying something new is that they won't like it. But they are ignoring the asymmetric returns they will earn if they do end up liking it. The upper confidence bound is that they will discover a new favorite food that will bring them joy for the rest of their life.
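One standard solution of this kind is the UCB1 algorithm, which scores each option by its estimated value plus an exploration bonus that shrinks the more often the option has been tried. Here is a minimal sketch (the restaurant payout probabilities and all names are invented for illustration):

```python
import math
import random

def ucb1_choose(counts, means, t):
    """Pick the option with the highest upper confidence bound:
    estimated mean reward plus a bonus that is large for
    rarely tried options (and infinite for untried ones)."""
    best, best_score = 0, float("-inf")
    for i, (n, mu) in enumerate(zip(counts, means)):
        if n == 0:
            return i  # always try every option at least once
        score = mu + math.sqrt(2 * math.log(t) / n)
        if score > best_score:
            best, best_score = i, score
    return best

def simulate(probs, rounds=5000, seed=1):
    """Toy 'restaurants': each visit to restaurant i is a good
    meal with hidden probability probs[i]."""
    rng = random.Random(seed)
    counts = [0] * len(probs)
    means = [0.0] * len(probs)
    total = 0.0
    for t in range(1, rounds + 1):
        i = ucb1_choose(counts, means, t)
        reward = 1.0 if rng.random() < probs[i] else 0.0
        counts[i] += 1
        means[i] += (reward - means[i]) / counts[i]  # running average
        total += reward
    return counts, total

counts, total = simulate([0.2, 0.5, 0.8])
print(counts)  # the best restaurant ends up visited far more often
```

Early on the algorithm samples everything; as evidence accumulates, the optimism bonus fades and it settles on the best option, which is exactly the explore-then-exploit arc described above.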
Then you get into what’s called Weighted Shortest Processing Time. Basically the way that that works is you take the ratio of how important something is to how long it’s going to take you, and then you prioritize jobs by that ratio. I think that gives you a reasonably good heuristic, which is something like “You should only take twice as long to do something if it’s twice as important.”
When deciding how to organize our time, we first need to decide on our goal. That goal might be to avoid missing due dates, or to minimize the length of our to-do list. Depending on the goal, the optimal algorithm changes.
The key takeaway from this section is that specifying our goal is an important step. Without a thorough understanding of the goal, we can't fine-tune the parameters of our algorithm.
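The Weighted Shortest Processing Time rule from the quote above reduces to a one-line sort. In this sketch the job names, importance weights, and durations are invented for illustration:

```python
def wspt_order(jobs):
    """Weighted Shortest Processing Time: sort jobs by their
    importance-to-duration ratio, highest ratio first. This
    minimizes the sum of importance-weighted completion times."""
    return sorted(jobs, key=lambda j: j["importance"] / j["hours"],
                  reverse=True)

jobs = [
    {"name": "taxes",  "importance": 8, "hours": 4},  # ratio 2.0
    {"name": "email",  "importance": 1, "hours": 1},  # ratio 1.0
    {"name": "report", "importance": 9, "hours": 3},  # ratio 3.0
]

print([j["name"] for j in wspt_order(jobs)])
# -> ['report', 'taxes', 'email']
```

Note how the rule encodes the heuristic from the quote: "taxes" is eight times as important as "email" but only four times as long, so it still jumps the queue.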
If you’re trying to make a prediction, say you’re trying to predict the stock market into the future or something like that. Then the more complex you make your model, the less accurate it turns out being. There’s some level of complexity that you need in order to capture the signal that’s in the data, but once you exceed that level of complexity, you end up making your model worse.
The key insight behind over-fitting is that adding more parameters to an algorithm can actually make that algorithm worse. For example, say we model what kind of food we like based on every single ingredient. We take the foods we know we like, break them down into their individual ingredients, and build an algorithm that says: if these ingredients, in these proportions, are present in a future food, then I will like it. Obviously, this algorithm will not generalize well for future predictions. In hindsight it does a great job, but going forward it will just prevent us from experiencing any new foods.
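To make the food example concrete, here is a toy sketch (all foods, ingredients, and names are invented): a model that memorizes exact ingredient lists is perfect in hindsight but useless on anything new, while a one-parameter rule generalizes.

```python
# Hypothetical training data: ingredient sets of foods we know we like.
liked = [
    frozenset({"tomato", "basil", "garlic"}),
    frozenset({"garlic", "butter", "bread"}),
]

def memorizer(food):
    """Over-fit model: only 'likes' a food whose ingredient list
    exactly matches something in the training data."""
    return frozenset(food) in liked

def simple_rule(food):
    """Under-parameterized model: just checks for one ingredient
    common to everything we liked."""
    return "garlic" in food

# The memorizer is perfect on the data it was fit to...
print(all(memorizer(f) for f in liked))            # True
# ...but rejects a new dish we would almost certainly enjoy.
new_dish = {"garlic", "olive oil", "pasta"}
print(memorizer(new_dish), simple_rule(new_dish))  # False True
```

The memorizer has one "parameter" per training example, so it captures noise (the exact ingredient lists) instead of signal (that we like garlic), which is the over-fitting failure mode described above.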
Another example of over-fitting is how we tend to project our current preferences into the future. For example, we may enjoy small houses today so we buy a small house. This completely ignores what our future preferences might be. A better strategy would be to buy a medium house, since we know there is a decent chance that our preferences will change in the future. In other words, we should maximize our utility over time and not just in the present.
Algorithms aren’t a silver bullet
I think one of the things that Computer Science does really well is give us a way of articulating how hard problems are. There are huge classes of problems that are just considered intractable. We should not expect to be able to reliably get the correct solution in an efficient and repeatable way.
Even some of the algorithms that we’ve discussed today, the 37 percent rule and the optimal stopping problem, if you read the fine print, the 37 percent rule only works 37 percent of the time. It just turns out that optimal stopping in the no-information case is a hard problem, and even when you are following the optimal algorithm, you still fail 63 percent of the time.
Some people treat algorithms as the holy grail of understanding. In reality, all algorithms are flawed, and this should humble us. Nothing is ever clean-cut; instead, reality lies along a probability distribution. There is a large probability that Earth will still be around tomorrow, but there is also a nonzero probability that a meteor will wipe it out. Doubt is important for any rational decision maker.
That being said, outcomes aren't necessarily as important as the decision process behind them. Optimizing our decision process is still the best strategy, even when a given outcome turns out badly. Deciding to plan my day as if the Earth will still be around tomorrow is still the optimal strategy.
The key insight is that even if a decision rule is wrong 90% of the time, it may still be useful. The world is complicated, and sometimes being right 10% of the time is a huge accomplishment.
How this relates to investing
Albert Einstein said, “If you can’t explain something simply, you don’t understand it well enough.” Oftentimes in investing, we seek to know every little detail of a company. We start modeling the intricacies of daily weather patterns on the future growth rate of operating expenses. All of this does us a disservice. Our models need to separate the important information from the noise, and there is far more noise in the world than signal.
We also often get so bogged down in the search for more information that we never act on it. There is always a news report to read, a new study to take in, or a new model to try. At some point we need to stop searching for more information and just make a decision. The 37 percent rule can help us here.
Algorithmic thinking helps us solve problems by understanding and optimizing the parameters that go into them. It is a continuous process of optimization, one that accepts that an algorithm will almost never be 100% accurate.