Taking Risks

One of the things that people tend to have trouble with when making decisions is evaluating risks – mathematically, humans are just bad at it. Between zero-risk bias, the sunk-cost fallacy, and half a dozen other behavioural quirks, the human brain is a poor judge of risk. To quote Bruce Schneier in the linked article (well worth reading):

People are not computers. We don’t evaluate security trade-offs mathematically, by examining the relative probabilities of different events. Instead, we have shortcuts, rules of thumb, stereotypes and biases — generally known as “heuristics.” These heuristics affect how we think about risks, how we evaluate the probability of future events, how we consider costs, and how we make trade-offs. We have ways of generating close-to-optimal answers quickly with limited cognitive capabilities.

Unfortunately, these “close-to-optimal” answers are occasionally not just sub-optimal but truly and utterly wrong. When you consider how little work we put into evaluating most risks, the surprising thing isn’t that we sometimes get it wrong – it’s that we sometimes get it right! But these shortcuts are necessary – rigorously evaluating risks is extremely slow, and it simply isn’t practical to evaluate every possible risk this way.
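To make the contrast concrete, here is a minimal sketch (in Python, with entirely invented numbers) of what evaluating a risk “mathematically” would actually involve – enumerating the outcomes, assigning each a probability and a cost, and comparing expected values across your options. Our heuristics exist precisely because grinding through this for every daily decision would be hopelessly slow.

```python
def expected_cost(outcomes):
    """outcomes: list of (probability, cost) pairs whose probabilities sum to 1."""
    return sum(p * cost for p, cost in outcomes)

# Hypothetical choice with made-up numbers: carry an umbrella or not,
# given a 30% chance of rain.
carry_umbrella = [(1.0, 1.0)]                # certain, but small, hassle
leave_it_home  = [(0.3, 10.0), (0.7, 0.0)]   # 30% chance of getting soaked

for name, outcomes in [("carry umbrella", carry_umbrella),
                       ("leave it home", leave_it_home)]:
    print(f"{name}: expected cost = {expected_cost(outcomes):.1f}")
# carry umbrella: expected cost = 1.0
# leave it home: expected cost = 3.0
```

Even in this toy case you have to invent probabilities and costs out of thin air; a rule of thumb like “the sky looks grey, take the umbrella” gets you roughly the same answer for a fraction of the effort.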

So next time you see someone taking a stupid risk, don’t judge them – think about which shortcuts and biases they must be using to miscalculate the risk involved, and make sure that you’re not falling prey to some bias in the other direction.

And remember, the ultimate payoff for the risk they’re taking may be larger than you think, since it probably feeds into one of their secret goals.

Secret Goals

First off, apologies for the long absence; life has a habit of getting in the way of philosophy. Back to decision-making and game theory.

Now, obviously, whenever you make a decision you have certain goals in mind, and you are trying to choose the option that best fits those goals. If you’re looking at a menu, your goals might be to pick something tasty, but not too expensive, and so on. You can have multiple goals, and they can sometimes conflict, in which case you have to compromise or prioritize. This is all pretty basic stuff.

But what people tend not to realize (or at least, not to think about too much) is that frequently our “goals” are not, in themselves, things we value; we value them because they let us achieve bigger, better goals. And those goals may be in the service of even higher goals. What this means is that all of these intermediate layers of “goals” are really just means that we use so frequently we have abstracted them into something that we can think of as inherently valuable. This saves us the mental work of traversing all the way back to the root wellspring of value each time we want to pick food off a menu. The result is these layers of abstract “goals”. Yet another set of layers of abstractions!

So what are these root goals we tend not to think about? Are they so-called “life goals” such as raising a family or eventually running your own company? No. Those are still just intermediate abstractions. The real goals are still one more step away, and are almost universally biological in nature: the survival and reproduction of our genetic code, whether through ourselves, our offspring, or our relations. These are our “secret goals”.

So how does this help us understand decision-making? It seems intuitively impossible to understand somebody’s decisions if we don’t understand the goals behind them. But when we think exclusively in terms of our shorter-term, abstract “goals”, these are things that change, that we can abandon or reshape to suit our current situation. Thinking of these instead as methods of satisfying our underlying goals (which do not change) provides a much more consistent picture of human decision-making. This consistent picture is one to which we might even be able to apply game theory.