Potential Breakthrough Links Game Theory and Evolution

(Forgive my departure from the expected schedule; this was good enough to jump the queue.)

It’s always nice to be validated by science. Only a week or so after finally wrapping up my series of posts on game theory and evolution, a serious scientific paper was published titled “Algorithms, games, and evolution”. For those of you not so keen on reading the original paper, Quanta Magazine has an excellent summary. The money quote is this one from the first paragraph of the article:

an algorithm discovered more than 50 years ago in game theory and now widely used in machine learning is mathematically identical to the equations used to describe the distribution of genes within a population of organisms

Now, the paper is still being picked apart by various other scientists, and more details could turn up (for all I know it could be retracted tomorrow, though I doubt it). Even if the wilder claims floating around the net turn out to be false, the fundamental truth stands: evolution drives behaviour, and evolution is a probabilistic, game-theory-driven process. While it’s easy to see that link on an intuitive level, it looks like we’ve finally started discovering the formal mathematical connections as well.
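For the curious: the 50-year-old algorithm in question is the multiplicative weights update. Here’s a minimal sketch of why the two updates coincide (the numbers are toy values of my own, not anything from the paper): scale each option by its payoff, or by its fitness, and renormalise, and you get the same formula either way.

```python
import numpy as np

def mwu_step(probs, payoffs, eta):
    """One round of the multiplicative weights update: scale each
    strategy's weight by (1 + eta * payoff), then renormalise."""
    w = probs * (1.0 + eta * payoffs)
    return w / w.sum()

def replicator_step(freqs, fitness):
    """One generation of discrete replicator dynamics from population
    genetics: scale each allele by its fitness, divide by the mean."""
    f = freqs * fitness
    return f / f.sum()

probs = np.array([1/3, 1/3, 1/3])    # three strategies / alleles
payoffs = np.array([0.2, 0.5, 0.1])  # toy game payoffs
eta = 0.1                            # learning rate / selection strength

# Reading "fitness" as (1 + eta * payoff) makes the two updates identical.
print(mwu_step(probs, payoffs, eta))
print(replicator_step(probs, 1.0 + eta * payoffs))
```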


Conflict and Cooperation

And finally we come to the last post in our game theory subsection, itself the last section in what I originally called “practical worldbuilding”. Conflict and cooperation are in many ways two sides of the same coin: ways in which multiple people can interact. Since this whole section has been about people making decisions, conflict and cooperation are really about how groups of people make decisions.

In many ways the concepts we’ve already covered are more than good enough to handle this case; it just gets a bit unwieldy to start working through all the details for each individual. You start running into behaviours like Keynesian Beauty Contests and things become… complicated. Theoretically we want something like Asimov’s psychohistory, but that is still sadly fictional.
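To make that concrete: the textbook formalisation of the beauty contest is the “guess 2/3 of the average” game. A quick sketch using level-k reasoning (a standard model, and my own choice of framing, not something from the original posts) shows why this gets complicated fast.

```python
def level_k_guess(k, anchor=50.0, factor=2/3):
    """Level-k reasoning in the 'guess 2/3 of the average' game.
    A level-0 player guesses the anchor; a level-k player best-responds
    to a crowd of level-(k-1) players by guessing factor times their guess."""
    return anchor * factor ** k

for k in range(6):
    print(f"level-{k} guess: {level_k_guess(k):.2f}")
# Guesses shrink toward 0, the Nash equilibrium; real players typically
# stop after a level or two, so the "right" guess depends on how deeply
# you think everyone else is thinking.
```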

Still, we can say some interesting and hopefully useful things. Conflict occurs when another person’s apparent goals are incompatible with your own; cooperation occurs when they are compatible. As we have seen, however, goals are tricky things. And just as people are pretty bad at evaluating risks, we’re also pretty bad at evaluating the goals of other people, even when we’re not being actively deceived.
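To make those definitions concrete, here’s a deliberately naive sketch; the goal sets and the overlap rule are invented purely for illustration.

```python
def classify(my_goals, their_apparent_goals):
    """Toy classifier: treat goals as sets of desired outcomes and call
    any overlap 'cooperation'. Deliberately naive; see the second
    example below."""
    shared = my_goals & their_apparent_goals
    if shared:
        return f"cooperation possible on {sorted(shared)}"
    return "conflict: no compatible goals visible"

print(classify({"ship the project", "leave early"},
               {"ship the project", "get promoted"}))

# The naive rule also calls this "cooperation", even though only one
# person can win a single promotion; evaluating other people's goals
# is harder than it looks.
print(classify({"get promoted"}, {"get promoted"}))
```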

In even larger groups of course, you start getting into memetics and social negotiation. It’s all one big sliding scale of behavioural analysis, tied up with a bow.

Taking Risks

One of the things that people tend to have trouble with when making decisions is evaluating risks – mathematically humans are just bad at it. Between zero-risk bias, the sunk-cost fallacy, and half a dozen other behavioural quirks, the human brain is a poor judge of risk. To quote Bruce Schneier in the linked article (well worth reading):

People are not computers. We don’t evaluate security trade-offs mathematically, by examining the relative probabilities of different events. Instead, we have shortcuts, rules of thumb, stereotypes and biases — generally known as “heuristics.” These heuristics affect how we think about risks, how we evaluate the probability of future events, how we consider costs, and how we make trade-offs. We have ways of generating close-to-optimal answers quickly with limited cognitive capabilities.

Unfortunately, these “close-to-optimal” answers are occasionally not just sub-optimal but truly and utterly wrong. When you think about how little work we do in evaluating most risks, the surprising point isn’t that we sometimes get it wrong – it’s that we sometimes get it right! But this is necessary – rigorously evaluating risks is extremely slow, and it simply isn’t practical to evaluate every possible risk in this way.
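For contrast, here’s what the slow, rigorous evaluation actually looks like. The numbers are invented, and chosen to show zero-risk bias at work.

```python
def expected_loss(prob, loss):
    """The 'slow, rigorous' evaluation: probability times magnitude."""
    return prob * loss

# Two insurance offers, each removing 5 percentage points of risk:
#   A: cut a 5% risk of losing $10,000 down to 0%  (zero-risk framing)
#   B: cut a 10% risk of losing $20,000 down to 5%
saved_a = expected_loss(0.05, 10_000) - expected_loss(0.00, 10_000)
saved_b = expected_loss(0.10, 20_000) - expected_loss(0.05, 20_000)

print(f"Offer A saves ${saved_a:,.0f} in expected loss")  # $500
print(f"Offer B saves ${saved_b:,.0f} in expected loss")  # $1,000
# Zero-risk bias pulls people toward A ("eliminates the risk entirely!")
# even though B is worth twice as much in expectation.
```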

So next time you see someone taking a stupid risk, don’t judge them – think about what shortcuts and biases they must be using to miscalculate the risk involved, and make sure that you’re not falling prey to some bias in the other direction.

And remember, the ultimate payoff for the risk they’re taking may be larger than you think, since it probably feeds into one of their secret goals.

Secret Goals

First off, apologies for the long absence; life has a habit of getting in the way of philosophy. Back to decision-making and game theory.

Now, obviously whenever you make a decision you have certain goals in mind, and you are trying to make the decision that best fits those goals. If you’re looking at a menu, your goals may be to pick something tasty but not too expensive, and so on. You can have multiple goals, and they can sometimes conflict, in which case you have to compromise or prioritize. This is all pretty basic stuff.

But what people tend not to realize (or at least, not to think about too much) is that frequently our “goals” are not, in themselves, things we value; we value them because they let us achieve bigger, better goals. And those goals may be in the service of even higher goals. What this means is that all of these intermediate layers of “goals” are really just means that we use so frequently we have abstracted them into something we can think of as inherently valuable. This saves us the mental work of traversing all the way back to the root wellspring of value each time we want to pick food off a menu. The result is these layers of abstract “goals”: yet another stack of abstractions!

So what are these root goals we tend not to think about? Are they so-called “life goals” such as raising a family or eventually running your own company? No. Those are still just intermediate abstractions. The real goals are still one more step away, and they are almost universally biological in nature: the survival and reproduction of our genetic code, whether through ourselves, our offspring, or our relations. These are our “secret goals”.
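To pull the last few paragraphs together, here’s a small sketch; the specific chain of goals is invented for illustration. Each abstracted “goal” is just a pointer to the higher goal it serves, and only the biological root is valuable in itself.

```python
# A toy goal hierarchy: each "goal" points at the higher goal it serves.
SERVES = {
    "pick the pasta": "eat something tasty and cheap",
    "eat something tasty and cheap": "stay healthy and solvent",
    "stay healthy and solvent": "survival and reproduction",  # the root
}

def trace_to_root(goal):
    """Walk the chain of intermediate goals back to the underlying one.
    In practice we never do this walk; we cache the intermediate layers
    and treat them as valuable in themselves."""
    chain = [goal]
    while goal in SERVES:
        goal = SERVES[goal]
        chain.append(goal)
    return chain

print(" -> ".join(trace_to_root("pick the pasta")))
```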

So how does this help us understand decision-making? It seems intuitively impossible to understand somebody’s decisions if we don’t understand the goals behind them. But our shorter-term, abstract “goals” are things that change, that we can abandon or reshape to suit our current situation. Thinking of these instead as methods of satisfying our underlying goals (which do not change) provides a much more consistent picture of human decision-making, one to which we might even be able to apply game theory.

Game Theory

We come, at last, to the final subsection of our “worldbuilding” series. Having touched on biology, culture, and the mind, we now turn back to a slightly more abstract topic: game theory. More generally, we are going to be looking at how people make decisions, why they make the decisions they do, and how these decisions tend to play out over the long term.

This topic draws on everything else we’ve covered in worldbuilding. In hindsight, understanding human decision-making was really the goal of this whole section; I just didn’t realize it until now. I’m sure there’s something very meta about that.

Game theory is traditionally concerned with the more mathematical study of decisions between rational decision-makers, but it’s also bled over into the fuzzier realms of psychology and philosophy. Since humans are (clearly) not always rational, it is this fuzzy boundary where we will spend most of our time.
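As a taste of that traditional, mathematical side before we wander off into the fuzz, here’s the canonical example: the Prisoner’s Dilemma, with standard textbook payoffs (the code itself is my own toy sketch).

```python
# Payoffs are (row player, column player), standard ordering T > R > P > S.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(their_move):
    """A perfectly 'rational' player picks whichever move maximises
    their own payoff against a fixed opponent move."""
    return max(("cooperate", "defect"),
               key=lambda mine: PAYOFFS[(mine, their_move)][0])

# Defecting is the best response to either move: a dominant strategy.
print(best_response("cooperate"), best_response("defect"))
# So two rational players both defect and get (1, 1), worse than the
# (3, 3) they could have had. Humans, being not quite rational, often
# do better; hence the fuzzy boundary we'll be spending our time on.
```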

The wiki article on game theory is good, but fairly math-heavy. Feel free to skim.