
Constructing the Mind, Part 2

Whoops, it’s been over a month since I finished my last post (life got in the way) and so now I’m going to have to dig a bit to figure out where I wanted to go with that. Let’s see…

We ended up with the concept of a mechanical brain mapping complex inputs to physical reactions. The next obviously useful layer of complexity is for our brain to store some internal state, permitting the same inputs to produce different outputs based on the current situation. Of course this state information is going to be effectively analogue in a biological system, implemented via chemical balances. If this sounds familiar, it really should: it’s effectively a simple emotional system.
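
As a rough illustration, here's a toy Python sketch (everything in it is invented for the example, and it is not a model of real neurochemistry) of a reaction rule whose output depends on a crude internal state:

```python
# A minimal sketch: the same stimulus maps to different reactions depending
# on an internal "fear" level, a stand-in for the chemical balances above.

class StatefulBrain:
    def __init__(self):
        self.fear = 0.0  # crude analogue state: 0.0 = calm, 1.0 = terrified

    def react(self, stimulus):
        # The internal state shifts with experience...
        if stimulus == "loud_noise":
            self.fear = min(1.0, self.fear + 0.3)
        elif stimulus == "quiet":
            self.fear = max(0.0, self.fear - 0.1)

        # ...so the same input can now produce different outputs.
        if stimulus == "movement_nearby":
            return "flee" if self.fear > 0.5 else "investigate"
        return "ignore"

brain = StatefulBrain()
print(brain.react("movement_nearby"))  # investigate (calm)
brain.react("loud_noise")
brain.react("loud_noise")
print(brain.react("movement_nearby"))  # flee (fearful)
```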

The next step is strictly Pavlovian. With one form of internal state memory already in place, the growth of another, complementary layer is not far-fetched. Learning that one input precedes a second input with high probability, and creating a new reaction to that first input, is predictably mechanical, though still mostly beyond what modern AI has accomplished even if you set aside the problem of tokenized input. But here we must also tie back to tokenization itself (the idea I discussed in the previous post). As the complexity of tokenized input grows, so does the abstracting power of the mind that recognizes the multitude of shapes, colours, sounds, etc. and turns them into the ideas of “animal” or “tree” or what have you. When this abstracting power is combined with simple memory and turned back on the tokens it is already producing, we end up with something that is otherwise very hard to construct: mimicry.
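
To make the Pavlovian layer concrete before moving on, here's a toy sketch in Python (the names, counts, and thresholds are invented purely for illustration) of a reaction table that learns a conditioned response once one token reliably precedes another:

```python
from collections import defaultdict

class Conditioner:
    def __init__(self, innate_reactions, threshold=0.8, min_seen=5):
        self.reactions = dict(innate_reactions)               # token -> innate reaction
        self.follows = defaultdict(lambda: defaultdict(int))  # A -> {B: times B followed A}
        self.seen = defaultdict(int)                          # token -> times observed
        self.threshold = threshold
        self.min_seen = min_seen
        self.prev = None

    def observe(self, token):
        self.seen[token] += 1
        if self.prev is not None:
            self.follows[self.prev][token] += 1
        self.prev = token
        return self.react(token)

    def react(self, token):
        if token in self.reactions:
            return self.reactions[token]
        # Conditioned response: if this token reliably precedes one we already
        # react to, borrow that reaction.
        for successor, count in self.follows[token].items():
            if (self.seen[token] >= self.min_seen
                    and count / self.seen[token] >= self.threshold
                    and successor in self.reactions):
                return self.reactions[successor]
        return "no_reaction"

dog = Conditioner({"food": "salivate"})
for _ in range(6):
    dog.observe("bell")
    dog.observe("food")
print(dog.react("bell"))  # salivate - the bell alone now triggers the food reaction
```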

In order for an animal to mimic the behaviour of another, it must be able to tokenize its sense input in a relevant way, draw the abstract parallel between the animal it sees and itself, store that abstract process in at least a temporary way, and apply it to new situations. This is an immensely complex task, and yet it falls naturally out of the abilities I have so far laid out. (If you couldn’t tell, this is where I leave baseless speculation behind and engage in outrageous hand-waving.)

And now I’m out of time, just as I’m getting back in the swing of things. Hopefully the next update comes sooner!


A Brief Detour: Constructing the Mind, Part 1

I now take a not-so-brief detour to lay out a theory of brain/mind, from a roughly evolutionary point of view, that will lay the foundation for my interrupted discussion of self-hood and identity. I tied in several related problems when working this out, in no particular order:

  • The brain is massively complex; given an understanding of evolution, what is at least one potential path for this complexity to grow while still being adaptively useful at every step?
  • “Strong” Artificial Intelligence as a field has failed again and again with various approaches; why?
  • All the questions of philosophy of identity I mentioned in my previous post.
  • Given a roughly physicalist answer to the mind-body problem (which I guess I’ve implied a few times but never really spelled out), how do you explain the experiential nature of consciousness?

Observant readers may note that I briefly touched on this subject once before. What follows here is a much longer, more complex exposition of the same basic ideas; I’ve tweaked a few things and filled in a lot more blanks, but the broad approach is roughly the same.


Let’s start with the so-called “lizard hindbrain”, capable only of immediate, instinctual reactions to sensory input. This includes things like automatically pulling your hand away when you touch something hot. AI research has long been able to replicate this trivially; it’s a pretty simple mapping of inputs to reactions. Not a whole lot to see here: a very basic and (importantly) mechanical process. Even the most ardent dualists would have trouble arguing that creatures with only this kind of brain have something special going on inside. This lizard hindbrain is a good candidate for our “initial evolutionary step”; all it takes is a simple cluster of nerve fibres and voilà.
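
In code, this layer is almost embarrassingly simple; a toy sketch like the following (stimulus names invented for illustration) captures the whole idea:

```python
# The "lizard hindbrain" as code: a direct, fixed mapping from sensory input
# to reaction, with no state and no learning.

REFLEXES = {
    "heat_on_skin": "withdraw_limb",
    "bright_flash": "blink",
    "sudden_pressure": "startle",
}

def hindbrain(stimulus):
    return REFLEXES.get(stimulus, "do_nothing")

print(hindbrain("heat_on_skin"))  # withdraw_limb
```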

The next step isn’t so much a discrete step as an increase, specifically in the complexity of inputs recognized. While it’s easy to understand and emulate a rule matching “pain”, it’s much harder to understand and emulate a rule matching “the sight of another animal”. In fact it is at this (apparently) simple step that a lot of hard AI falls down, because the pattern matching required to turn raw sense data into “tokens” (objects etc.) is incredibly difficult, and without these tokens the rest of the process of consciousness doesn’t really have a foundation. Trying to build a decision-making model without tokenized sense input seems to me a bit like trying to build an airplane out of cheese: you just don’t have the right parts to work with.
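
If you want a feel for where that token layer sits, here's a deliberately crude Python sketch (the feature axes and prototypes are made up, and real perception is nothing like this simple): raw sense data arrives as numbers, and a nearest-prototype match emits a token that downstream rules can react to.

```python
import math

# Invented feature axes, e.g. (movement, warmth, vertical extent).
PROTOTYPES = {
    "animal": [0.9, 0.8, 0.1],
    "tree":   [0.0, 0.2, 0.9],
    "rock":   [0.0, 0.1, 0.3],
}

def tokenize(raw):
    # Emit the token whose stored prototype is nearest to the raw feature vector.
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(PROTOTYPES, key=lambda name: dist(raw, PROTOTYPES[name]))

print(tokenize([0.8, 0.7, 0.2]))  # "animal" - a token the reflex rules can now match on
```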

So now we have a nerve cluster that recognizes non-trivial patterns in sense input and triggers physical reactions. While this is something that AI has trouble with, it’s still trivially a mechanical process, just a very complex one. The next step is perhaps less obviously mechanical, but this post is long enough, so you’ll just have to wait for it 🙂

Potential Breakthrough Links Game Theory and Evolution

(Forgive my departure from the expected schedule, this was good enough to jump the queue).

It’s always nice to be validated by science. Only a week or so after finally wrapping up my series of posts on game theory and evolution, a serious scientific paper has been published titled “Algorithms, games, and evolution”. For those of you not so keen on reading the original paper, Quanta Magazine has an excellent summary. The money quote is this one from the first paragraph of the article:

an algorithm discovered more than 50 years ago in game theory and now widely used in machine learning is mathematically identical to the equations used to describe the distribution of genes within a population of organisms

Now, the paper is still being picked apart by various other scientists and more details could turn up (for all I know it could be retracted tomorrow), but I doubt it. Even if the wilder claims floating around the net turn out to be false, the fundamental truth stands: evolution drives behaviour, and evolution is a probabilistic, game-theory-driven process. While it’s easy to see that link on an intuitive level, it looks like we’ve finally started discovering the formal mathematical connections as well.
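
For anyone who wants to see the flavour of the correspondence rather than take it on faith, here's a small numerical sketch (payoffs and step size are arbitrary, and this is only the simplest version of the connection, not the paper's full result about recombination): the multiplicative weights update from game theory and machine learning performs the same computation as a discrete replicator-dynamics step from population genetics when fitness is taken to be one plus a small multiple of payoff.

```python
def mwu_step(weights, payoffs, eps=0.1):
    # Multiplicative weights update: boost each strategy by its payoff, renormalize.
    new = [w * (1 + eps * p) for w, p in zip(weights, payoffs)]
    total = sum(new)
    return [w / total for w in new]

def replicator_step(freqs, fitnesses):
    # Discrete replicator dynamics: each type grows by its fitness relative to the mean.
    mean_fitness = sum(x * f for x, f in zip(freqs, fitnesses))
    return [x * f / mean_fitness for x, f in zip(freqs, fitnesses)]

payoffs = [1.0, 0.5, 0.2]
fitnesses = [1 + 0.1 * p for p in payoffs]   # fitness = 1 + eps * payoff

dist = [1 / 3, 1 / 3, 1 / 3]
for _ in range(3):
    a = mwu_step(dist, payoffs)
    b = replicator_step(dist, fitnesses)
    assert all(abs(x - y) < 1e-12 for x, y in zip(a, b))  # the two updates agree
    dist = a
print(dist)  # probability mass drifts toward the higher-payoff strategy
```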

Conflict and Cooperation

And finally we come to the last post in our game theory subsection, itself the last section in what I originally called “practical worldbuilding”. Conflict and cooperation are in many ways two sides of the same coin: ways in which multiple people can interact. Since this whole section has been about people making decisions, conflict and cooperation are really about how groups of people make decisions.

In many ways the concepts we’ve already covered are more than good enough to handle this case; it just gets a bit unwieldy to start working through all the details for each individual. You start running into behaviours like Keynesian Beauty Contests, and things become… complicated. Theoretically we want something like Asimov’s psychohistory, but that is still sadly fictional.

Still, we can say some interesting and hopefully useful things. Conflict occurs when the apparent goals of another person are incompatible with your own; cooperation occurs when they are compatible. As we have seen, however, goals are tricky things. And just as people are pretty bad at evaluating risks, we’re also pretty bad at evaluating the goals of other people, even when we’re not being actively deceived.
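
As a toy formalization of those definitions (the goal labels and the "incompatibility" test are invented, and real goals are far messier to read), you could classify an interaction like this:

```python
def classify(my_goals, their_apparent_goals, incompatible_pairs):
    # `incompatible_pairs` lists pairs of goals that cannot both be satisfied.
    for mine in my_goals:
        for theirs in their_apparent_goals:
            if (mine, theirs) in incompatible_pairs or (theirs, mine) in incompatible_pairs:
                return "conflict"
    return "cooperation"

incompatible = {("win_the_contract", "win_the_contract_for_a_rival")}
print(classify({"win_the_contract"}, {"win_the_contract_for_a_rival"}, incompatible))  # conflict
print(classify({"win_the_contract"}, {"get_paid_by_the_hour"}, incompatible))          # cooperation
```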

In even larger groups of course, you start getting into memetics and social negotiation. It’s all one big sliding scale of behavioural analysis, tied up with a bow.

Taking Risks

One of the things that people tend to have trouble with when making decisions is evaluating risks – mathematically humans are just bad at it. Between zero-risk bias, the sunk-cost fallacy, and half a dozen other behavioural quirks, the human brain is a poor judge of risk. To quote Bruce Schneier in the linked article (well worth reading):

People are not computers. We don’t evaluate security trade-offs mathematically, by examining the relative probabilities of different events. Instead, we have shortcuts, rules of thumb, stereotypes and biases — generally known as “heuristics.” These heuristics affect how we think about risks, how we evaluate the probability of future events, how we consider costs, and how we make trade-offs. We have ways of generating close-to-optimal answers quickly with limited cognitive capabilities.

Unfortunately, these “close-to-optimal” answers are occasionally not just sub-optimal but truly and utterly wrong. When you think about how little work we do in evaluating most risks, the surprising point isn’t that we sometimes get it wrong – it’s that we sometimes get it right! But this shortcut-taking is necessary – rigorously evaluating risks is extremely slow, and it simply isn’t practical to evaluate every possible risk that way.
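
Here's a small, made-up example of the gap: a "rigorous" evaluation that maximizes expected value versus a zero-risk-bias heuristic that grabs the option with no chance of loss, even when that option is worth less on average.

```python
# Each option: (probability of loss, size of loss, gain if the loss doesn't happen).
# All numbers invented for illustration.
options = {
    "risky_shortcut": (0.10, 100.0, 40.0),
    "safe_long_way":  (0.00, 0.0, 10.0),
}

def expected_value(p_loss, loss, gain):
    return (1 - p_loss) * gain - p_loss * loss

# Rigorous evaluation: pick the option with the highest expected value.
rational_choice = max(options, key=lambda name: expected_value(*options[name]))

# Zero-risk bias: grab any option with literally zero chance of loss, if one exists.
zero_risk = [name for name, (p, _, _) in options.items() if p == 0.0]
biased_choice = zero_risk[0] if zero_risk else rational_choice

print(rational_choice, biased_choice)  # risky_shortcut vs safe_long_way
```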

So next time you see someone taking a stupid risk, don’t judge them – think about what shortcuts and biases they must be using in order to miscalculate the risk involved, and make sure that you’re not falling under some bias in the other direction.

And remember, the ultimate payoff for the risk they’re taking may be larger than you think, since it probably feeds into one of their secret goals.

Secret Goals

First off, apologies for the long absence; life has a habit of getting in the way of philosophy. Back to decision-making and game theory.

Now, obviously whenever you make a decision you must have certain goals in mind, and you are trying to make a decision to best fit those goals. If you’re looking at a menu, your goals may be to pick something tasty, but not too expensive, etc. You can have multiple goals, and they can sometimes conflict, in which case you have to compromise or prioritize. This is all pretty basic stuff.

But what people tend not to realize (or at least, not to think about too much) is that frequently our “goals” are not, in themselves, things we value; we value them because they let us achieve bigger, better goals. And those goals may be in the service of even higher goals. What this means is that all of these intermediate layers of “goals” are really just means that we use so frequently we have abstracted them into something that we can think of as inherently valuable. This saves us the mental work of traversing all the way back to the root wellspring of value each time we want to pick food off a menu. The result is these layers of abstract “goals”. Yet another set of layers of abstractions!

So what are these root goals we tend not to think about? Are they so-called “life goals” such as raising a family or eventually running your own company? No. Those are still just intermediate abstractions. The real goals are still one more step away, and are almost universally biological in nature. The survival and reproduction of our genetic code, whether through ourselves, our offspring, or our relations. These are our “secret goals”.
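
One way to picture the layering is as a chain of goals, each serving the one above it and terminating in a biological root. A toy sketch (the particular chain is invented for illustration):

```python
# Each entry points at the higher goal the "goal" actually serves.
serves = {
    "pick_the_cheap_tasty_dish": "save_money",
    "save_money": "provide_for_family",
    "provide_for_family": "survival_and_reproduction_of_our_genes",
}

def root_goal(goal):
    # Follow the chain of intermediate goals until nothing sits above it.
    while goal in serves:
        goal = serves[goal]
    return goal

print(root_goal("pick_the_cheap_tasty_dish"))  # survival_and_reproduction_of_our_genes
```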

So how does this help us understand decision-making? It seems intuitively impossible to understand somebody’s decisions if we don’t understand the goals behind them. But when we think exclusively in terms of our shorter-term, abstract “goals”, these are things that change, that we can abandon or reshape to suit our current situation. Thinking of these instead as methods of satisfying our underlying goals (which do not change) provides a much more consistent picture of human decision-making. This consistent picture is one to which we might even be able to apply game theory.

Game Theory

We come, at last, to the final subsection of our “worldbuilding” series. Having touched on biology, culture, and the mind, we now turn back to a slightly more abstract topic: game theory. More generally, we are going to be looking at how people make decisions, why they make the decisions they do, and how these decisions tend to play out over the long term.

This topic draws on everything else we’ve covered in worldbuilding. In hindsight, understanding human decision-making was really the goal of this whole section; I just didn’t realize it until now. I’m sure there’s something very meta about that.

Game theory is traditionally concerned with the more mathematical study of decisions between rational decision-makers, but it’s also bled over into the fuzzier realms of psychology and philosophy. Since humans are (clearly) not always rational, it is this fuzzy boundary where we will spend most of our time.
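
As a minimal illustration of what that mathematical core looks like, here's a toy payoff table and a best-response calculation (the game is the stock Prisoner's Dilemma, used here only as a standard example; nothing in this post depends on it):

```python
# payoffs[(my_move, their_move)] = (my_payoff, their_payoff)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"): (0, 5),
    ("defect", "cooperate"): (5, 0),
    ("defect", "defect"): (1, 1),
}

def best_response(their_move):
    # A "rational" player picks the move with the highest payoff for themselves,
    # given what the other player is doing.
    return max(["cooperate", "defect"], key=lambda mine: payoffs[(mine, their_move)][0])

print(best_response("cooperate"))  # defect
print(best_response("defect"))     # defect - defection dominates in this particular game
```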

The wiki article on game theory is good, but fairly math-heavy. Feel free to skim.