Monthly Archives: March 2014

Matching Patterns: The Nature of Intelligence

From the nature of the brain, through the nature of the mind, we now move on to the last of this particular triumvirate: the nature of intelligence.

A good definition of intelligence follows relatively cleanly from my previous two posts. Since the brain is a modelling subsystem of reality, it follows that some brains simply have more information-theoretic power than others. However, I believe that this is not the whole story. Certainly a strictly bigger brain will be able to store more complex abstractions (just as a computer with more memory can run bigger computations), but the actual physical size of the human brain is not strongly correlated with individual intelligence (however you measure it).

Instead I posit the following: intelligence, roughly speaking, is related to the brain’s ability to match new patterns and derive new abstractions. In a sense, this is information-theoretic compression: the more abstract and compact the ideas one is able to reason with, the more powerful the models one is able to use.

The actual root of this ability almost certainly lies somewhere in the structure of the brain, but the exact mechanics are irrelevant here. It is more important to note that the resulting stronger abstractions are not the cause of raw intelligence so much as an effect: the cause is the ability to take disparate data and factor out all the patterns, reducing them down to as close to raw Shannon entropy as possible.
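
To make the compression framing concrete, here is a minimal Python sketch, using only the standard library (the sample data and helper name are my own illustration): data with a pattern to factor out squeezes far below its raw size, while near-random data, already close to raw Shannon entropy, barely compresses at all.

```python
import math
import os
import zlib
from collections import Counter

def entropy_bits_per_byte(data: bytes) -> float:
    """Empirical (first-order) Shannon entropy of a byte string."""
    n = len(data)
    counts = Counter(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

patterned = b"abcabcabc" * 100           # heavily patterned: 900 bytes
random_ish = os.urandom(len(patterned))  # no pattern to factor out

for label, data in (("patterned", patterned), ("random", random_ish)):
    squeezed = len(zlib.compress(data, 9))
    print(f"{label:9s}: {entropy_bits_per_byte(data):.2f} bits/byte, "
          f"compresses to {squeezed}/{len(data)} bytes")
```

Note that the naive symbol-count entropy misses the repetition entirely; the compressor, like a good abstraction, finds the pattern and exploits it.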

The Ghost in the Machine: The Nature of the Mind

Having just covered, in summary, the nature of the brain, we now turn to the much knottier issue of what constitutes the mind. Specifically, I want to look at the nature of self-awareness and true intelligence. Advances in modern computing have left most people with little doubt that we can simulate behavioural intelligence to within certain limits. But there still seems to be that missing spark that separates even the best computer from an actual human being.

That spark, I believe, boils down to recursive predictive self-modelling. The brain, as seen on Monday, can be viewed as a modelling subsystem of reality. But why should it be limited to modelling other parts of reality? Since from an information-theoretic perspective it must already be dealing in abstractions in order to model as much of reality as it can, there is nothing at all to prevent it from building an abstraction of itself and modelling that as well. Recursively, ad nauseam, until the resolution (in number of bits) of the abstraction no longer permits.

This self-modelling provides, in a very literal way, a sense of self. It also lets us make sense of certain idioms of speech, such as “I surprised myself”. On most theories of the mind, that notion of surprising oneself can only be a figure of speech, but self-modelling can actually make sense of it: your brain’s model of itself made a false prediction; the abstraction broke down.
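
To make this less abstract, here is a toy Python sketch of my own construction (the class, the bit budgets, and the “stimulus” are purely illustrative, not a claim about neural mechanics): a model that recursively contains a coarser model of itself until the bit budget runs out, and that registers “surprise” whenever the self-model’s prediction diverges from its actual behaviour.

```python
class Model:
    def __init__(self, resolution_bits: int):
        self.resolution_bits = resolution_bits
        # Each level can only afford a coarser abstraction of the level
        # above -- here, crudely, half the bits.
        self.self_model = Model(resolution_bits // 2) if resolution_bits >= 2 else None

    def depth(self) -> int:
        """How many levels of self-modelling the bit budget permits."""
        return 1 if self.self_model is None else 1 + self.self_model.depth()

    def act(self, stimulus: int) -> int:
        # "Real" behaviour uses the full resolution...
        return stimulus % (2 ** self.resolution_bits)

    def predict_own_action(self, stimulus: int) -> int:
        # ...but the self-model only has its coarser resolution to work with.
        if self.self_model is None:
            return 0
        return stimulus % (2 ** self.self_model.resolution_bits)

brain = Model(resolution_bits=8)
print("levels of self-model:", brain.depth())  # 4, then the bits run out

stimulus = 200
actual, predicted = brain.act(stimulus), brain.predict_own_action(stimulus)
if predicted != actual:
    print(f"'I surprised myself': predicted {predicted}, actually did {actual}")
```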

The Nature of the Brain

Our little subsection on biology and genetics has covered the core points I wanted to mention, so now we take a sharp left turn and head back to an application of systems theory. Specifically, the next couple of posts will deal with philosophy’s classic mind-body problem. If you haven’t already, I suggest you skim through my systems-theory posts, in particular “Reality as a System”. They really set the stage for what’s coming here.

As suggested in my last systems-theory post, if we view reality as a system then we can draw some interesting information-theoretic conclusions about our brains. Specifically, our brains must be seen as open (i.e. not closed), recursively modelling subsystems of reality.

Simply by being part of reality, the brain must be a subsystem therein. Because it interacts with other parts of reality, it is open, not closed. The claim that it provides a recursive model of (part of) reality is perhaps less obvious, but should still be intuitive on reflection. When we imagine what it would be like to make some decision, what else is our brain doing but simulating that part of reality? Obviously it is not simulating the actual underlying reality (atoms or molecules or whatever), but it is simulating some relevant abstraction of it.

In fact, I will argue later that this is effectively all our brains do: they recursively model an abstraction of reality. But this is obviously a more contentious claim, so I will leave it for another day.

Balancing Altruism (The “Selfish” Gene, continued)

It was originally only supposed to be a single post, and this one makes three. Now I know why Dawkins wrote it as a book! This should (hopefully) be my last post on the selfish gene for now; next week we’ll move on to other stuff.

Given my previous points, one might realistically wonder why people aren’t simply altruistic all the time. If altruism leads to better overall genetic survival, why are people (sometimes) selfish?

Like a lot of things, the actual result is a bit of a balancing act. While human beings share a huge portion of genetic material simply by being human, nobody’s genes are exactly the same. As such, there is still some competition between different human genomes for survival.

Especially in developed societies, where the human population is large and stable and the loss of an individual is unlikely to risk the loss of the species, people are more selfish because they can afford to be. Being selfish in that environment increases the probability that your specific genes will survive, but does not meaningfully decrease the probability that human genes in general will survive.

The genes themselves are not doing these probability calculations of course; it is simply the case that those genes whose expressed behaviour most closely matched the actual probabilities involved were the most likely to survive. It’s all one marvellous self-balancing system of feedback.
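
For the curious, here is a toy replicator-dynamics sketch of my own devising (a Hamilton’s-rule-style model with diminishing returns, not anything from Dawkins directly; every parameter is illustrative): the personal cost of altruism and the benefit of help flowing preferentially to relatives pull in opposite directions, and the gene frequency settles wherever the feedback balances.

```python
def step(p: float, r: float = 0.5, b: float = 0.4, c: float = 0.1) -> float:
    """One generation. p = altruist fraction, r = relatedness/assortment,
    b = benefit of being helped, c = cost of helping."""
    help_received_by_altruist = r + (1 - r) * p  # relatives are often altruists too
    help_received_by_selfish = (1 - r) * p
    # Helping matters less once most of the population is already helped:
    effective_b = b * (1 - p)
    fitness_a = 1 + effective_b * help_received_by_altruist - c
    fitness_s = 1 + effective_b * help_received_by_selfish
    mean_fitness = p * fitness_a + (1 - p) * fitness_s
    return p * fitness_a / mean_fitness  # replicator update

p = 0.9  # start mostly altruistic
for generation in range(500):
    p = step(p)
print(f"altruist fraction settles near {p:.2f}")  # ~0.50 with these parameters
```

No individual gene “computes” any of this, of course; the frequencies simply drift until the selective pressures cancel out, which is exactly the self-balancing feedback described above.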