A Brief Detour: Constructing the Mind, Part 1

I now take a not-so-brief detour to lay out a theory of brain/mind, from a roughly evolutionary point of view, that will provide the foundation for my interrupted discussion of selfhood and identity. I tied together several related problems when working this out, in no particular order:

  • The brain is massively complex; given an understanding of evolution, what is at least one potential path for this complexity to grow while still being adaptively useful at every step?
  • “Strong” Artificial Intelligence as a field has failed again and again with various approaches; why?
  • All the questions of philosophy of identity I mentioned in my previous post.
  • Given a roughly physicalist answer to the mind-body problem (which I guess I’ve implied a few times but never really spelled out), how do you explain the experiential nature of consciousness?

Observant readers may note that I briefly touched on this subject once before. What follows here is a much longer, more complex exposition, but it follows the same basic ideas; I’ve tweaked a few things and filled in a lot more blanks, but the broad approach is roughly the same.

Let’s start with the so-called “lizard hindbrain”, capable only of immediate, instinctual reactions to sensory input. This includes things like automatically pulling your hand away when you touch something hot. AI research has long been able to replicate this trivially; it’s a pretty simple mapping of inputs to reactions. Not a whole lot to see here: a very basic and (importantly) mechanical process. Even the most ardent dualists would have trouble arguing that creatures with only this kind of brain have something special going on inside. This lizard hindbrain is a good candidate for our “initial evolutionary step”; all it takes is a simple cluster of nerve fibres and voilà.
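To make the “simple mapping of inputs to reactions” concrete, here is a minimal sketch of the hindbrain as a lookup table. All of the stimulus and action names are illustrative inventions, not anyone’s actual model:

```python
# A sketch of the "lizard hindbrain": a direct, fixed mapping from
# sensory stimuli to reflex actions. No deliberation, no state, no
# representation of the world -- just input in, reaction out.
REFLEXES = {
    "heat_on_hand": "withdraw_hand",
    "sudden_shadow": "flinch",
    "touch_on_eye": "blink",
}

def react(stimulus):
    """Return the instinctual reaction for a stimulus, or None if the
    stimulus isn't wired to anything."""
    return REFLEXES.get(stimulus)

print(react("heat_on_hand"))  # -> withdraw_hand
```

The point of the sketch is how little is going on: the whole “mind” is one table lookup, which is why even the most ardent dualist shouldn’t be tempted to posit anything non-mechanical here.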

The next step isn’t so much a discrete step as an increase, specifically in the complexity of inputs recognized. While it’s easy to understand and emulate a rule matching “pain”, it’s much harder to understand and emulate a rule matching “the sight of another animal”. In fact it is at this (apparently) simple step that a lot of strong AI falls down, because the pattern matching required to turn raw sense data into “tokens” (objects etc.) is incredibly difficult, and without these tokens the rest of the process of consciousness doesn’t really have a foundation. Trying to build a decision-making model without tokenized sense input seems to me a bit like trying to build an airplane out of cheese: you just don’t have the right parts to work with.
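The asymmetry between the two halves of the problem can be sketched in code. The decision rules over tokens are trivial; the tokenizer, which is where real AI systems struggle, is deliberately left as a stub. Again, all names here are illustrative:

```python
# The easy half: once sense data has been tokenized into discrete
# objects, decision rules are simple lookups and conditionals.
def decide(tokens):
    """Pick a reaction given a set of recognized tokens."""
    if "predator" in tokens:
        return "flee"
    if "food" in tokens:
        return "approach"
    return "idle"

# The hard half: turning raw sense data into those tokens. This is the
# step where pattern recognition lives, stubbed out here because it is
# precisely the part that is incredibly difficult.
def tokenize(raw_sense_data):
    raise NotImplementedError("the genuinely hard pattern-matching step")

# The rule layer works fine when we hand it tokens directly:
print(decide({"predator", "food"}))  # -> flee
```

The sketch makes the airplane-out-of-cheese point directly: `decide` is easy to write but useless without `tokenize`, and `tokenize` is the part nobody can trivially supply.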

So now we have a nerve cluster that recognizes non-trivial patterns in sense input and triggers physical reactions. While this is something that AI has trouble with, it’s still trivially a mechanical process, just a very complex one. The next step is perhaps less obviously mechanical, but this post is long enough, so you’ll just have to wait for it 🙂

Abstract Identity through Social Interaction

Identity is a complicated subject, made more confusing by the numerous different meanings in the numerous different fields where we use the term. In mathematics, the term identity already takes on several distinct uses, but fortunately those uses are rigorously defined and relatively uncontroversial. In the social sciences (including psychology, etc.) identity is something entirely different, and the subject of ongoing debate and research. In philosophy, identity refers to yet a third concept. While all of these meanings bear some relation to one another, it’s not at all obvious that they’re actually identical, so the whole thing is a bit of a mess. (See what I did there with the word “identical”? Common usage is a whole other barrel of monkeys, as it usually is.) Fortunately, the Stanford Encyclopedia of Philosophy has an excellent and thorough overview of the subject. I strongly suggest you go read at least the introduction before continuing.

Initially I will limit myself specifically to questions of personal identity, paying particular attention to that concept applied over time, and to the interesting cloning and teleportation cases raised by Derek Parfit. If you’ve read and understood my previous posts, you will likely be able to predict my approach to this problem: it involves applying my theories of abstraction and social negotiation. In this case the end result is very close to that of David Hume, and my primary contribution is to provide a coherent and intuitive way of arriving at what is an apparently absurd conclusion.

The first and most important question is: what, exactly, is personal identity? If we can answer this question in a thorough and satisfying way, then the vast majority of the related questions should be answerable relatively trivially. Hume argued that there is basically no such thing — we are just a bundle of sensations from one moment to the next, without any real existing thing to call the self. This view has been relatively widely ignored (as much as anything written by Hume, at any rate) as generally counter-intuitive. There obviously seems to be some thing that I can refer to as myself; the fact that nobody can agree on whether that thing is my mind, my soul, my body, or some other thing is irrelevant — there’s clearly something.

Fortunately, viewing the world through the lens of abstractions provides a simple way around this confusion. As with basically everything else, the self is an abstraction on top of the lower-level things that make up reality. This is still, unfortunately, relatively counter-intuitive. At the very least it has to be able to answer the challenge of Descartes’ Cogito ergo sum (roughly “I think therefore I am”). If the self is purely an abstraction, then what is doing the thinking about the abstraction? It does not seem reasonable that an abstraction is itself capable of thought — after all, an abstraction is just a mental construct to help us reason, it doesn’t actually exist in the necessary way to be capable of thought.


I wrote the above prelude about three weeks ago, then sat down to work through my solution again and got bogged down in numerous complexities and details (my initial response to the Cartesian challenge was a bit of a cheat, and it took me a while to recognize that). I think I finally have a coherent solution, but it’s no longer as simple as I’d like and is still frankly a bit half-baked, even for me. I ended up drawing a lot on artificial intelligence as an analogy.

So, uh, *cough*, that leaves us in a bit of an interesting situation with respect to this blog, since it’s the first time I get to depart from my “planned” topics which I’d already more-or-less worked out in advance, and start throwing about wild ideas to see what sticks. This topic is already a long one, so it’s definitely going to be split across multiple posts. For now, I’ll leave you with an explicit statement of my conclusion, which hasn’t changed much: living beings, like all other macroscopic objects, are abstractions. This includes oneself. The experiential property (that sense of being there “watching” things happen) is an emergent property due to the complex reflexive interactions of various conscious and subconscious components of the brain. Identity (as much as it is distinct from consciousness proper) is something we apply to others first via social negotiation, and then develop for ourselves by analogy with the identities we have for others.

I realize that’s kinda messy, but this exploratory guesswork is the best part of philosophy. Onwards!