An Atheist’s Flowchart, Part 5: The Psychology of Belief

The third and final pillar of my atheistic treatise is the one I called “via explanation” way back in January. Whereas the first two pillars were pretty explicitly philosophical, this one tends to feel a bit more scientific. It actually comes from a counter-argument that I heard once from a theist, which went (very briefly) something like this: If god doesn’t exist, then how do you explain the millions of people who believe in him?

Different theists have made a bunch of variations on this argument over the years, but this particular one struck me because it’s actually fairly empirical. It is uncontroversial that there are millions (arguably billions) of believers in god. While the act of believing does not in and of itself prove anything, the fact of that belief still requires explanation, and “god actually exists” could potentially be the simplest such explanation. This argument is weakened somewhat up front by the fact that god is a terrible explanation for things in general, but it’s at least plausible on its face.

The true counter, and the heart of my third pillar, is that science does in fact have an excellent explanation for why people believe in god. And that linked book was published over fifteen years ago; science has continued to clarify more pieces of the puzzle since then.

So. How does this become not just a counter, but an actual self-supporting argument for atheism? The transformation happens because you pretty much have to believe in science, and when you believe in science you get this full explanation “for free”. With this explanation in hand, it would be incredibly weird for god to actually exist, but for people to believe in him for entirely unrelated reasons. That kind of coincidence boggles the mind, and not in a blind watchmaker sort of way.

By way of analogy, imagine, if you will, a mouse that has been placed into a totally isolated box and injected with a mysterious serum. The serum causes the mouse to develop human-level intelligence out of nowhere, but of course the mouse cannot see, smell, or hear anything outside of its special box. What are the odds that the mouse, through pure invention, manages to end up believing in an outside world even remotely similar to the real one? That of all the infinity of possible worlds imaginable by a mouse, it actually chooses the right one without any input whatsoever?

We are all the mouse, and we have every reason to believe that the gods we’ve constructed in our minds are nothing more than the spandrels of psychology.

A Brief Detour: Constructing the Mind, Part 1

I now take a not-so-brief detour to lay out a theory of brain/mind, from a roughly evolutionary point of view, that will serve as the foundation for my interrupted discussion of self-hood and identity. I tied together several related problems while working this out, in no particular order:

  • The brain is massively complex; given an understanding of evolution, what is at least one potential path for this complexity to grow while still being adaptively useful at every step?
  • “Strong” Artificial Intelligence as a field has failed again and again with various approaches; why?
  • All the questions of philosophy of identity I mentioned in my previous post.
  • Given a roughly physicalist answer to the mind-body problem (which I guess I’ve implied a few times but never really spelled out), how do you explain the experiential nature of consciousness?

Observant readers may note that I briefly touched on this subject once before. What follows here is a much longer, more complex exposition of the same basic ideas; I’ve tweaked a few things and filled in a lot more blanks, but the broad approach is roughly the same.


Let’s start with the so-called “lizard hindbrain”, capable only of immediate, instinctual reactions to sensory input. This includes things like automatically pulling your hand away when you touch something hot. AI research has long been able to trivially replicate this; it’s a pretty simple mapping of inputs to reactions. Not a whole lot to see here, just a very basic and (importantly) mechanical process. Even the most ardent dualists would have trouble arguing that creatures with only this kind of brain have something special going on inside. This lizard hindbrain is a good candidate for our “initial evolutionary step”; all it takes is a simple cluster of nerve fibres and voilà.
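
To make that concrete, here’s a toy sketch in Python. Every stimulus and reaction name below is invented for illustration; the point is the structure, a fixed, stateless lookup from input to reflex:

```python
# A toy "lizard hindbrain": a fixed, stateless mapping from
# sensory inputs to reflex reactions. All names are invented.

REFLEXES = {
    "heat_on_hand": "pull_hand_away",
    "sharp_pain": "pull_hand_away",
    "loud_noise": "startle",
    "bright_light": "blink",
}

def react(stimulus: str) -> str | None:
    # No memory, no learning, no deliberation: input in, reaction out.
    return REFLEXES.get(stimulus)

print(react("heat_on_hand"))   # -> pull_hand_away
print(react("gentle_breeze"))  # -> None (no instinct fires)
```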

The next step isn’t so much a discrete step as an increase, specifically in the complexity of inputs recognized. While it’s easy to understand and emulate a rule matching “pain”, it’s much harder to understand and emulate a rule matching “the sight of another animal”. In fact it is this (apparently) simple step where a lot of hard AI falls down, because the pattern matching required to turn raw sense data into “tokens” (objects etc.) is incredibly difficult, and without these tokens the rest of the process of consciousness doesn’t really have a foundation. Trying to build a decision-making model without tokenized sense input seems to me a bit like trying to build an airplane out of cheese: you just don’t have the right parts to work with.
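
Here’s a hedged sketch of that gap in the same toy style. Matching “pain” is a one-line threshold check; matching “the sight of another animal” needs a whole pipeline that turns raw data into tokens first. Every stage and threshold below is a trivial stand-in I’ve invented for what is, in reality, an enormously difficult problem:

```python
# The easy case: "pain" is just a raw signal crossing a threshold.
PAIN_THRESHOLD = 0.8  # invented value

def matches_pain(nerve_signal: float) -> bool:
    return nerve_signal > PAIN_THRESHOLD

# The hard case: "the sight of another animal" only makes sense
# after raw sense data has been turned into tokens. Each stage
# below is a trivial stand-in for a very hard real problem.

def detect_edges(pixels: list[list[float]]) -> list[tuple[int, int]]:
    # Stand-in for low-level feature extraction.
    return [(r, c) for r, row in enumerate(pixels)
                   for c, val in enumerate(row) if val > 0.5]

def group_into_shapes(edges: list[tuple[int, int]]) -> list[str]:
    # Stand-in for grouping features into candidate objects.
    return ["blob"] if edges else []

def classify(shapes: list[str]) -> list[str]:
    # Stand-in for labelling objects: the "tokens" the rest of
    # the decision-making process needs as its foundation.
    return ["animal" if s == "blob" else "unknown" for s in shapes]

def matches_animal(pixels: list[list[float]]) -> bool:
    return "animal" in classify(group_into_shapes(detect_edges(pixels)))

frame = [[0.0, 0.9], [0.7, 0.1]]  # fake 2x2 "retina"
print(matches_pain(0.95))    # -> True
print(matches_animal(frame)) # -> True (our stand-ins are credulous)
```

Note that even the hard case here is still mechanical through and through; the difficulty is in the engineering, not the metaphysics.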

So now we have a nerve cluster that recognizes non-trivial patterns in sense input and triggers physical reactions. While this is something that AI has trouble with, it’s still trivially a mechanical process, just a very complex one. The next step is perhaps less obviously mechanical, but this post is long enough, so you’ll just have to wait for it 🙂