
Worrying

On Sunday evening, I sat down and wrote a thousand words on this blog baring my soul, confessing my deepest secrets and revealing at least two deeply personal things that I’d never told anyone before. As you may deduce from the fact that you haven’t read it: I never hit “publish”. In hindsight, at least some of it was a tad melodramatic, a sin of which I am more than occasionally guilty. But the essence was right.

Now, of course, I’m sitting here two days later writing a very confusing meta-post about something that none of you have read, or likely ever will. You’re welcome. Really, as the title would suggest, I want to talk about worry, since I think it’s the thread that underlay my unpublished post.

I worry a lot (this is a stunning revelation to anyone who knows me in real life, I’m sure).

There are of course a lot of posts on the internet already about dealing with worry. I don’t want to talk about that, even though I could probably do with reading a few more of them myself. Instead, I want to ramble for a while about the way that worries change our behaviour to create or prevent the very things we worry about. This is the weird predictive causal loop of the human brain, so it should be fun.

First off, some evolutionary psychology, because that always goes well. From a strictly adaptive perspective, we would expect that worry would help us avoid the things we worry about, and indeed the mechanism here is pretty obvious. When we worry, it makes us turn something over in our head, looking for solutions, exploring alternatives. Perhaps we stumble upon an option we hadn’t considered, or we realize some underlying cause that lets us avoid the worry-inducing problem altogether. The people who worry like this have some advantage over the ones who don’t.

But of course, nothing is ever perfectly adaptive. The obvious cost is the immediate mental one: worrying about tigers is less than helpful if in doing so you distractedly walk off a cliff. The slightly more subtle concern is that we don’t always worry about the right things. Every time we choose to worry about some future event we are inherently making a prediction: that the event is probable enough and harmful enough to be worth worrying over. But humans make crappy predictions all the time. It’s a safe bet that some of the things people worry about just aren’t worth the extra mental effort.
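
To put that prediction in rough expected-value terms (my framing here, not anything formal from the psychology literature), worry only pays off when probability times harm exceeds the mental cost of the worrying itself. A toy sketch, with entirely invented numbers:

    # A back-of-envelope framing of when worry is "worth it" (purely
    # illustrative; the numbers below are invented, not empirical).
    def worth_worrying(p_event, harm, mental_cost):
        # Worry pays off only if the expected harm avoided exceeds its cost.
        return p_event * harm > mental_cost

    print(worth_worrying(p_event=0.30, harm=10.0, mental_cost=1.0))   # True
    print(worth_worrying(p_event=0.001, harm=10.0, mental_cost=1.0))  # False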

These mis-worries still affect our behaviour though. We turn scenarios over in our minds, however unlikely or harmless, and we come up with solutions. We make changes to our behaviour, to our worldview. We make choices which would otherwise be suboptimal. Sometimes, in doing so, we create more problems for ourselves to worry about. These effects are bad enough, but even they are not the worst of what worrying can do to us.

The most terrible worries are the meta-worries: worries about our own emotional state. If you start to worry that maybe you’re emotionally fragile, then you’ve suddenly proved yourself right! The constant worry over your emotional fragility has made you fragile, and reinforced itself at the same time. These worries aren’t just maladaptive; they’re positive feedback loops which can rapidly spiral out of control.
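
To make the feedback-loop structure concrete, here is a toy simulation (a sketch only; the quantities and coefficients are invented for illustration, not drawn from any real psychological model):

    # A toy positive feedback loop: worrying about fragility increases
    # fragility, which in turn produces more worry. All numbers invented.
    def simulate_meta_worry(fragility=0.1, gain=1.5, steps=10):
        for step in range(steps):
            worry = gain * fragility    # worry scales with perceived fragility
            fragility += 0.1 * worry    # worrying itself makes you more fragile
            print(f"step {step}: fragility = {fragility:.2f}")

    simulate_meta_worry()  # fragility grows ~15% per step: a runaway spiral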

Even with all of these terrible things that can come from mis-worry, we can still make the bad, hand-wavy assumption that, historically at least, worry has been more adaptive than not, else we wouldn’t have it. But certainly in the modern age, there is a plausible argument that worry is doing us far more harm than good. Instead of worrying about tigers, and cliffs, and what we’re going to eat tomorrow, we worry about sports teams, taxes, and nuclear war with North Korea. (If you’re me, you worry about all of the above, tigers included, and you also worry about that girl you think is cute, and you meta-worry about all your worries, and then you worry over how to stop meta-worrying, and then your head explodes.)

For about three years now I’ve been actively fighting my mis-worries (a.k.a. my anxieties) kind of one at a time, as I realized they were hurting me. This has involved regular visits to a therapist during some periods, and has been a generally successful endeavour. Despite this, I am not where I want to be, and in some respects my meta-anxieties have actually grown. So, in the grand tradition of doing bad science on yourself in order to avoid ethics boards, I am going to run an experiment. The details are secret. Let’s see how it goes.


An Atheist’s Flowchart, Part 5: The Psychology of Belief

The third and final pillar of my atheistic treatise is the one I called “via explanation” way back in January. Whereas the first two pillars were pretty explicitly philosophical, this one tends to feel a bit more scientific. It actually comes from a counter-argument that I heard once from a theist, which went (very briefly) something like this: If god doesn’t exist, then how do you explain the millions of people who believe in him?

Different theists have made a bunch of variations on this argument over the years, but this particular one struck me because it’s actually a fairly empirical argument. It is uncontroversial that there are millions (arguably billions) of believers in god. While the act of believing does not in and of itself prove anything, the fact of the belief requires explanation, and “god actually exists” could potentially be the simplest explanation of that fact. This argument is weakened somewhat up front by the fact that god is a terrible explanation for things in general, but it’s at least plausible on its face.

The true counter, and the heart of my third pillar, is that science does in fact have an excellent explanation for why people believe in god. And that linked book was published over fifteen years ago; science has continued to clarify more pieces of the puzzle since then.

So. How does this become not just a counter, but an actual self-supporting argument for atheism? The transformation happens because you pretty much have to believe in science, and when you believe in science you get this full explanation “for free”. With this explanation in hand, it would be incredibly weird for god to actually exist and yet for people to believe in him for entirely unrelated reasons. That kind of coincidence boggles the mind, and not in a blind watchmaker sort of way.

By way of analogy, imagine, if you will, a mouse that has been placed into a totally isolated box and injected with a mysterious serum. The serum causes the mouse to develop human-level intelligence out of nowhere, but of course the mouse cannot see, smell, or hear anything outside of its special box. What are the odds that the mouse, through pure invention, manages to end up believing in an outside world even remotely similar to the real one? That of all the infinity of possible worlds imaginable by a mouse, it actually chooses the right one without any input whatsoever?

We are all the mouse, and we have every reason to believe that the gods we’ve constructed in our minds are nothing more than the spandrels of psychology.

A Brief Detour: Constructing the Mind, Part 1

I now take a not-so-brief detour to lay out a theory of brain/mind, from a roughly evolutionary point of view, that will lay the foundation for my interrupted discussion of self-hood and identity. I tied together several related problems while working this out, in no particular order:

  • The brain is massively complex; given an understanding of evolution, what is at least one potential path for this complexity to grow while still being adaptively useful at every step?
  • “Strong” Artificial Intelligence as a field has failed again and again with various approaches; why?
  • All the questions of philosophy of identity I mentioned in my previous post.
  • Given a roughly physicalist answer to the mind-body problem (which I guess I’ve implied a few times but never really spelled out), how do you explain the experiential nature of consciousness?

Observant readers may note that I briefly touched on this subject once before. What follows here is a much longer, more complex exposition built on the same basic ideas; I’ve tweaked a few things and filled in a lot more blanks, but the broad approach is roughly the same.


Let’s start with the so-called “lizard hindbrain”, capable only of immediate, instinctual reactions to sensory input. This includes stuff like automatically pulling away your hand when you touch something hot. AI research has long been able to trivially replicate this; it’s a pretty simple mapping of inputs to reactions. Not a whole lot to see here: a very basic and (importantly) mechanical process. Even the most ardent dualists would have trouble arguing that creatures with only this kind of brain have something special going on inside. This lizard hindbrain is a good candidate for our “initial evolutionary step”; all it takes is a simple cluster of nerve fibres and voilà.
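
As a concrete (if cartoonish) sketch of just how mechanical this is, here is the whole idea in a few lines of Python; the stimuli and reactions are invented placeholders, not anything from the AI literature:

    # A minimal "lizard hindbrain": a fixed, mechanical mapping from
    # sensory input straight to reaction. All entries here are invented.
    REFLEXES = {
        "heat_on_hand": "withdraw_hand",
        "looming_shadow": "freeze",
        "bitter_taste": "spit_out",
    }

    def react(stimulus):
        # No memory, no deliberation: input goes in, reaction comes out.
        return REFLEXES.get(stimulus)

    print(react("heat_on_hand"))  # -> withdraw_hand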

The next step isn’t so much a discrete step as an increase, specifically in the complexity of inputs recognized. While it’s easy to understand and emulate a rule matching “pain”, it’s much harder to understand and emulate a rule matching “the sight of another animal”. In fact it is at this (apparently) simple step that a lot of hard AI falls down, because the pattern matching required to turn raw sense data into “tokens” (objects, etc.) is incredibly difficult, and without these tokens the rest of the process of consciousness doesn’t really have a foundation. Trying to build a decision-making model without tokenized sense input seems to me a bit like trying to build an airplane out of cheese: you just don’t have the right parts to work with.
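
To show where the difficulty actually lives, here is a sketch extending the reflex example above (the function names and token labels are hypothetical): the rule itself stays trivial, but it presupposes a tokenizer that nobody has figured out how to write simply.

    # The reflex rule is still trivial once you have tokens...
    def react_to_tokens(tokens):
        if "predator" in tokens:
            return "flee"
        return None

    # ...but producing the tokens is the hard part. Turning raw sense data
    # (say, a grid of pixel intensities) into objects like "predator" is
    # the pattern-matching step where hard AI has historically fallen down.
    def tokenize(raw_pixels):
        raise NotImplementedError("this is the hard part")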

So now we have a nerve cluster that recognizes non-trivial patterns in sense input and triggers physical reactions. While this is something that AI has trouble with, it’s still trivially a mechanical process, just a very complex one. The next step is perhaps less obviously mechanical, but this post is long enough, so you’ll just have to wait for it 🙂