Disclaimer: I don’t necessarily agree with or endorse everything that I link to. I link to things that are interesting and/or thought-provoking. Caveat lector.
Although this one shares a lot with my previous post on Less Wrong.
I just finished writing a post and now I’m writing another one, what madness is this? I’m writing this as kind of a very-very-extended footnote to that post, since I couldn’t figure out how to write actual footnotes in the WordPress editor.
Specifically, I want to explain briefly the way I used the word spandrel in that post, since it is not a common usage, and may in fact be a usage I simply made up. I like it.
Originally, spandrel was an architectural term for the space between an arch and its enclosing frame. More recently it has been borrowed by biologists to mean a biological characteristic that evolved as a byproduct of some other adaptive characteristic, and so may not be adaptive itself.
In my previous post I borrowed it yet again, referring to the idea of intrinsic value as “a spandrel of human cognitive architecture”. The spirit of the definition should now be obvious: it follows the biological one fairly closely, except that I am referring to thoughts and cognitive architectures instead of genes and evolution. You might even call my usage memetic spandrels (as opposed to the genetic spandrels of the biological definition), though that honestly feels a bit stretched.
The most confusing part of this whole thing is that there are fairly obvious ways in which intrinsic values can be adaptive in an evolutionary psychology sense. Let’s try this again.
Go read How An Algorithm Feels From Inside. That central node in the second neural network diagram, the “dangling unit”, is a spandrel of human cognitive architecture. Just as it feels like there’s a leftover question even when you know a falling tree made acoustic vibrations but not auditory experience, it feels like there’s a leftover thing-in-need-of-value even when all your instrumental values have been accounted for.
It’s not as close as I thought. I should know better than to broadcast a blanket statement like that while I’m still reading through material.
This isn’t to say I disagree with the Less Wrong consensus on everything, of course; it still matches my belief system more closely than your average random blog. But still.
Caveat: the chunk below is not well thought out and kind of ranty. Take with salt.
There is a certain kind of intelligence which seems to lead people astray, particularly on the internet, particularly amongst the technically inclined. You will find these people writing long screeds on matters of philosophy, science, and Truth, often on highfalutin blogs like this one (I consider myself a recovering member of this class). They tend to self-identify as (techno-)libertarians, neoreactionaries, “rationalists”, or similar. They tend, on the whole, to be individually extremely intelligent.
This does not prevent them from making idiots of themselves.
The pattern, as it seems to go (from observation of myself as well as public figures like Yudkowsky, Moldbug, Thiel, Graham, and Rhinehart), starts with demanding consistency from one’s value system. It is intolerable, to a mind so capable of understanding how the world works systematically and predictably, that ethics, and other matters of axiology, do not fit the same mold.
Of course the obvious way to achieve ethical consistency is to declare one moral principle higher than any other, and then follow it through to every insane-but-logical conclusion. Sometimes the principle itself is a generally well-regarded one (preference-utilitarianism for Yudkowsky, libertarianism for Thiel and to a lesser extent Graham) while sometimes even that is questionable. Moldbug seems to be working off of some bizarre notion that natural selection somehow grants moral status, and Rhinehart has taken on efficiency as an end in itself.
Regardless of the chosen path, the principle is rigidly and rigorously applied (what else would you expect from highly intelligent systemic thinkers?) until it becomes a self-defeating reductio ad absurdum. And then some. Every bit of it neatly internally consistent and evidence-backed, if you accept the initial axiological premise. And every bit of it defended by an intellect whose identity is tied up in *being* an intellect.
Throw in a tendency to speculate on things they/we know nothing about and you end up with some really scary-weird worldviews, in people who are *absolutely* convinced of both their epistemic correctness and their ethical virtue. Fortunately it’s not like most of these people end up as cult leaders and/or billionaires, right?
In the time since my last post, while trying to solve interesting problems and wandering around the web reading, I stumbled upon two related websites:
As it turns out, while I do not agree word-for-word with everything they promote, it’s *really* darn close. Close enough, as it turns out, that there isn’t much point in writing the remainder of this blog. The occasional tidbit might come along which demands a post, if there’s something I strongly disagree with or some factual or philosophical matter falls outside the scope of Less Wrong’s mission. However, if you want to know what I think on some matter, start with the Less Wrong consensus. The odds are pretty good 🙂
As for what you should do instead of reading my blog now that I’m no longer even keeping up the pretence of intending to post: read HPMOR and Less Wrong. Just go read them, right now, you’ll thank me.
For those curious what I *do* disagree with them on, it is mostly quibbles over philosophical axioms (moral, and some metaphysical/epistemic). This doesn’t affect my models-of-the-world so much as it affects how I respond to those models, and what my preferences are.