
Brexit, Trump, and Capital in the Twenty-First Century

With the recent Brexit vote, and with Trump the presumptive Republican nominee, there have been a number of comments and op-eds comparing the two and talking about the apparently inarticulate rage being expressed by the “unemployed working class” through this political process. The underlying assumption seems to be that sure, maybe these people have the right to be angry at the way they are being failed by the current economic/political system, but isolationism and nationalism are not the answer. It’s not like it actually kind of worked to pull the German economy out of the Great Depression.

But still, let’s take a closer look at why so many people are feeling disenfranchised; maybe by understanding the problem we could, for instance, come up with a damned workable solution. I dream small. This project, of course, takes us right into the arms of Thomas Piketty’s landmark economic work Capital in the Twenty-First Century.

The book’s central thesis is that when the rate of return on capital (r) is greater than the rate of economic growth (g) over the long term, the result is concentration of wealth, and this unequal distribution of wealth causes social and economic instability.
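To make the mechanism concrete, here is a toy simulation of my own (the numbers are made up, not Piketty’s): if a stock of capital compounds at r while national income only grows at g, the capital-to-income ratio climbs steadily, which is exactly the concentration effect the thesis worries about.

```python
# Toy sketch with made-up numbers: capital compounds at r, income grows at g.
# When r > g, the capital-to-income ratio rises without bound.

r = 0.05   # assumed annual return on capital
g = 0.015  # assumed annual growth of national income

capital = 300.0  # arbitrary starting capital stock
income = 100.0   # arbitrary starting national income

for year in range(101):
    if year % 25 == 0:
        print(f"year {year:3d}: capital/income = {capital / income:.1f}")
    capital *= 1 + r
    income *= 1 + g
```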

I disagree with this thesis, at least in part. It did, however, make me think about the problem in a different way. So let’s take a look at capital again, but from a more microeconomic perspective, and with a more Bourdieusian twist.

Personal Capital in the Twenty-First Century

Personal Capital, as I will use it here (not a standard definition, I know), is all the inputs to the function defining an individual’s earnings expectations in the short and medium term. This includes mostly obvious things like labour, education, and money-in-the-bank. Also note that there is a “break-even” level for this capital (kind of a natural poverty line) at which point the individual becomes financially self-sustaining within the economy. Individuals above this line are in a positive feedback loop; they earn more than they need, and since money is a form of personal capital, that increases their expected earnings even more.
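Here is a minimal toy model of that feedback loop (entirely my own construction, with made-up numbers): earnings are a simple function of personal capital, and any surplus over the break-even cost of living gets folded back into capital. Start above the line and capital compounds; start below it and capital drains away.

```python
# Toy model of the break-even feedback loop (made-up numbers).

BREAK_EVEN = 30.0    # assumed annual cost of living: the "natural poverty line"
EARNINGS_RATE = 0.4  # assumed earnings per unit of personal capital

def simulate(capital: float, years: int = 10) -> None:
    for year in range(1, years + 1):
        earnings = EARNINGS_RATE * capital
        surplus = earnings - BREAK_EVEN        # negative when below break-even
        capital = max(capital + surplus, 0.0)  # surplus (or deficit) feeds back
        print(f"year {year:2d}: capital = {capital:6.1f}")

simulate(capital=90.0)  # above break-even: the positive feedback loop
simulate(capital=60.0)  # below break-even: a slow slide downwards
```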

The first part of my thesis, such as it is, is that people in this state tend to be happy. Even if you’re not rich in any absolute or relative sense, if you’re above that break-even line, growth occurs and your life gradually improves. I argue that the current political unrest is due to the fact that more and more people are dropping below this line. The question becomes: why?

The Relative Value of Labour

Consider the different forms of personal capital as goods in a meta-market, trading against each other. From this perspective, what has changed over the last couple of generations since the Second World War? The answer is pretty obvious: the value of unskilled labour has dropped precipitously. Technological automation and a glut of supply from third-world countries have conspired to drive the value of unskilled labour into the basement. The value of a college degree has also dropped, though not as sharply, and hardly at all in some fields.

This makes it pretty obvious why so many people today are dropping below the break-even line of personal capital; it’s not that they have less capital, but that the capital they have (labour potential and education) is simply less valuable.

The Boomer-Millennial Divide

It has become almost a truism on the internet that baby boomers will whine and complain about how kids today would be doing just fine if they were willing to put in an honest day’s work, and how it’s their own damned fault. Millennials of course put the blame on the boomers; they are working hard, and would be doing just fine if the boomers hadn’t ruined politics/the environment/the economy/the world. This conflict is actually a fairly natural one.

In the age of the boomers, labour was quite valuable. A confluence of factors drove the value of labour up to the point where individuals could sit comfortably at or above the break-even point, just on the value of their natural labour. Boomers really could make a healthy, happy, constantly-improving life for themselves just by working hard. They see millennials not succeeding and assume that they can’t be working as hard.

Of course in today’s market, that assumption is no longer true. Labour is so much less valuable that without some extra booster shot of personal capital (maybe a trust fund, or an expensive education in certain fields like hard science), the value of an individual’s natural labour is not enough to let them break even. They kill themselves to make ends meet, but with no hope of their situation ever really improving (they don’t have the positive feedback loop of capital growth working for them), it is no wonder they become disenfranchised. Those presently losing their jobs to foreign labour are in the same boat.

Finding Solutions

It seems, then, that the solution to this recent unrest would be to bring as many people as possible back above this break-even point, giving them something to work for and some hope for the future. But what might solutions for increasing Personal Capital actually look like? Well, to me the obvious ones fall into three categories:

  • Drive up the value of the capital that people already have (primarily labour).
  • Provide people with additional capital such as education.
  • Reduce the break-even point so that existing levels of personal capital are sufficient.

With all this as perspective, the isolationist/nationalist solution isn’t actually all that insane. Putting stringent limits on immigration and trade greatly reduces the supply of labour, driving its value back up. Labour is the one piece of capital that everybody already has, rich or poor. It’s not really clear to me whether this would be sufficient on its own to return us to the boomers’ golden age, but it would certainly move the needle in the right direction. The “hidden” cost, of course, is all the other terrible side-effects which tend to come with isolationist/nationalist policies. Let’s try and avoid a third world war, shall we?

So what other solutions are there that are less likely to destroy the world? A basic income is a pretty straightforward way to provide people with additional capital. No long-term, large-scale system has yet been implemented, but the pilots look promising. Reducing the cost of higher education is also an obvious way to distribute additional (educational) capital.

Unfortunately, it’s not obvious to me how to pay for many of these socialist solutions. Since tackling the second point (providing more capital) is expensive and politically infeasible, and tackling the first point (driving up the value of labour) tends to actually hurt the economy globally in the long run, what about the third point? How do we reduce the break-even point of personal capital?

I don’t know.

Inequality, Brexit, and Trump

Bringing this discussion back to the original topics, I want to make a couple of points.

My primary point of disagreement with Piketty and many modern Bernie-Sanders-esque leftists is that it is not unequal distribution of wealth that is driving the present social unrest. It is quite possible for vast inequality to exist in a system where the majority of people are still above the break-even point of personal capital. I predict that such a scenario would be peaceful, and that people would be generally happy.

It is also quite popular among these same modern leftists to look at the people voting for Brexit and Trump and assume they must be insane to be voting for policies which are not in their own best interests. I disagree again. The policies being presented here are legitimate solutions to the problem these voters face. You may disagree with the trade-offs they are willing to make, but until you’ve walked a mile in their shoes, judge not.

Finally, what do I propose we do about it? What political and economic stance does this essay actually lead me to endorse? Again, I don’t know. There is no clear-cut winning answer that I have yet found.

I’m going to keep looking.

More Wrong (AKA Oops)

My agreement with the Less Wrong consensus is not as close as I thought. I should know better than to broadcast a blanket statement like that while I’m still reading through the material.

This isn’t to say I disagree with the Less Wrong consensus on everything, of course; it still matches my belief system more closely than your average random blog. But still.

Caveat: the chunk below is not well thought out and kind of ranty. Take with salt.

There is a certain kind of intelligence which seems to lead people astray, particularly on the internet, particularly amongst the technically-inclined. You will find these people writing long screeds on matters of philosophy, science, and Truth, often on high-falutin blogs like this one (I consider myself a recovering member of this class of people). They tend to self-identify as (techno-)libertarians, neoreactionaries, “rationalists”, or other similar terms. They tend, on the whole, to be individually extremely intelligent.

This does not prevent them from making idiots of themselves.

The pattern, as it seems to go (from observation of myself as well as public figures like Yudkowsky, Moldbug, Thiel, Graham, and Rhinehart), starts with demanding consistency from one’s value system. It is intolerable, to a mind so capable of understanding how the world works systematically and predictably, that ethics and other matters of axiology do not fit the same mold.

Of course the obvious way to achieve ethical consistency is to declare one moral principle higher than any other, and then follow it through to every insane-but-logical conclusion. Sometimes the principle itself is a generally well-regarded one (preference-utilitarianism for Yudkowsky, libertarianism for Thiel and to a lesser extent Graham) while sometimes even that is questionable. Moldbug seems to be working off of some bizarre notion that natural selection somehow grants moral status, and Rhinehart has taken on efficiency as an end in itself.

Regardless of the chosen path, the principle is rigidly and rigorously applied (what else would you expect from highly intelligent systemic thinkers?) until it becomes a self-defeating reductio ad absurdum. And then some. Every bit of it is neatly internally consistent and evidence-backed, if you accept the initial axiological premise. And every bit of it is defended by an intellect whose identity is tied up in *being* an intellect.

Throw in a tendency to speculate on things they/we know nothing about and you end up with some really scary-weird worldviews, in people who are *absolutely* convinced of both their epistemic correctness and their ethical virtue. Fortunately it’s not like most of these people end up as cult leaders and/or billionaires, right?

Less Wrong

In the time since my last post, while trying to solve interesting problems and wandering around the web reading, I stumbled upon two related websites: Less Wrong and HPMOR (Harry Potter and the Methods of Rationality).

As it turns out, while I do not agree word-for-word with everything they promote, it’s *really* darn close. Close enough, in fact, that there isn’t much point in writing the remainder of this blog. The occasional tidbit might come along which demands a post, if there’s something I strongly disagree with or some factual or philosophical matter which falls outside the scope of Less Wrong’s mission. However, if you want to know what I think on some matter, start with the Less Wrong consensus. The odds are pretty good🙂

As for what you should do instead of reading my blog now that I’m no longer even keeping up the pretence of intending to post: read HPMOR and Less Wrong. Just go read them, right now, you’ll thank me.


For those curious what I *do* disagree with them on, it is mostly quibbles on philosophical axioms (moral, and some metaphysical/epistemic). This doesn’t affect models-of-the-world so much as how I respond to those models, and what my preferences are.

I’m Back, I Swear

My previous post started with

Whoops, it’s been over a month since I finished my last post

and ended with

Hopefully the next update comes sooner!

Well that’s depressing. At least I managed to keep the gap down to under a year. Barely.

As it turns out, indulging in outrageous philosophical hand-waving has not proven a particularly motivating way to write. So let’s mostly ignore my “brief detour” on constructing the mind, and go back to the original question, which basically boiled down to answering the Cartesian challenge to Hume. Frankly, I don’t have an answer. Self-awareness is one of those things where I just don’t even know where to start. So I’m going to ignore it (for now) and sketch out the rest of my solution in broad strokes anyways.

First a refresher: I’m still pretty sure the brain is an open, recursively modelling subsystem of reality. It does this by dealing in patterns and abstractions. If we ignore self-awareness, then a fairly solipsistic view presents itself: the concept of a person (in particular other people) is just a really handy abstraction we use to refer to a particular pattern that shows up in the world around us: biological matter arranged in the shape of a hominid with complex-to-the-point-of-unpredictable energy inputs and outputs.

Of course what exactly constitutes a person is subject to constant social negotiation (see, recently, the abortion debate). And identity is the same way. Social theorists (in particular feminists) have recognized for a while that gender is in effect a social construct. And while some broad strokes of identity may be genetically determined, it’s pretty obvious that a lot of the details are also social constructs. I call you by a certain name because that’s the name everybody calls you, not because it’s some intrinsic property of the abstraction I think of as you.

Taking this back to personhood and identity, the concept of self and self-identity falls neatly out of analogy with what we’ve just discussed. The body in which my brain is located has all the same properties that get abstracted into “person” in the third-person case. This body must be a person too, and by analogy must also have an identity. That is me.

Throw in proprioception and other sensory input, and somehow that gives you self-awareness. Don’t ask me how.


 

My original post actually started with reference to Parfit and his teleportation cases, so for completeness’s sake I’ll spell out those answers here as well: as with previous problems of abstraction, there is never any debate about what happens to the underlying reality in all those weird cases. The only debate is over what we call the resulting abstractions, and that is both arbitrary and subject to social negotiation.

Until next time!

edit: I realized after posting that the bit about Parfit at the end didn’t really spell out as much as I wanted to. To be perfectly blunt: identity is a socially negotiated abstraction. In the case that a teleporter mistakenly duplicates you, which one of the resulting people is really you will end up determined by which one people treat as you. There’s still no debate about the underlying atoms.

Constructing the Mind, Part 2

Whoops, it’s been over a month since I finished my last post (life got in the way) and so now I’m going to have to dig a bit to figure out where I wanted to go with that. Let’s see…

We ended up with the concept of a mechanical brain mapping complex inputs to physical reactions. The next obviously useful layer of complexity is for our brain to store some internal state, permitting the same inputs to produce different outputs based on the current situation. Of course this state information is going to be effectively analogue in a biological system, implemented via chemical balances. If this sounds familiar, it really should: it’s effectively a simple emotional system.
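As a concrete (and entirely hypothetical) sketch of what I mean, imagine something like this: the same stimulus produces different reactions depending on an analogue internal level standing in for a chemical balance.

```python
# Hypothetical sketch: a stimulus-response mapping modulated by internal state.

class StatefulBrain:
    def __init__(self) -> None:
        self.hunger = 0.0  # toy "chemical balance", drifts between 0 and 1

    def tick(self) -> None:
        self.hunger = min(self.hunger + 0.2, 1.0)  # state changes over time

    def react(self, stimulus: str) -> str:
        if stimulus == "food":
            if self.hunger > 0.5:
                self.hunger = 0.0
                return "eat"
            return "ignore"
        return "no reaction"

brain = StatefulBrain()
for step in range(5):
    print(step, brain.react("food"))  # same input, different output over time
    brain.tick()
```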

The next step is strictly Pavlovian. With the presence of one form of internal state memory, the growth of another, complementary layer is not far-fetched. Learning that one input precedes a second input with high probability, and creating a new reaction to the first input, is predictably mechanical, though for now mostly beyond what modern AI has been able to accomplish even ignoring tokenized input. But here we must also tie back to tokenized input (which I discussed in the previous post). As the complexity of tokenized input grows, so does the abstracting power of the mind able to recognize the multitude of shapes, colours, sounds, etc. and turn them into the ideas of “animal” or “tree” or what have you. When this abstracting power is combined with simple memory and turned back on the tokens it is already producing, we end up with something that is otherwise very hard to construct: mimicry.
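Before getting to mimicry, here is a toy sketch of just that conditioning step (my own hand-waving in code form, obviously nothing like real neurons): track how often one stimulus is followed by another, and once the conditional probability is high enough, attach the second stimulus’s reaction to the first.

```python
# Toy associative conditioning: "bell" reliably precedes "food", so the
# reaction to food gets attached to the bell as well.

from collections import Counter, defaultdict

THRESHOLD = 0.8                 # how reliably A must predict B
learned = {"food": "salivate"}  # innate reflexes, to be extended by learning
seen = Counter()                # how often each stimulus occurred
follows = defaultdict(Counter)  # follows[a][b] = times b came right after a

def observe(prev: str, curr: str) -> None:
    seen[prev] += 1
    follows[prev][curr] += 1
    if curr in learned and follows[prev][curr] / seen[prev] >= THRESHOLD:
        learned[prev] = learned[curr]  # conditioning: prev now triggers curr's reaction

stimuli = ["bell", "food"] * 5
for a, b in zip(stimuli, stimuli[1:]):
    observe(a, b)

print(learned.get("bell"))  # -> salivate
```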

In order for an animal to mimic the behaviour of another, it must be able to tokenize its sense input in a relevant way, draw the abstract parallel between the animal it sees and itself, store that abstract process in at least a temporary way, and apply it to new situations. This is an immensely complex task, and yet it falls naturally out of the abilities I have so far laid out. (If you couldn’t tell, this is where I leave baseless speculation behind and engage in outrageous hand-waving).

And now I’m out of time, just as I’m getting back in the swing of things. Hopefully the next update comes sooner!

A Brief Detour: Constructing the Mind, Part 1

I now take a not-so-brief detour to lay out a theory of brain/mind, from a roughly evolutionary point of view, that will lay the foundation for my interrupted discussion of selfhood and identity. I tied in several related problems when working this out, in no particular order:

  • The brain is massively complex; given an understanding of evolution, what is at least one potential path for this complexity to grow while still being adaptively useful at every step?
  • “Strong” Artificial Intelligence as a field has failed again and again with various approaches; why?
  • All the questions of philosophy of identity I mentioned in my previous post.
  • Given a roughly physicalist answer to the mind-body problem (which I guess I’ve implied a few times but never really spelled out), how do you explain the experiential nature of consciousness?

Observant readers may note that I briefly touched this subject once before. What follows here is a much longer, more complex exposition but follows the same basic ideas; I’ve tweaked a few things and filled in a lot more blanks, but the broad approach is roughly the same.


Let’s start with the so-called “lizard hindbrain”, capable only of immediate, instinctual reactions to sensory input. This includes stuff like automatically pulling away your hand when you touch something hot. AI research has long been able to replicate this trivially; it’s a pretty simple mapping of inputs to reactions. Not a whole lot to see here: a very basic and (importantly) mechanical process. Even the most ardent dualists would have trouble arguing that creatures with only this kind of brain have something special going on inside. This lizard hindbrain is a good candidate for our “initial evolutionary step”; all it takes is a simple cluster of nerve fibres and voila.
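For what it’s worth, a sketch of this layer is barely more than a lookup table (a toy of my own, obviously, not a claim about real neurology):

```python
# The "lizard hindbrain" as a bare mapping from sensory input to reaction.

REFLEXES = {
    "heat_on_hand": "pull hand away",
    "bright_light": "blink",
    "loud_noise": "startle",
}

def hindbrain(stimulus: str) -> str:
    return REFLEXES.get(stimulus, "no reaction")

print(hindbrain("heat_on_hand"))  # -> pull hand away
```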

The next step isn’t so much a discrete step as an increase, specifically in the complexity of inputs recognized. While it’s easy to understand and emulate a rule matching “pain”, it’s much harder to understand and emulate a rule matching “the sight of another animal”. In fact it is this (apparently) simple step where a lot of hard AI falls down, because the pattern matching required to turn raw sense data into “tokens” (objects etc.) is incredibly difficult, and without these tokens the rest of the process of consciousness doesn’t really have a foundation. Trying to build a decision-making model without tokenized sense input seems to me a bit like trying to build an airplane out of cheese: you just don’t have the right parts to work with.

So now we have a nerve cluster that recognizes non-trivial patterns in sense input and triggers physical reactions. While this is something that AI has trouble with, it’s still trivially a mechanical process, just a very complex one. The next step is perhaps less obviously mechanical, but this post is long enough, so you’ll just have to wait for it🙂

Abstract Identity through Social Interaction

Identity is a complicated subject, made more confusing by the numerous different meanings it takes on in the numerous different fields where we use the term. In mathematics, the term identity already takes on several different uses, but fortunately those uses are already rigorously defined and relatively uncontroversial. In the social sciences (including psychology, etc.) identity is something entirely different, and the subject of ongoing debate and research. In philosophy, identity refers to yet a third concept. While all of these meanings bear some relation to one another, it’s not at all obvious that they’re actually identical, so the whole thing is a bit of a mess. (See what I did there with the word “identical”? Common usage is a whole other barrel of monkeys, as it usually is.) Fortunately, the Stanford Encyclopedia of Philosophy has an excellent and thorough overview of the subject. I strongly suggest you go read at least the introduction before continuing.

Initially I will limit myself to the questions of personal identity, paying particular attention to that concept applied over time, and to the interesting cloning and teleportation cases raised by Derek Parfit. If you’ve read and understood my previous posts, you will likely be able to predict my approach to this problem: it involves applying my theories of abstraction and social negotiation. In this case the end result is very close to that of David Hume, and my primary contribution is to provide a coherent and intuitive way of arriving at what is an apparently absurd conclusion.

The first and most important question is: what, exactly, is personal identity? If we can answer this question in a thorough and satisfying way, then the vast majority of the related questions should be answerable relatively trivially. Hume argued that there is basically no such thing: we are just a bundle of sensations from one moment to the next, without any real existing thing to call the self. This view has been relatively widely ignored (as much as anything written by Hume, at any rate) as generally counter-intuitive. There obviously seems to be some thing that I can refer to as myself; the fact that nobody can agree on whether that thing is my mind, my soul, my body, or some other thing is irrelevant, there’s clearly something.

Fortunately, viewing the world through the lens of abstractions provides a simple way around this confusion. As with basically everything else, the self is an abstraction on top of the lower-level things that make up reality. This is still, unfortunately, relatively counter-intuitive. At the very least it has to be able to answer the challenge of Descartes’ Cogito ergo sum (roughly “I think therefore I am”). If the self is purely an abstraction, then what is doing the thinking about the abstraction? It does not seem reasonable that an abstraction is itself capable of thought; after all, an abstraction is just a mental construct to help us reason, and it doesn’t actually exist in the way necessary to be capable of thought.


 

I wrote the above prelude about three weeks ago, then sat down to work through my solution again and got bogged down in numerous complexities and details (my initial response to the Cartesian challenge was a bit of a cheat, and it took me a while to recognize that). I think I finally have a coherent solution, but it’s no longer as simple as I’d like and is still, frankly, a bit half-baked, even for me. I ended up drawing a lot on artificial intelligence as an analogy.

So, uh, *cough*, that leaves us in a bit of an interesting situation with respect to this blog, since it’s the first time I get to depart from my “planned” topics which I’d already more-or-less worked out in advance, and start throwing about wild ideas to see what sticks. This topic is already long, so it’s definitely going to be split across multiple posts. For now, I’ll leave you with an explicit statement of my conclusion, which hasn’t changed much: living beings, like all other macroscopic objects, are abstractions. This includes oneself. The experiential property (that sense of being there “watching” things happen) is an emergent property due to the complex reflexive interactions of various conscious and subconscious components of the brain. Identity (as much as it is distinct from consciousness proper) is something we apply to others first via social negotiation, and then develop for ourselves via analogy with the identities we have for others.

I realize that’s kinda messy, but this exploratory guesswork is the best part of philosophy. Onwards!