The Dilemma of the Modern Romantic

Much ink has been spilled on how what people actually seem to want out of their relationships doesn’t necessarily match what society tells them is right. For better or worse, many people are happiest in a traditional, heavily gendered romantic relationship, despite the ongoing revolution in women’s rights. The same pattern shows up in the porn industry, if you’re not convinced by the usual scientific studies. There’s just something about traditional gender roles that men and women find attractive.

Modern feminism is an extraordinarily complicated thing, with a number of subtly different interpretations. For all that, it’s easy to see how the things it says about gender roles, power dynamics, and ethics could be at odds with this desire for a traditional relationship. To take just one example, is it possible for a relationship to be balanced or fair when one partner is wholly responsible for the finances? Money is a tangible form of power, so any relationship with unequal financial control is fundamentally one with unequal power. Is that ethical?

The typical response to this problem is to point out that feminism is really about freedom of choice and consent, and doesn’t actually prescribe a particular lifestyle or relationship format. You can do what you want, in this version, as long as it’s actually what you want. Do you enjoy bondage or dominance play? Go for it, as long as it’s safe, sane, and consensual. If you want something more traditional, that’s fine too. There’s no right answer, as long as everybody involved actually wants to be there.

I call bullshit.

It does sound nice, in theory. Everybody gets what they want, nobody gets hurt (unless that’s what they want!), and the ethical problems vanish in a puff of libertarian smoke. Unfortunately, as is often the case, the real world isn’t quite so tidy. The complexities of interpersonal power dynamics don’t just disappear because you waved a magic wand labelled “consent”; the act of consent is itself intimately tied up in the ways in which we use power with and on one another. To make matters worse, nobody really believes in this focus on consent in the first place. Lots of people say they believe, and that may actually be true of certain academics, but most people don’t behave as if consent were all that matters. Traditional romantic relationships are fundamentally incompatible with feminist ethics, and people treat them as such even when they’re not willing to admit it.

Welcome to Consentinople

Imagine a feminist city, fantastic as it may sound, in which everyone was honestly and completely committed to the idea of human rights, equal treatment, and freedom within the bounds of consent. I call this city Consentinople. In Consentinople, everyone lives life as a perfect feminist every day, able to follow their own sexual and other preferences. Even those whose sexual preferences include non-consent find willing partners with whom to act out their fantasies in a safe setting.

What would Consentinople actually look like? Since we’ve already seen that a surprising number of people prefer traditional, gendered romantic relationships, it shouldn’t come as a shock that Consentinople has lots of them. Many women work for a while, then step back from the workforce at least temporarily to raise their children. Some women don’t, of course, and they’re free not to, but many do (we’re also imagining that this is a realistic decision; Consentinople has a thriving economy in which one income can support a family).

Consentinople sounds great, a utopia where everyone has the freedom to express and live who they are without punishment, no matter their sexuality. Unfortunately, the city also resembles a modern real-world patriarchy. Its city councillors are mostly men, as were seven of its last ten mayors. Men own the majority of its businesses, and handle a disproportionate percentage of the money in its economy. Consentinople even has a wage gap: the average working woman earns 90 cents to her male colleague’s dollar when controlling for industry and education.

We shouldn’t be surprised by this. Consentinople has given people the freedom to express their preferences and women have, on the whole, expressed a greater preference than men for childcare. This means that on average they will exit the workforce sooner than men. If they don’t exit the workforce, they may still spend more time with their children compared to a male colleague, who probably spends that time working. Neither preference is wrong, and no-one in Consentinople places any moral judgement on people for making these decisions. It is simply a fact that, in aggregate, two populations expressing different preferences will end up with different outcomes.

The good news is that in theory, if you control for all of these extra variables (children, length of time worked, etc.) then the wage gap in Consentinople disappears. A woman with no kids who has worked her entire life will do just as well, earn just as much, and get the same promotions as a similar man. The average outcome may be different, but the average opportunity is the same.
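
If the difference between equal outcomes and equal opportunity seems slippery, here is a minimal sketch of the idea in Python. Every number in it (the preference rates, the career lengths, the wage) is invented purely for illustration; the one structural assumption is that every worker in Consentinople earns exactly the same wage for every year actually worked.

```python
import random

# A minimal model of Consentinople's labour market. All numbers here are
# invented for illustration; the one structural assumption is that every
# worker earns exactly the same wage for every year actually worked.

def average_earnings(childcare_rate, n=100_000, career_years=30, wage_per_year=1.0):
    """Average lifetime earnings for a population in which `childcare_rate`
    of its members step back from the workforce for some years."""
    total = 0.0
    for _ in range(n):
        years = career_years
        if random.random() < childcare_rate:
            years -= random.randint(5, 15)  # hypothetical time out of the workforce
        total += years * wage_per_year
    return total / n

random.seed(0)
women = average_earnings(childcare_rate=0.6)  # hypothetical aggregate preference
men = average_earnings(childcare_rate=0.2)

print(f"average lifetime earnings, women: {women:.1f}")
print(f"average lifetime earnings, men:   {men:.1f}")
# The averages differ, but earnings per year worked are identical for everyone:
# an outcome gap with no opportunity gap.
```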

The Stereotype and the Individual

It is not sexist to believe that women are, on average, shorter than men. Neither is it sexist (or “heightist”, I suppose) to bar people under a certain height from riding carnival rides. However, it would be sexist to bar all women from riding carnival rides. This is obvious, because height is an easily observable value: for basically no cost we can get much more accurate information about a person’s height than we would be able to infer from their gender alone.

Things are a bit more ambiguous when the value we care about is not as easy to observe, and we have a beautiful natural experiment in this regard. I’ll let Scott Alexander explain:

It starts like this – a while ago, criminal justice reformers realized that mass incarceration was hurting minorities’ ability to get jobs. 4% of white men will spend time in prison, compared to more like 16% of Hispanic men and 28% of black men. Many employers demanded to know whether a potential applicant had a criminal history, then refused to consider them if they did. So (thought the reformers) it should be possible to help minorities have equal opportunities by banning employers from asking about past criminal history.

The actual effect was the opposite – the ban “decreased probability of being employed by 5.1% for young, low-skilled black men, and 2.9% for young, low-skilled Hispanic men.”

Because the relevant value (criminal history) became harder to observe, employers were forced to fall back on the information they did have: race. As an imperfect proxy, this invariably led to some mistakes: black men being denied jobs for no good reason. However, from the employer’s perspective it was the best they could do to filter criminals out of their job pool. And lest we simply decide to ban any check like this with false positives, we should remember that the value these employers actually care about is future criminal behaviour, and even past criminal behaviour is not a perfect predictor of that.
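
To see how hiding the signal backfires, here is a toy model of the mechanism. The incarceration rates are the ones quoted above; the equal group sizes and the 10% risk threshold are invented assumptions for the sake of the sketch, not claims about real employers.

```python
import random

# A toy model of the "ban the box" effect. Incarceration rates are the ones
# quoted above; the equal group sizes and the 10% hiring threshold are
# invented assumptions, not claims about real employers.

RATES = {"white": 0.04, "hispanic": 0.16, "black": 0.28}

def make_applicants(n=100_000):
    pool = []
    for _ in range(n):
        group = random.choice(list(RATES))
        has_record = random.random() < RATES[group]
        pool.append((group, has_record))
    return pool

def hire_rates(pool, record_visible, threshold=0.10):
    # Employers reject anyone they judge more than `threshold` likely to have
    # a record. When the record is visible they just read it; when it's hidden,
    # the best estimate they have left is the group base rate.
    counts = {g: [0, 0] for g in RATES}  # [hired, total]
    for group, has_record in pool:
        risk = (1.0 if has_record else 0.0) if record_visible else RATES[group]
        if risk <= threshold:
            counts[group][0] += 1
        counts[group][1] += 1
    return {g: round(hired / total, 2) for g, (hired, total) in counts.items()}

random.seed(0)
pool = make_applicants()
print("record visible:", hire_rates(pool, record_visible=True))
print("record hidden: ", hire_rates(pool, record_visible=False))
# With the record hidden, every black and Hispanic applicant is screened out,
# record or not; hiding the signal hurts exactly the people it was meant to help.
```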

Sexism, racism, and all of the other -isms are built around the concept of stereotyping. We have a belief about a group, and we allow that belief to influence how we treat the individuals within the group. When the original belief is false then this is clearly a problem. When the belief is true, we must morally fall back on treating individuals as individuals and not members of the group: we look at each person’s height individually instead of banning all women from carnival rides. Letting the stereotype trump the individual is where overt, first-class racism, sexism, etc. all come from. Fortunately, most people don’t behave like that.

Stereotypes are just a form of categorization, a layer of abstraction we build on top of the world. They are not intrinsically evil, nor are they merely a useful mental tool. Categorization is how our brains make sense of the world with the limited power at our disposal. Calling that process immoral would be absurd. Yet it’s difficult to shake the feeling that those employers who rejected black applicants for fear of criminality must have been racist somehow.

Employment Opportunities in Consentinople

In Consentinople, we built a city where consent and freedom reigned. Outcomes by gender showed a difference which might have been concerning, but we decided that that was OK as long as opportunity by gender was equal. Unfortunately, the effects of stereotyping mean that opportunity is no longer equal there either. Employers are naturally concerned by women’s aggregate preference for child-rearing, and the related opportunity costs for the business around parental leave.

Now, lest you think the people of Consentinople are secretly sexist after all, they are quite aware of the risks of stereotyping and imperfect information. As such, the citizens of Consentinople agreed that employers will ask all potential employees (regardless of gender) about their future plans for children. This almost works; men and women who have no such plans are treated equally, and men who plan to take on child-rearing duties are penalized the same as similar women. However, it isn’t enough. Since women on aggregate express that preference more than men, it still ends up statistically hurting their employment opportunities.

A related issue in Consentinople is that people tend to weakly gender-segregate their social lives; men have a slight preference for hanging out with men, and women with women. In any given individual this is a perfectly legitimate preference that Consentinople respects, but in aggregate it gives success a kind of gendered momentum. The majority of hiring managers were already men in Consentinople even when opportunity was equalized, and when they look to fill a position they naturally look to their own social network first. Even though they give equal consideration to all candidates regardless of gender, the result is still a slight edge in the employment rate for working men.

The final outcome is that Consentinople isn’t really a whole lot better than our real world. Even when every individual follows their legitimate preferences and everyone has perfect information, we still end up with a society where women do not have the same employment opportunities as men. The silent majority of women, quietly expressing their individual preferences for child-rearing and traditional gender roles, still end up harming those whose preferences are different. By any feminist definition, this is ethically untenable.

Feminism in the Real World

The ethical problems with this vision of individual choice make it a questionable justification for any relationship. Perhaps fortunately, it hardly matters, because the position is so contested in the real world. Consider the recent blow-up around Emma Watson’s photo shoot, or the controversy over the new Wonder Woman movie. Go back a little further and you’ll find feminists complaining about Beyoncé, or basically anybody else you can think of. If people actually believed in individual freedom, in choice and consent, then these would be non-issues. The whole premise of that position was that if somebody wants to shave or not-shave their armpit hair, it doesn’t matter. They should be free to do so.

Instead, modern society shames people for being insufficiently feminist. The world immediately piles on when somebody does so much as express a preference about the meaning of a word. Word definitions are something for which there really is no right answer, and expressing a preference about one is completely unrelated to actually supporting the principles in question. Whereas a hundred years ago women were shamed for being too modern, now women are shamed for not being modern enough. We do not live in Consentinople.

Through an academic lens, feminism looks like a cultural norm against cultural norms: a global preference for individual preferences. In the real world, feminism looks like any other specific set of norms. Where before it was a positive norm to shave your pits, now it’s a negative. While historically there were norms against women managing money, now there are norms against women letting men take care of their finances. We can argue all we want about whether the new norms are better than the old, but that’s not the point. The point is that no matter what norms you choose, this looks nothing like the academic, consent-driven feminist doctrine that everybody preaches; in that world, there are no norms to begin with.

It isn’t really surprising, either. A “cultural norm against cultural norms” is at the very least confusing, and definitely leaves room to be interpreted as self-contradictory. It’s also just plain impractical. Everyone admits that cultural norms shift over time, but they do not simply disappear. People expressing preferences in aggregate are what build our cultural norms in the first place, and even Consentinople has that. Even if we wanted to remake the world in Consentinople’s image, human beings are not wired to live in a norm-free society.

Conclusion

As I implied in the title, modern romantics are in a hopeless bind. Our feminist ethics are fundamentally incompatible with our desire for a traditional relationship. The philosophical escape-hatch provided by freedom-of-choice academic feminism doesn’t actually resolve the ethical issues, and certainly doesn’t resolve the practical ones. We are stuck with two paths, neither of which is appealing.

In the first path, we decide that feminism as an ethical philosophy must naturally trump any simple personal preference. This leaves us with a further decision to make: should we simply declare celibacy, or try and make do with a relationship that is unfulfilling but at least potentially ethical? In the second path, we decide that our preferences are key, which again presents a follow-up choice. Do we ditch feminism as a philosophy, claiming it is impractical, or do we try and live with the shame and constant cognitive dissonance of being in a relationship we don’t really believe in?

At the end of the day, practicality prunes some of the choices for us. Abandoning feminism would be social suicide, however philosophically appealing it might be. Living with the cognitive dissonance is possible for some, but it takes a special mindset to be able to ignore that nagging feeling once you’re aware of it. This leaves us with celibacy and making do, and of the two, making do definitely feels less insane.

And still we wonder why people are so unhappy.


Our Need for Need

It is a trite, well-established truth that people like being useful. But there’s more to it than that, or rather, there’s also a stronger version of that claim. People do like being useful, but useful is a very broad term. Stocking shelves at a Walmart is useful, in that it’s a thing with a use, which needs to be done. And it’s true that some people may in fact actively like a job stocking shelves at a Walmart. But on the whole, it’s not something most people would consider particularly enjoyable, and it’s certainly not something that is considered fulfilling.

Let us then upgrade the word “useful” to the word “needed”: people like to be needed. While stocking shelves at a Walmart is useful, the person doing it is fundamentally replaceable. There are millions of others around the world perfectly capable of doing the same job, and there are probably thousands of them just within the immediate town or city. If our fictional stocker were to suddenly vanish one day, management would have no trouble hiring somebody else to fill their shoes. The world would go on. Walmart would survive.

Now this is all well and good, but I would argue that there is an even stronger version of this claim: people don’t just like to be needed, people actively need to be needed. Over a decade ago, Paul Graham wrote an essay called Why Nerds are Unpopular; it’s a long essay with a number of different points, but there is one thread running through it that in my opinion has gotten far too little attention: “[Teenagers’] craziness is the craziness of the idle everywhere”.

The important thing to note about this (and Graham does so, in a roundabout sort of way) is that teenagers in a modern high school are not exactly idle. They have class, and homework, and soccer practice or band practice or chess club; they play games and listen to music and do all the sort of things that teenagers do. They just don’t have a purpose. They are literally unneeded, shut away in a brick building memorizing facts they’ll probably never use, mostly to get them out of the way of the adults doing real work.

This obviously sucks, and Graham stops there, making the assumption that the adult world, at least, has enough purpose to go around. Teenagers, and in particular nerds, just have to wait until they’re allowed into the real world and voila, life will sort itself out. And it’s true that for some, this is the case. A scientist doing ground-breaking research doesn’t need to worry about their purpose; they know that the work they are doing is needed, and has the potential to change lives. Unfortunately, a Walmart stocker does not.

To anyone who has been following the broad path of the news over the last decade, this probably doesn’t come as a surprise. It seems like every other day we are confronted by another article suggesting that people are becoming less happy and more depressed, and that modern technology is to blame. Occasionally it is also noted that this is weird. We live in a world of wealth and plenty. The poorest among us are healthier, better-fed, and more secure than the richest of kings only a few centuries past. What is causing this malaise?

The simple answer is that we are making ourselves obsolete. People need to be needed, sure, but nobody wants to need. Independence is the American dream, chased and prized through the modern Western world. Needing someone else is seen as weakness, as vulnerability, and so we strive to be self-sufficient, to protect ourselves from the possibility of being hurt. But in doing so, we hurt others. We take from them our need, and leave them more alone than ever before.

Of course, Western independence as a philosophy has been growing for near on three centuries now, and modern unhappiness is a much more recent phenomenon. There are two reasons for this, one obvious and the other a bit more subtle. To start with, our modern wealth does count for something. A small amount of social decohesion can trade off against an entire industrial revolution’s worth of progress and security with no alarm bells going off. But there is a deeper trick at play, and that is specialization.

In traditional hunter-gatherer bands, generally everybody was needed. The tribe could usually survive the loss of a few members of course – it had to – but not easily. Every member had a job, a purpose, a needed skill. That there were only a handful of needed skills really didn’t matter; there just weren’t that many people in any given tribe.

As civilization flourished, the number of people in a given community grew exponentially. Tribes of hundreds were replaced by cities of thousands, and for a time this was OK. Certainly, there was no room in a city of thousands for half the adult men to be hunters; it was both ecologically and sociologically unsustainable. But in a city of that size there was suddenly room for tailors and coopers and cobblers and masons and a million other specialized jobs that let humanity preserve this sense of being needed. If it was fine to be one of the handful of hunters providing food for your tribe, it was just as fine to be one of the handful of cobblers providing shoes for your town.

To a certain extent, specialization continued to scale right through the mid-twentieth century, just not as well. In addition to coopers and masons we also (or instead) got engineers and architects, chemists and botanists, marketers and economists. But somewhere in the late twentieth century, that process peaked. Specialization still adds the occasional occupation (e.g. software developer), but much more frequently modern technology takes them away instead. Automation lets one person do the work of thousands.

Even worse than this trend is the growth of the so-called “global village”. I, personally, am a software developer in a city of roughly one million people. Software development is highly specialized, and arguably the most modern profession in the world. At the end of the day however, I too am replaceable. Even if I were only one of the handful of developers in my city (I’m not), modern technology – both airplanes and the internet – has broadened the potential search pool for my replacement to nearly the entire world. My position is fundamentally no different from that of the Walmart stocker – I would not be missed.

At the end of the day, humanity is coming to the cross-roads of our need for need. Obsessed with individuality, we refuse to depend on anyone. Women’s liberation is slowly freeing nearly half of the world’s population from economic dependence. Technological progress, automation, and global travel are all nibbling away at the number of specialized occupations, and at the replacement cost of the ones that remain. The future is one where we all live like the teenagers in Paul Graham’s essay: neurotic lapdogs, striving to find meaning where fundamentally none exists. Teenagers, at least, just have to grow up so they can find meaning in the real world.

How is humanity going to grow up?

Other Opinions #48 – Intelligence Equals Isolation

http://tvtropes.org/pmwiki/pmwiki.php/Main/IntelligenceEqualsIsolation

Disclaimer: I don’t necessarily agree with or endorse everything that I link to. I link to things that are interesting and/or thought-provoking. Caveat lector.

If TV Tropes has a trope for literally everything, is anything really a trope anymore?

Pessimism and Emotional Hedging

In Greek mythology, Cassandra was given a dual gift and curse: that she would accurately predict the future, but that nobody would believe her prophecies. She became a tragic figure when her prophecies of disaster went unheeded. In modern usage, a Cassandra is usually just a pessimist: somebody who predicts doom and gloom, whether people pay attention to them or not.

We know that people are generally rubbish at accurately predicting risk; they seem to constantly over-estimate just how often things will work out. This is usually due to either the planning fallacy or optimism bias (or both; they’re very closely related). However, while that is by far the most common mistake, and certainly the one that’s gotten all the attention, the opposite is also possible. Yesterday I caught myself doing just that.

I was considering an upcoming sports game and found myself instinctively betting against the team I typically cheer for (that is, I predicted they would lose the game). However, when I took a step back, I couldn’t immediately justify that prediction. The obvious prior probability was around 50/50 – both teams had been playing well, neither with a strong advantage – and I am certainly not knowledgeable enough about that sport or about sports psychology in general to confidently move the needle far from that mark.

And yet, my brain was telling me that my team had only maybe a 25% chance of winning. After much contemplation, I realized that by lowering my prediction, I was actually hedging against my own emotions. By predicting a loss, I was guaranteed an emotional payout in either scenario: if my team won, then that was a happy occasion in itself, but if they lost then I could claim to have made an accurate prediction; it feels nice to be right.
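
To make the hedge concrete, here is a quick back-of-the-envelope sketch of the two predictions I could have made. The payoff numbers are entirely invented; the only structural assumption is that being right about the outcome is worth something on its own.

```python
# A back-of-the-envelope sketch of the emotional hedge. Payoff numbers are
# entirely invented; the only structural assumption is that being right about
# the outcome is worth something on its own.

P_WIN = 0.5          # the honest prior: a 50/50 game

JOY_OF_WIN = 10      # my team wins (arbitrary emotional units)
PAIN_OF_LOSS = -10   # my team loses
BEING_RIGHT = 6      # bonus for having called the result correctly

def payoffs(predict_win):
    if_win = JOY_OF_WIN + (BEING_RIGHT if predict_win else 0)
    if_loss = PAIN_OF_LOSS + (0 if predict_win else BEING_RIGHT)
    return if_win, if_loss

for predict_win in (True, False):
    if_win, if_loss = payoffs(predict_win)
    expected = P_WIN * if_win + (1 - P_WIN) * if_loss
    label = "predict a win " if predict_win else "predict a loss"
    print(f"{label}: expected {expected:+.1f}, worst case {min(if_win, if_loss):+d}")

# With symmetric payoffs the expected value is identical either way, but
# predicting a loss raises the worst case from -10 to -4. That is a hedge, not
# a better bet; add any loss aversion and the pessimistic call wins outright.
```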

With this new source of bias properly articulated I was able to pick out a few other past instances of it in my life. It’s obviously not applicable in every scenario, but in cases where you’re emotionally attached to a particular outcome (sports, politics, etc.) it can definitely play a role, at least for me. I don’t know if it’s enough to cancel out the natural optimism bias in these scenarios, but it certainly helps.

The naming of biases is kind of confusing: I suppose it could just be lumped in with the existing pessimism bias, but I kind of like the idea of calling it the Cassandra bias.

Wrapping up on God – Final Notes and Errata on “An Atheist’s Flowchart”

Over the last six philosophy posts (my “Atheist’s Flowchart” series) I’ve wandered through a pretty thorough exploration of the arguments underlying my personal atheism. Now that they’ve had some time to settle, I’ve gone back and re-read them and noticed all sorts of random stuff that was confusing or I just forgot or whatever. This post is going to be a scattershot collection of random notes, clarifications, and errata for that series.

Here we go:

In The Many Faces of God, I wrote “[from] the whole pantheons found in many versions of Hinduism, to the more pantheistic view favoured by Spinoza and Einstein”, which in hindsight is kind of confusing. I blame the English language. A pantheon (apart from the specific temple in Rome) is a collection of many distinct gods. A pantheistic view, confusingly, does not involve a pantheon but is in fact (quoting Wikipedia): “the belief that all reality is identical with divinity, or that everything composes an all-encompassing, immanent god”. Beliefs that actually involve a pantheon are called polytheistic instead.


The first piece of my argument, in two parts, ended up being long and fairly convoluted and still didn’t do a great job of explaining the core idea. One of the things that I failed to explain was this key phrase from the Less Wrong page on Occam’s Razor: “just because we all know what it means doesn’t mean the concept is simple”. I gestured confusingly in the direction of the claim that “god is a super-complicated concept” but I suspect that, unless you’re already well-versed in formal information theory, I wasn’t very convincing. Allow me to gesture some more.

Science explains nearly everything we can observe in a beautiful system of interlocking formulas that, while scary and complex to a layman, are still simple enough to be run on a computer. God cannot be run on any computer I know of. Many gods are, by definition, ineffable – complex beyond any possible human understanding. Even those that are hypothetically effable [is this a word?] are not currently effed [this one definitely isn’t] in nearly the same way we understand gravity, or chemical reactions, or the human heart.
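
For those who want the formal version of that gesture, the standard information-theoretic reading of Occam’s Razor (via Solomonoff induction and minimum description length) assigns a hypothesis a prior probability that shrinks exponentially with its complexity:

```latex
% A rough statement of the Solomonoff / minimum-description-length prior.
% K(H) is the Kolmogorov complexity of hypothesis H: the length, in bits,
% of the shortest program that simulates a world in which H holds.
P(H) \propto 2^{-K(H)}
```

The interlocking formulas of physics have a comparatively small K; a god who is by definition complex beyond any possible human understanding has an enormous (arguably undefined) K, and a correspondingly negligible prior probability.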


In the third part of my argument, I mentioned briefly without explanation that none of the common logical arguments for god derive from my core axioms. It would have been helpful if I’d given some examples. I did not, because I am lazy. I am still lazy, and after poking around for a while cannot find a good example of something that I can work through in a reasonable amount of space.

If anybody wants to construct a formal logical argument from my nine axioms to the existence of god, please send it to me and I promise I will give it an entire post all to itself.


At the end of my fourth part, I linked to a t-shirt design which has already been removed from the internet. It was a snippet of this comic from Dresden Codak, specifically the panel in the third row with the text “I will do science to it”. It’s not really related to anything, but Dresden Codak is well worth reading.


In my fifth part I actually made a mistake, presenting a weaker version of the argument I was aiming for. The better version, in brief:

  1. Science explains why people believe in god.
  2. You believe in science, even if you think you don’t.
  3. If god’s existence was the reason that people believed in god, that would contradict #1.

Therefore either god doesn’t exist at all, or the fact that millions of people believe is a coincidence of mind-boggling proportions which defies Occam’s Razor.

Other Opinions #46 – There’s No Real Difference Between Online Espionage and Online Attack

This one is a couple of years old but still relevant, especially with the recent ransomware attacks. We’re used to thinking in terms of human actors, where an informant is a very different kind of asset from an undercover operative. The former is a passive conduit of information while the latter is an active force for change. In technological conflict there is no such difference. Both activities require the ability to execute code on the remote machine, and once that is achieved it can be used for any end, passive or active.

https://www.theatlantic.com/technology/archive/2014/03/theres-no-real-difference-between-online-espionage-and-online-attack/284233/

And of course any vulnerability, once discovered, can be used by whichever criminal claims it first.

Disclaimer: I don’t necessarily agree with or endorse everything that I link to. I link to things that are interesting and/or thought-provoking. Caveat lector.