Last week, American president Donald Trump announced two major tariffs on Canadian aluminum and steel. This was a watershed moment in the relationship between the two countries, which have long shared a highly cooperative diplomacy and a tightly integrated economy. We are now in the midst of a trade war, and that requires a major shift in perspective. In the upper echelons of business, much is made of the difference between a peacetime CEO and a wartime CEO. The analogy is drawn from politics, of course, and now we have an opportunity to see the original essence of that analogy in practice, as Canada shifts diplomatically from peace to war. The question of the moment, then, is whether Justin Trudeau is ready for it.
Trudeau’s first two years as Prime Minister have been characterized by positivity, just like his campaign. He continues to talk about opportunity, growth, and inclusion every chance he gets. He is, in other words, the very picture of a peacetime leader. But peacetime leaders tend to get crushed in times of war; just ask Neville Chamberlain, the British Prime Minister in the lead-up to World War Two. Like Trudeau, he talked a good early game, pushing back against Germany when possible but also accommodating it in the name of a broader peace. Like Trudeau, Chamberlain combined calculated displays of strength and resolve with a general flavour of good will. His policies were widely popular among the electorate. And like Trudeau, Chamberlain was not ready for war.
Barely nine months into the Second World War, after a string of disasters culminating in a wholesale retreat from Norway, Chamberlain resigned. His replacement was none other than Sir Winston Churchill, a wartime leader if there ever was one. Churchill was everything that Chamberlain was not: one of the world’s greatest orators, direct, focused, and completely unwilling to back down. Where Chamberlain was diplomatic, refined, and heavily invested in keeping the peace, Churchill was a leader with only one goal: to win the war. He stuck to his guns even when his choices were massively unpopular, as many of them were at the time. He was appointed Prime Minister on Chamberlain’s resignation, not elected, and lost the post at the very next public election.
So what does this mean for Canada today? I suppose it is possible that Trudeau will be able to pivot, transitioning from a peacetime role to a wartime one. If he can pull that off then he will likely go down in history as one of Canada’s greatest leaders. However, it seems unlikely. The required shift in perspective would be very much out of character, and his initial response to the tariffs has been… tepid. Retaliatory tariffs, yes, but dollar-for-dollar; literally a call, not a raise, and one that (per Coyne) will harm Canada far more than it has any persuasive power over the United States.
If Trudeau cannot pivot, then we are in for a rough couple of years. Barring a true catastrophe, Trudeau is unlikely to resign before next year’s election, but there is no-one currently on the ballot with the necessary capabilities. The NDP have always been a peacetime party, and Jagmeet Singh is no exception. Andrew Scheer was elected leader of the Conservatives as a direct response to Trudeau, in an effort to win back some of the voters turned off by Harper’s determination and negativity. Ironically for them (and for me, as somebody who voted against him in the 2015 election), Stephen Harper is now the Prime Minister we need.
Up until this moment, I have been generally happy with Trudeau as Prime Minister; he was a welcome breath of fresh air after so many years of Harper, and is generally closer to my positions on policy. Today, I wish we’d given Mr. Harper one more term.
Do you know that feeling, when some person or article says something with which you’ve agreed for years but hadn’t ever been able to properly articulate?
It’s one of the delusions of our meritocratic class, however, to assume that if our actions are individually blameless, then the sum of our actions will be good for society.
The Atlantic think-piece that this is from (The 9.9 Percent is the New American Aristocracy) is of course excessively long, but worth reading if you’re interested in that kind of thing, especially if you enjoyed my essay on Brexit, Trump, and Capital in the Twenty-First Century. While the Atlantic piece takes a rather different tack to get there, it ends up at roughly the same set of possible recommendations, and I think the underlying thesis is the same: there are a lot of not-directly-monetary ways in which the future prospects of the working class have suffered over the last few decades.
Money may be the measure of wealth, but it is far from the only form of it. Family, friends, social networks, personal health, culture, education, and even location are all ways of being rich, too. These nonfinancial forms of wealth, as it turns out, aren’t simply perks of membership in our aristocracy. They define us.
This was basically one of the theses of my essay, so maybe the Atlantic article doesn’t tack that far from it after all.
Originally, I focused on the value of unskilled labour as the primary change in recent generations. I still think this is generally true, but the Atlantic piece focuses elsewhere: on the walls that the upper classes are building around the other forms of capital. Education is a form of capital, but with the increasing class-segregation of top university admission processes it becomes less accessible to those at the bottom of the heap. Location is also a form of capital, but as the cost of living skyrockets in prime locations (most notably Silicon Valley) that too becomes inaccessible. It seems inconceivable today that a poor person from the slums of Detroit could fight their way into a good university, then afford to move to San Francisco in order to be able to work a decent job. But if we really want social mobility to be a feature of our economy then that’s the story we have to enable.
Finding A Solution
In 2016, I mentioned a basic income as one possible partial solution, with the caveat that it didn’t seem politically or economically feasible. Things have changed a lot in the last two years. From Finland, to California, to Ontario, a number of organizations and governments have started pilot projects of the idea. Perhaps more interestingly, in the current Ontario election, all three major parties have pledged to continue the experiment, effectively guaranteeing a path forward regardless of the winner.
In addition to the political will that has sprung up recently, the economics suddenly seem more favourable as well. The cost of expanding Ontario’s experiment to the entirety of Canada has been pegged at only $43 billion, which is eminently affordable in the context of the many hundreds of billions of dollars that Canada already spends on various programs. And proponents are quick to point out that that number doesn’t even include the expected savings in health care, incarceration, and other government services which typically result from lifting people out of poverty.
The purpose of this essay wasn’t originally to sell people on the idea of a basic income, so I’ll leave it at that, but it does seem to me like an extremely promising approach. I’m looking forward to the results of some of the pilot projects, and in the meantime I’m going to do a bit more research and try to raise the profile of this idea.
Social media has changed the way we relate to each other, and to ourselves. This is not a particularly controversial claim in and of itself; the controversy comes when you attach a value proposition to this change. Even then, “controversy” is perhaps the wrong word. There are a few luddites screaming into the void that social media is ruining kids these days, and by golly in my day we walked thirty miles to school in the snow and we liked it. And then there’s everybody else, who just doesn’t care.
Granted, this isn’t exactly a fair telling. The effect of social media on our relationships, our emotions, and our selves is a hot topic in many social science departments, and has certainly spawned enough TED talks. But there is still a large gap between “we studied this” and “we think this is bad”. Ironically, the talks which are most axiological are the ones most likely to go viral, on the very platforms which they decry.
I would like to reassure you that I’m a young, hip thing and not a luddite screaming into the void, but it isn’t true. Luddite might be a bit strong, but fundamentally this post is about my belief that social media (and reality television, and YouTube, and…) are ruining kids these days. How, you ask? By turning us all into actors.
The combination of modern societal/infrastructural wealth and our culture’s obsession with individuality has led to an explosion in the number of people living in the extended adolescence of Paul Graham’s neurotic lapdogs. As in high school, the net result of this is the pursuit of social status for its own sake and beyond any reasonable limits. This would be bad enough on its own, but modern social media amplifies the effect by providing a perfect, shallow, dopamine-inducing medium (is it weird to call “social media” a medium? It feels like I’m violating a plural matching rule somehow) for this pursuit.
What this means is that many people born in the 1990s (and particularly those born in the 2000s) don’t know how to actually feel emotions. I grant this is an unusual claim; it certainly isn’t among the common set of arguments raised against social media. Even so, I believe it is true. Social media and the pursuit of irrelevant status has resulted in a generation and a half of people for whom emotions are performative, instead of felt.
In this world, you can chase happiness, or you can chase the appearance of happiness, and given the distorting lens of Instagram only the latter is relevant to the status games we play. Perhaps your sadness is real, but it’s not valuable unless you can ironically caption it with a pithy quote about self-care. Actual felt emotions don’t matter anymore, because it’s become a truth universally acknowledged that “everybody’s a mess on the inside anyway”. This “truth” has somehow simultaneously described the problem and normalized it away, but I believe it’s still a problem. A world in which people are fundamentally unhappy is a bad one, no matter what other nice properties it might have.
The interesting thing to note is that in this world, nerds are at an advantage. I don’t mean the popular, “everybody’s a nerd” modern pop culture version which has lost any useful semantic content; I mean the original version, the people who had just completely given up on social status altogether in pursuit of other interests. Only by detaching yourself from the status games can you start to worry about how you actually feel, instead of how you appear to feel in 10-second snaps. This is something I’ve lost sight of in the last six months, and something I’m trying to recover.
And oddly, on some definitions, going back to my stereotypical nerd roots is going to make me more cool, not less. Either way, I hope it makes me a better person.
John Scalzi recently wrote an interesting explanation of what he sees as the difference between being “cool” and not, which I think is worth a read. While it’s possible to pick nits or disagree with his definitions, I’m not here to do that; language is descriptive, not prescriptive. Instead, I wanted to talk about how not cool I am (on his definitions) and what that means for me.
Scalzi defines “cool”ness as being effectively self-contained in your personhood, which he distinguishes from confidence by talking about how much we care about what other people think of us. I believe this effectively mirrors my discussion of confidence vs security which I included as part of my synthesis from Principle-Centered Leadership. On this definition of coolness, I (like Scalzi himself) am very, deeply, uncool. I care very much what other people think of me, and I lack much of that self-anchored security necessary for personal stability. I am, fundamentally, afraid of the world where somebody doesn’t like me and so I do my best to be likeable, to be malleable and pleasant and nice.
This is not, I think, a particularly uncommon way to be; most people like to be liked, because most people need to be needed. Certainly it is a matter of degree, and not black and white, and there must be at least a little of this in all of us. What this means for me though (and presumably for other people in whom this trait is particularly strong) is that I become very passive, indirect, and indecisive. If you ask me where I’d like to go for dinner, I will likely waffle, hedge, hem and haw. Internally, it feels like I legitimately have no preference; I’ll eat whatever. There is a certain extent to which this is true, as I am not a picky eater, but there is also a certain extent to which this indecision is a reflection of my passive uncoolness.
As a matter of uncoolness, my indecisiveness is an active likeableness-generating mechanism. If I don’t have a strong preference, then whoever is asking the question will be more likely to have their preferences met. When I then agree and happily go along with their request, that builds rapport between us and elevates their mood. If instead I were to express my own preference, that risks generating conflict, which automatically feels threatening.
Aspects of Conflict
It is easy to read into this that I don’t feel comfortable dealing with conflict, and there is a certain degree to which this is true, but it is hardly a blanket rule. In purely technical conversations I am happy and comfortable being firm in my convictions and arguing a point, both because the process is heavily factual/empirical, and because the environment for these debates is usually one of mutual collaborative respect. We may disagree, but we both acknowledge that we’re working toward the same goal and simply need to figure out exactly what trade-offs allow us to get there most effectively.
Even on deeper, more personal topics like religion I will prefer conflict to completely reversing my values. If you push me, I will admit to being an atheist and explain my rationale behind that even to a strongly religious audience; I won’t flat-out lie. But I am much more likely to avoid the topic, or lie by omission. Since I grew up as a fairly religiously aware child, it’s easy for me to… “drop hints” which socially signal a certain affiliation, without ever actually saying anything untrue. As a child who also grew up reading a great deal of fantasy literature, you can bet that I learned a lot of tricks from Robert Jordan’s Aes Sedai.
Practically, this has a couple of effects. I get along with almost everybody, but I make a poor leader except in purely technical situations with clear answers. I can be indecisive and prone to waffling. Fundamentally, I am insecure and lack self-trust. I am afraid of being in a state of deep interpersonal conflict.
Fear of Conflict
This fear of conflict does not feel unnatural to me, though clearly it is not universally shared. Conflict is surely unpleasant for most people, but active fear of it is a different matter. I am certainly capable of justifying this fear in practical terms, but I am not convinced that those justifications end up being legitimate. Conflict is fundamentally inevitable in some situations, and it is not intrinsically immoral or problematic. The key is how we resolve conflict, which I suppose explains my comfort with technical conflict: I have a clear script for resolving it.
As I have already mentioned though, I tend to avoid more interpersonal conflict, resulting in a situation where I’m not as good at resolving it simply because it’s a skill I don’t practice. The caveat to this is that I believe I am an excellent mediator, since my general likeability and rationalist outlook make me very good at acting as a trusted, neutral third party. Unfortunately for me those skills do not naturally translate to the case where I have chips in the game.
I do not believe it is fundamentally a bad thing to want to avoid conflict, but I do believe it is a bad thing to not be able to resolve conflict when it arises, and I definitely wish that I was more willing to stand up for myself and accept the resulting conflict in some situations. It is a skill I’m looking to build, but I plan to start small.
One of the most common and confusing exchanges I hear when discussing issues of gender and feminism goes something like this:
A: Women and men have different aggregate preferences around topic X.
B: Those preferences are socially conditioned. Women grow up in an environment where they are constantly exposed to strict feminine gender roles and the associated preferences.
You’ll note that the above isn’t really an argument but more of an argument-fragment, as it depends pretty heavily on the context of what point A is trying to defend. I note this in particular because the two claims are not mutually exclusive. It is possible and reasonable to believe that both are true, for many values of X.
The problems happen because the context of the argument usually indicates some subtextual moral claims going on which can get rather confused. For example, my previous post on The Dilemma of the Modern Romantic got several responses from people who objected along these lines to my claim that many people prefer traditional, gendered relationships. In order to respond properly, I’m going to unpack some of this subtext.
The underlying ethical argument is usually around the question of whether or not it’s right for preferences to be the result of social conditioning. Can a preference be considered legitimate, if the only reason it’s present is because societal context has hammered it home at every opportunity? This is the point I think B is raising; if the preferences are somehow illegitimate, then the fact they exist at all (which is usually harder to dispute) is less relevant.
It’s a fairly intuitive and defensible argument. The wording calls to mind Pavlovian experiments and the destruction of our free will, both with negative associations. It’s also well-supported in the way people talk about other issues: Stockholm Syndrome is a prime example of a case where people’s real, expressed preferences are widely considered illegitimate due to the nature of the situation. Nobody wants to be brainwashed.
This discussion lets us rephrase B’s objection in significantly stronger, clearer terms, hopefully without changing the nature of the argument:
A: Women and men have different aggregate preferences around topic X.
B: Women are brainwashed by cultural gender roles which include those preferences. Therefore those preferences are illegitimate. Also, we should probably stop brainwashing women.
I consider this re-phrasing helpful, because it offers up a bullet I intend to bite: I agree that women (and everyone else too) are brainwashed by cultural gender roles. What I disagree with are the subsequent claims that those preferences are therefore illegitimate, and that we should necessarily stop.
One way to see this is to substitute any other culturally-shared value in place of “gender roles”. Women are brainwashed by our shared values against murder, therefore that preference is illegitimate? Doesn’t quite have the same ring to it. Perhaps more tellingly, consider how we raise our children.
Children learn from their parents, both explicitly by using language to convey facts and opinions, and implicitly via observation and mimicry. This process of growth and learning indelibly shapes who the child will become and what preferences they will have. It is a natural part of childhood that your parents and your society impart their values and preferences to you, teaching you right from wrong and good from bad. Without this process, you don’t end up with some sort of tabula rasa child capable of pure and perfect free expression of their own preferences. You end up with the kids from Lord of the Flies.
There has been a recent trend of parents eschewing traditional gender roles for their kids. I have nothing against this practice. What I disagree with are the claims that it is imperative to give children “the freedom to develop their full potential without preconceptions about what girls or boys should do or how they should behave”. Children need to be taught, and whether you impart the values of traditional gender roles or the values of social-constructionist feminism, you are still teaching. You can argue that we should be teaching different preferences and values, but it is absurd to claim that we shouldn’t be teaching any at all.
In the end, I suppose what I’m arguing against is the idea that gender roles and gendered preferences are prima facie to be avoided just because they permeate our culture. I am open to arguments that we should replace them with a different set of values, and I grant that this is the argument a number of academic feminists are actually making. But as is usual, something has gotten lost in translation between academia and the Daily Mail.
So smart/famous/rich people are publicly clashing over the dangers of Artificial Intelligence again. It’s happened before. Depending on who you talk to, there are several explanations of why that apocalypse might actually happen. However, as somebody with a degree in computer science and a bent for philosophy of mind, I’m not exactly worried.
Loosely speaking, there are two different kinds of AI: “weak” and “strong”. Weak AI is the kind we have right now; if you have a modern smartphone you’re carrying around a weak AI in your pocket. This technology uses AI strategies like reinforcement learning and artificial neural networks to solve simple singular problems: speech recognition, or face recognition, or translation, or (hopefully soon) self-driving cars. Perhaps the most recent publicized success of this kind of AI was when IBM’s Watson managed to win the Jeopardy! game show.
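To make the “narrow” nature of weak AI concrete, here’s a toy sketch of my own (not anyone’s production system): a single perceptron, the simplest possible artificial neural network, trained to recognize the logical AND function. It does exactly one mechanical thing, and it has no capacity to want anything at all, which is rather the point.

```python
# A toy "weak AI": a single perceptron that learns logical AND.
# It adjusts two weights and a bias until its predictions match
# the training data -- narrow pattern-matching, nothing more.

def train_perceptron(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - pred
            # Nudge the weights toward the correct answer.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
preds = [1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
         for (x1, x2), _ in data]
print(preds)  # [0, 0, 0, 1] -- it has learned AND
```

Watson and your phone’s face recognition are vastly larger versions of this same idea: parameters tuned against examples, with no goals beyond the one task.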
Strong AI, on the other hand, is still strictly hypothetical. It’s the realm of sci-fi, where a robot or computer program “wakes up” one day and is suddenly conscious, probably capable of passing the Turing test. While weak AIs are clearly still machines, a strong AI would basically be a person instead, just one that happens to be really good at math.
While the routes to apocalypse tend to vary quite a bit, there are really only two end states depending on whether you worry about weak AI or strong AI.
If you go in for strong AI, then your apocalypse of choice basically ends up looking like Skynet. In this scenario, a strong AI wakes up, decides that it’s better off without humanity for some reason (potentially related to its own self-preservation or desires), and proceeds to exterminate us. This one is pretty easy for most people to grasp.
If you worry more about weak AI then your apocalypse looks like paperclips, because that’s the canonical thought experiment for this case. In this scenario, a weak AI built for producing paperclips learns enough about the universe to understand that it can, in fact, produce more paperclips if it exterminates humanity and uses our corpses for raw materials.
In both scenarios, the threat is predicated upon the idea of an intelligence explosion. Because AIs (weak or strong) run on digital computers, they can do math really inconceivably fast, and thus can also work at speed in a host of other disciplines which boil down to math pretty easily: physics, electrical engineering, computer science, etc.
This means that once our apocalyptic AI gets started, it will be able to redesign its own software and hardware in order to better achieve its goals. Since this task is one to which it is ideally suited, it will quickly surpass anything a human could possibly design and achieve god-like intelligence. Game over humanity.
On Waking Up
Strong AI has been “just around the corner” since Alan Turing’s time, but in that time nobody has ever created anything remotely close to a real conscious computer. There are two potential explanations for this:
First, perhaps consciousness is magic. Whether you believe in a traditional soul or something weird like a quantum mind, maybe consciousness depends on something more than the arrangement of atoms. In this world, current AI research is barking up entirely the wrong tree and is never going to produce anything remotely like Skynet. The nature of consciousness is entirely unknown, so we can’t know if the idea of something like an intelligence explosion is even a coherent concept yet.
Alternatively, perhaps consciousness is complicated. If there’s no actual magic involved then the other reasonable alternative is that consciousness is incredibly complex. In this world, strong AI is almost certainly impossible with current hardware: even the best modern super-computer has only a tiny fraction of the power needed to run a simplistic rat brain let alone a human one. Even in a world where Moore’s law continues unabated, human-brain-level computer hardware is still centuries away, not decades. We may be only a decade away from having as many transistors as there are neurons, but they’re not comparable; neurons are massively more complex building blocks.
It’s also worth noting that if consciousness is hard because it’s complicated, our hypothetical strong AI would wake up very, very slowly. We’re not going to suddenly jump from current processors to AI-capable ones that are millions of times more powerful; there will be numerous incremental designs in between. As our processing power improves, our AI will get stronger and stronger, but there will be no eureka moment where Skynet both suddenly decides humanity has to go and is smart enough to make it happen. We’ll have plenty of opportunity to study primitive artificial consciousnesses, learn about them in an environment we control, and iteratively incorporate our findings into subsequent generations.
Regardless of why we’ve never gotten anywhere with strong AI, we haven’t, and there is no indication that this time it really is “just around the corner”.
While the idea of Skynet is common in popular culture, it’s the paperclip maximizer which tends to worry the people who actually seriously worry about this stuff. It’s already true that we don’t really understand how weak AI actually works: we understand the principles which drive something like Watson, but no human could describe the series of specific decisions it performs to arrive at a given answer.
It also seems a lot more plausible that weak AI (which we already have) will just keep getting smarter (which it has definitely done so far) than that we’ll have some weird magic breakthrough in strong AI, when nobody still even understands what that would look like. So, why couldn’t some future self-driving car learn just enough about the world to figure out that it would be able to drive much more efficiently if it could just get rid of all of those pesky pedestrians? As it turns out, I think something like this could actually happen; it’s just not the end of the world when it does.
As with the consciousness-is-complicated explanation for strong AI, our saving grace against paperclips and homicidal cars is the incremental nature of the technology. The first self-driving car smart enough to decide to murder pedestrians is not also going to be smart enough to redesign its own hardware, create biological weapons, and do all of this in secret without hinting to its owners that it “has plans”. No, the only thing it’s going to know how to do to achieve this goal is to try and intentionally run people over, which is obviously bad, but not exactly apocalyptic. It also has the benefit of being really freaking obvious.
Here’s a hint to future AI developers: maybe you should stop making it smarter when your cars start trying to run you over in the name of minimizing the number of pedestrians.
Much ink has been spilled on how what people actually seem to want out of their relationships doesn’t necessarily match what society tells them is right. For better or worse, many people are happiest in a traditional, heavily-gendered romantic relationship, despite the ongoing revolution in women’s rights. It’s true in the porn industry as well, even if you’re not convinced by normal scientific studies. There’s just something about traditional gender roles that men and women find attractive.
Modern feminism is an extraordinarily complicated thing, with a number of subtly different interpretations. For all that, it’s easy to see how the things it says about gender roles, power dynamics, and ethics could be at odds with this desire for a traditional relationship. To take just one example, is it possible for a relationship to be balanced or fair when one partner is wholly responsible for the finances? Money is a tangible form of power, so any relationship with unequal financial control is fundamentally one with unequal power. Is that ethical?
The typical response to this problem is to point out that feminism is really about freedom of choice and consent, and doesn’t actually prescribe a particular lifestyle or relationship format. You can do what you want, in this version, as long as it’s actually what you want. Do you enjoy bondage or dominance play? Go for it, as long as it’s safe, sane, and consensual. If you want something more traditional, that’s fine too. There’s no right answer, as long as everybody involved actually wants to be there.
I call bullshit.
It does sound nice, in theory. Everybody gets what they want, nobody gets hurt (unless that’s what they want!), and the ethical problems vanish in a puff of libertarian smoke. Unfortunately, as is often the case, the real world isn’t quite so tidy. The complexities of interpersonal power dynamics don’t just disappear because you waved a magic wand labelled “consent”; the act of consent is itself intimately tied up in the ways in which we use power with and on one another. To make matters worse, nobody really believes in this focus on consent in the first place. Lots of people say they believe, and that may actually be true of certain academics, but most people don’t behave as if consent were all that matters. Traditional romantic relationships are fundamentally incompatible with feminist ethics, and people treat them as such even when they’re not willing to admit it.
Welcome to Consentinople
Imagine a feminist city, fantastic as it may sound, in which everyone was honestly and completely committed to the idea of human rights, equal treatment, and freedom within the bounds of consent. I call this city Consentinople. In Consentinople, everyone lives life as a perfect feminist every day, able to follow their own sexual and other preferences. Even those whose sexual preferences include non-consent find willing partners with whom to act out their fantasies in a safe setting.
What would Consentinople actually look like? Since we’ve already seen that a surprising number of people prefer traditional, gendered romantic relationships, it shouldn’t come as a shock that Consentinople has lots of them. Many women work for a while, then step back from the workforce at least temporarily to raise their children. Some women don’t, of course, and they’re free not to, but many do (we’re also imagining that this is a realistic decision; Consentinople has a thriving economy in which one income can support a family).
Consentinople sounds great, a utopia where everyone has the freedom to express and live who they are without punishment, no matter their sexuality. Unfortunately, the city also resembles a modern real-world patriarchy. Its city councillors are mostly men, as were seven of its last ten mayors. Men own the majority of its businesses, and handle a disproportionate percentage of the money in its economy. Consentinople even has a wage gap: the average working woman earns 90 cents to her male colleague’s dollar when controlling for industry and education.
We shouldn’t be surprised by this. Consentinople has given people the freedom to express their preferences and women have, on the whole, expressed a greater preference than men for childcare. This means that on average they will exit the workforce sooner than men. If they don’t exit the workforce, they may still spend more time with their children compared to a male colleague, who probably spends that time working. Neither preference is wrong, and no-one in Consentinople places any moral judgement on people for making these decisions. It is simply a fact that, in aggregate, two populations expressing different preferences will end up with different outcomes.
The good news is that in theory, if you control for all of these extra variables (children, length of time worked, etc.) then the wage gap in Consentinople disappears. A woman with no kids who has worked her entire life will do just as well, earn just as much, and get the same promotions as a similar man. The average outcome may be different, but the average opportunity is the same.
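This arithmetic is easy to check with a toy simulation. The numbers below (wage formula, break length, break rates) are invented purely for illustration, not drawn from any real labour data; the point is only that two groups facing an identical wage function, but expressing different preferences in aggregate, show a raw gap that disappears once you control for years worked:

```python
import random

random.seed(0)

def simulate_group(break_rate, n=100_000):
    """Everyone faces the same wage function: pay depends only on years
    worked. The only difference between groups is how many members
    choose to take a five-year career break."""
    wages_all, wages_full_career = [], []
    for _ in range(n):
        years = 20 - (5 if random.random() < break_rate else 0)
        wage = 40_000 + 2_000 * years  # identical formula for both groups
        wages_all.append(wage)
        if years == 20:
            wages_full_career.append(wage)
    return sum(wages_all) / n, sum(wages_full_career) / len(wages_full_career)

# Hypothetical preference rates: 30% of one group takes the break, 5% of the other.
raw_a, controlled_a = simulate_group(break_rate=0.30)
raw_b, controlled_b = simulate_group(break_rate=0.05)

print(f"raw averages:     {raw_a:,.0f} vs {raw_b:,.0f}")            # a gap appears
print(f"full-career only: {controlled_a:,.0f} vs {controlled_b:,.0f}")  # gap vanishes
```

The raw averages differ by a couple of thousand dollars, yet anyone with a full career earns exactly the same regardless of group: different outcomes, identical opportunity.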
The Stereotype and the Individual
It is not sexist to believe that women are, on average, shorter than men. Neither is it sexist (or “heightist”, I suppose) to bar people under a certain height from riding carnival rides. However, it would be sexist to bar all women from riding carnival rides. This is obvious, because height is an easily observable value: for basically no cost we can get much more accurate information about a person’s height than we would be able to infer from their gender alone.
Things are a bit more ambiguous when the value we care about is not as easy to observe, and we have a beautiful natural experiment in this regard. I’ll let Scott Alexander explain:
It starts like this – a while ago, criminal justice reformers realized that mass incarceration was hurting minorities’ ability to get jobs. 4% of white men will spend time in prison, compared to more like 16% of Hispanic men and 28% of black men. Many employers demanded to know whether a potential applicant had a criminal history, then refused to consider them if they did. So (thought the reformers) it should be possible to help minorities have equal opportunities by banning employers from asking about past criminal history.
The actual effect was the opposite – the ban “decreased probability of being employed by 5.1% for young, low-skilled black men, and 2.9% for young, low-skilled Hispanic men.”
Because the relevant value (criminal history) became harder to observe, employers were forced to fall back on the information they did have: race. As an imperfect proxy, this invariably led to some mistakes: black men being denied jobs for no good reason. However, from the employer’s perspective it was the best they could do to filter criminals out of their job pool. And lest we simply decide to ban any such check that produces false positives, we should remember that the value these employers actually care about is future criminal behaviour, and even past criminal behaviour is not a perfect predictor of that.
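The mechanics of this fallback are worth making concrete. In the sketch below the base rates loosely echo the quoted statistics, but the decision threshold and the rest of the setup are invented for illustration: when the record itself is observable, applicants without one are treated identically; when it is hidden, the group base rate becomes the proxy, and every clean member of the higher-base-rate group is rejected along with the rest:

```python
import random

random.seed(1)

# Illustrative base rates of having a record, loosely echoing the quoted figures.
BASE_RATES = {"group_a": 0.28, "group_b": 0.04}
THRESHOLD = 0.10  # employer rejects when estimated record-probability exceeds this

def hiring_rate_for_clean_applicants(group, can_ask, n=100_000):
    """Fraction of applicants *without* a record who get hired."""
    hired = clean = 0
    for _ in range(n):
        has_record = random.random() < BASE_RATES[group]
        if can_ask:
            # Direct observation: only the record itself matters.
            p_record = 1.0 if has_record else 0.0
        else:
            # Question banned: fall back on the group base rate as a proxy.
            p_record = BASE_RATES[group]
        if not has_record:
            clean += 1
            if p_record <= THRESHOLD:
                hired += 1
    return hired / clean

for group in BASE_RATES:
    asked = hiring_rate_for_clean_applicants(group, can_ask=True)
    banned = hiring_rate_for_clean_applicants(group, can_ask=False)
    print(f"{group}: asking allowed {asked:.2f}, asking banned {banned:.2f}")
```

With the question allowed, clean applicants from both groups are hired at the same rate; with it banned, clean applicants from the higher-base-rate group are hired at a rate of zero. The real effect was of course less extreme, but the direction matches the study quoted above.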
Sexism, racism, and all of the other -isms are built around the concept of stereotyping. We have a belief about a group, and we allow that belief to influence how we treat the individuals within the group. When the original belief is false then this is clearly a problem. When the belief is true, we must morally fall back on treating individuals as individuals and not members of the group: we look at each person’s height individually instead of banning all women from carnival rides. Letting the stereotype trump the individual is where overt, first-class racism, sexism, etc. all come from. Fortunately, most people don’t behave like that.
Stereotypes are just a form of categorization, a layer of abstraction we build on top of the world. They are not intrinsically evil; they are not even merely a useful mental tool, but a necessity: categorization is how our brains make sense of the world with the limited power at our disposal. Calling that process immoral would be absurd. Yet it’s difficult to shake the feeling that those employers who rejected black applicants for fear of criminality must have been racist somehow.
Employment Opportunities in Consentinople
In Consentinople, we built a city where consent and freedom reigned. Outcomes by gender showed a difference that might have been concerning, but we decided it was OK as long as opportunity by gender was equal. Unfortunately, the effects of stereotyping mean that opportunity is no longer equal there either. Employers are naturally concerned by women’s aggregate preference for child-rearing, and the related opportunity costs to the business around parental leave.
Now, lest you think the people of Consentinople are secretly sexist after all, they are quite aware of the risks of stereotyping and imperfect information. As such, the citizens of Consentinople agreed that employers will ask all potential employees (regardless of gender) about their future plans for children. This almost works; men and women who have no such plans are treated equally, and men who plan to take on child-rearing duties are penalized the same as similar women. However, it isn’t enough. Since women on aggregate express that preference more than men, it still ends up statistically hurting their employment opportunities.
A related issue in Consentinople is that people tend to weakly gender-segregate their social lives; men have a slight preference for hanging out with men, and women with women. In any given individual this is a perfectly legitimate preference that Consentinople respects, but in aggregate it gives success a kind of gendered momentum. The majority of hiring managers were already men in Consentinople even when opportunity was equalized, and when they look to fill a position they naturally look to their own social network first. Even though they give equal consideration to all candidates regardless of gender, the result is still a slight edge in the employment rate for working men.
The final outcome is that Consentinople isn’t really a whole lot better than our real world. Even when every individual follows their legitimate preferences and everyone has perfect information, we still end up with a society where women do not have the same employment opportunities as men. The silent majority of women, quietly expressing their individual preferences for child-rearing and traditional gender roles, still end up harming those whose preferences are different. By any feminist definition, this is ethically untenable.
Feminism in the Real World
The ethical problems with this vision of individual choice make it a questionable justification for any relationship. Perhaps fortunately, it hardly matters, because it’s so controversial in the real world. Consider the recent blow-up around Emma Watson’s photo shoot, or the controversy over the new Wonder Woman movie. Go back a little further and you’ll find feminists complaining about Beyoncé, or basically anybody else you can think of. If people actually believed in individual freedom, in choice and consent, then these would be non-issues. The whole premise of that position was that if somebody wants to shave or not-shave their armpit hair, it doesn’t matter; they should be free to do either.
Instead, modern society shames people for being insufficiently feminist. The world immediately piles on when somebody does so much as express a preference about the meaning of a word. Word definitions are something for which there really is no right answer, and arguing over them is completely unrelated to actually supporting the principles in question. Whereas a hundred years ago women were shamed for being too modern, now women are shamed for not being modern enough. We do not live in Consentinople.
Through an academic lens, feminism looks like a cultural norm against cultural norms: a global preference for individual preferences. In the real world, feminism looks like any other specific set of norms. Where before it was a positive norm to shave your pits, now it’s a negative. While historically there were norms against women managing money, now there are norms against women letting men take care of their finances. We can argue all we want about whether the new norms are better than the old, but that’s not the point. The point is that no matter what norms you choose, this looks nothing like the academic, consent-driven feminist doctrine that everybody preaches; in that world, there are no norms to begin with.
It isn’t really surprising, either. A “cultural norm against cultural norms” is at the very least confusing, and definitely leaves room to be interpreted as self-contradictory. It’s also just plain impractical. Everyone admits that cultural norms shift over time, but they do not simply disappear. People expressing preferences in aggregate are what build our cultural norms in the first place, and even Consentinople has that. Even if we wanted to remake the world in Consentinople’s image, human beings are not wired to live in a norm-free society.
As I implied in the title, modern romantics are in a hopeless bind. Our feminist ethics are fundamentally incompatible with our desire for a traditional relationship. The philosophical escape-hatch provided by freedom-of-choice academic feminism doesn’t actually resolve the ethical issues, and certainly doesn’t resolve the practical ones. We are stuck with two paths, neither of which is appealing.
In the first path, we decide that feminism as an ethical philosophy must naturally trump any simple personal preference. This leaves us with a further decision to make: should we simply declare celibacy, or try and make do with a relationship that is unfulfilling but at least potentially ethical? In the second path, we decide that our preferences are key, which again presents a follow-up choice. Do we ditch feminism as a philosophy, claiming it is impractical, or do we try and live with the shame and constant cognitive dissonance of being in a relationship we don’t really believe in?
At the end of the day, practicality prunes some of the choices for us. Abandoning feminism would be social suicide, however philosophically appealing it might be. Living with the cognitive dissonance is possible for some, but it takes a special mindset to be able to ignore that nagging feeling once you’re aware of it. This leaves us with celibacy and making do, and of the two, making do definitely feels less insane.