Superb Owl

Mix Math and Morality in Moderation

Quantifying morality is helpful, but we need to be careful

I’ve always been drawn towards Utilitarianism. The idea of quantifying morality, of being able to sort my options by the amount of good or harm they do, is appealing—especially given my mathy background.

But I frequently find myself on the other side of the debate. Utilitarianism inspires near-religious absolutism in some believers, with its strongest adherents claiming it’s the One True Moral Philosophy. And when Utilitarian conclusions come into conflict with other sources of moral authority—intuition, deontic imperatives, notions of virtue, tradition—the most hard-core Utilitarians don’t back down.

And so I often find myself pulling in the other direction, away from attempts to shoehorn moral decisions into rigid mathematics and rational deduction. I feel like a Socialist surrounded by Radical Communist Revolutionaries—I appreciate their inclination, but their righteous fervor scares the shit out of me.

The quote I discuss below somehow agrees with me, while simultaneously being a perfect example of the absolutist mentality that terrifies me.

The Strawman

In Less Utilitarian Than Thou, Scott Alexander takes on a strawman argument often leveled at Utilitarians—namely that they’re willing to do evil things in service of a greater good. He lists out several counterexamples, most of which show that he, as a Utilitarian, is a reasonable human. He knows that he shouldn’t dox people on the wrong side of an issue, or block traffic to support his favorite cause—no matter how consequential that cause might seem. This is all great, and exactly what I’d expect from Scott.

But then he turns it around, and builds his own strawman for Anti-Utilitarianism:

So why do people think of utilitarians as uniquely willing to do evil for the greater good, and of normal people practicing normal popular politics…as not willing to do that?

I think people are repulsed by the idea of calculating things about morality - mixing the sacred (of human lives) with the profane (of math). If you do this, sometimes people will look for a legible explanation for their discomfort, and they’ll seize on “doing an evil thing for the greater good”: even if the thing isn’t especially evil, trying to achieve a greater good at all seems like a near occasion of sin.

I find this blithe dismissal of Anti-Utilitarianism surprising.

I find it especially surprising given that Scott wrote one of my favorite arguments against the over-mathematization of morality. In his review of William MacAskill’s What We Owe the Future, he takes on the Repugnant Conclusion (a Utilitarian thought experiment, which paradoxically implies that we should trade a world with a few very happy people for one with billions of barely happy people):

Maybe MacAskill can come up with some clever proof that the commitments I list above imply I have to have my eyes pecked out by angry seagulls or something. If that’s true, I will just not do that, and switch to some other set of axioms. If I can’t find any system of axioms that doesn’t do something terrible when extended to infinity, I will just refuse to extend things to infinity. I can always just keep World A with its 5 billion extremely happy people! I like that one! When the friendly AI asks me if I want to switch from World A to something superficially better, I can ask it “tell me the truth, is this eventually going to result in my eyes being pecked out by seagulls?” and if it answers “yes, I have a series of twenty-eight switches, and each one is obviously better than the one before, and the twenty-eighth is this world except your eyes are getting pecked out by seagulls”, then I will just avoid the first switch. I realize that will intuitively feel like leaving some utility on the table - the first step in the chain just looks so much obviously better than the starting point - but I’m willing to make that sacrifice.

This is a good Anti-Utilitarian argument! If a particular Utilitarian conclusion betrays your moral intuition, you can just ignore it, and find some other argument (“some other set of axioms”) that does support your intuition.

Scott clearly understands that Utilitarian ethics is useful, but not a final solution to the problem of morality. So I think he and I are philosophically aligned here—even if he continues to pull in the Utilitarian direction, and I find myself pulling against.

Mixing Morality and Math

Rather than make a broad attack on Utilitarianism (and its softer cousin, Consequentialism), I want to specifically address the problems that arise when we try to quantify morality.

Again, I’m generally in favor of leveraging math where it makes sense—mathematical reasoning is an essential part of moral reasoning. But while it can help us navigate particular decisions, it obscures countless assumptions about who is affected, what they feel, and what well-being really means.

Quantification

How do I measure my happiness against yours? How do I even measure my own happiness?

This is the first major problem of Utilitarian ethics. We need a way to quantify how “well off” everyone is under every possible outcome.

Sometimes we can get away with an externally observable metric. We might measure how often people get sick, or how wealthy they are. We might ask them to self-report their happiness level, or their satisfaction with different areas of life. We hope these measurements roughly correlate with people’s actual well-being.

But the correlation between externally observable metrics and the internal sense of happiness is hard to determine—especially when we start trying to guess at what animals feel. Even when it comes to fellow humans, we’re catastrophically bad at estimating how much pain they feel.

Even if we do come up with a good observable correlate of happiness, we run into trouble. Trying to maximize a single metric always leads to problems.

If you’re maximizing X, you’re asking for trouble by default. You risk breaking/downplaying/shortchanging lots of things that aren’t X, which may be important in ways you’re not seeing. Maximizing X conceptually means putting everything else aside for X - a terrible idea unless you’re really sure you have the right X.

…EA [a Utilitarian movement] is about maximizing how much good we do. What does that mean? None of us really knows. EA is about maximizing a property of the world that we’re conceptually confused about, can’t reliably define or measure, and have massive disagreements about even within EA. By default, that seems like a recipe for trouble.

—Holden Karnofsky, EA is about maximization, and maximization is perilous

There are better and worse ways to quantify utility, but it’s important to remember that every observable metric is only a correlate, and that the metric we pick is always a reflection of our own values—not necessarily the values of the beings we’re measuring.

Information Overload

Once we’ve settled on a framework for quantifying well-being, we need to go about actually measuring it. This is an impossible task.

First, we need to measure not just present-day well-being, but hypothetical well-being under each possible outcome. If we want to be very rigorous, we might survey all affected people about how they’d feel, or look at the effects of similar events in the past.

But in practice, we mostly just guess. Say your mother is sick, and you’re deciding whether to bail on plans with your friend so you can check in on her. You’re left doing a bunch of mental guesswork: how sick did she sound on the phone? would she be happy to see you or would she rather sleep? how bummed will your friend be? will it be easy to reschedule?

Even in this tiny scenario, picking the action that “maximizes utility” is full of guesswork.
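
To make that guesswork concrete, here’s a minimal sketch of the decision as an expected-utility calculation. Every probability and utility value below is something I invented for illustration—which is exactly the problem.

    # A toy expected-utility calculation for "check on Mom" vs. "keep plans with a friend".
    # Every probability and utility here is a guess, invented for illustration.

    def expected_utility(outcomes):
        # Sum of probability * guessed utility over the outcomes we bothered to imagine.
        return sum(p * u for p, u in outcomes)

    visit_mom = expected_utility([
        (0.5, 5),   # she's glad to see you
        (0.5, -1),  # she'd rather have slept
    ]) + (-1)       # your friend is a little bummed about the cancellation

    keep_plans = expected_utility([
        (0.75, 3),  # fun evening, and Mom turns out to be fine
        (0.25, -6), # she needed you and you weren't there
    ])

    print(visit_mom, keep_plans)  # 1.0 vs. 0.75 -> visit Mom

    # Change one guess -- say the friend is more upset than you assumed (-2 instead of -1) --
    # and visit_mom drops to 0.0, so the recommendation flips. The arithmetic is exact;
    # the inputs never are.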

Things get worse when we consider externalities. An infinite number of consequences ripple outward from every action we take. Even something as trivial as the decision to drink coffee over tea has marginal impacts on farmers in Africa, global carbon emissions, local caffeine culture, and your own stress levels. It’s impossible to bring all the consequences of our decisions into the equation.

Typically we just stop measuring at some point, and assume that everything else zeroes out. But which consequences we include in our calculus, and which we push out to the margins, is again an arbitrary choice, and a reflection of our own values.
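
The same kind of toy arithmetic shows how much hinges on where we stop counting. Here’s a minimal sketch of the coffee-over-tea decision; the consequences and numbers are, again, invented purely for illustration.

    # The coffee-over-tea decision, scored twice: once counting only the effects we notice,
    # and once adding a few of the "marginal" consequences. All numbers are made up.

    direct_effects = [3]                # you enjoy the coffee
    marginal_effects = [-0.5, -1, -2]   # farmer's margins, carbon, your sleep tonight

    print(sum(direct_effects))                      # 3    -> clearly worth it
    print(sum(direct_effects + marginal_effects))   # -0.5 -> clearly not

    # Neither total is "the" answer; the verdict is decided by where we drew the line.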

Rationalization

Which brings me to the most insidious issue with quantifying morality: it allows us to launder our selfishness, and present it as virtue.

Pick any villain, any catastrophically evil person, and they’ll most likely tell you their actions were justified, even righteous. We have an immense capacity for rationalizing our self-interest. Recognizing and grappling with this is a never-ending moral responsibility.

The worst thing we can do is abandon that responsibility, and decide that our moral judgements are unassailably correct.

That self-righteous conviction is most closely associated with religious fundamentalism—God Himself gave the instruction!—where it not infrequently inspires mass murder. But when we hold up our moral judgements as mathematically derived, as the result of hard-nosed logic, we’re in danger of being possessed by the same self-certainty as a religious zealot.

To bring out an obvious example, here’s Caroline Ellison describing Sam Bankman-Fried (SBF), who is currently awaiting sentencing on charges of fraud, conspiracy, and money laundering:

He said that he was a utilitarian...he believed that the ways people tried to justify rules like “don’t lie” and “don’t steal” within utilitarianism didn’t work...the only moral rule that mattered was doing whatever would maximize utility.

This is precisely the form of Utilitarianism that Scott criticizes above—the idea that it’s OK to do evil things in support of a greater good. But you don’t need to be an SBF-level extremist to fall into the trap of rationalization.

Rationalization is made possible by the immense ambiguity in every moral decision. We assume we know what other people feel, and how they’ll feel in the future. We decide which consequences are important enough to factor in, and which are too remote.

All of this is natural and unavoidable. Moral reasoning is inherently difficult and ambiguous. Our merely finite brains will never overcome this problem. And that’s OK—we’ll do our best, and make mistakes along the way.

The biggest problem with Utilitarianism is that it sweeps all that ambiguity under the rug. It pretends there are right and wrong answers, and that those answers are knowable.

By wrapping morality in a veneer of mathematics, we make it look like a science. And we trick ourselves into thinking that we’ve reached the “correct solution”, rather than a merely plausible one.

Mathematizing ethics allows us to launder our biases through small assumptions, then hold up a self-serving conclusion as mathematically derived. A bad faith actor might use this to trick others into acting against their own interests. But more dangerously, we might trick ourselves into acting against our conscience.

Without proper grounding in other approaches to morality, we’re in danger of ending up like SBF: sticking to a logically derived Utilitarian conclusion, even when it betrays every moral intuition.

The Sacred and Profane

Again—again!—I want to say that I’m all for Utilitarian approaches to ethics. But we need to remember that Utilitarianism is only an approach, not a solution to the problem of morality. Other approaches have their place.

Many of us squirm when confronted with the inherent squishiness of moral philosophy. The idea that there are no right answers, that we have to just do our best and live with the ambiguity, is terrifying. It’s especially hard on those of us who like to think in terms of systems and rules—we see the chaos and instinctively try to put things in order. It’s a valuable instinct, so long as we remember that too much order is just as harmful as too much chaos.

Scott accuses Anti-Utilitarians of being uncomfortable with “mixing the sacred (of human lives) with the profane (of math).” I think he’s right. But this instinct, just like our instinct to create order, is valuable in moderation. We ignore it at our own peril.
