38 Comments
May 3 · Liked by Sarah Constantin

The whole Pluto thing above seems a bit confused to me.

I would say that improved classifications *are* an advance, just not an advance in *knowledge*. After all, they do involve an element of convention and arbitrariness! Which is to say, they involve *decisions* -- and like any decision, they involve *tradeoffs*. An improved classification may be an advance, may be better on *net*, but it won't usually be better in *all* respects. Advances in *knowledge* do not share this feature; they do not involve tradeoffs or decisions!

So I don't see any problem with saying, yes, the new way may be better on net, but also it is a decision, not an advance in knowledge, and so we should keep in mind also its disadvantages and the advantages of the alternatives. Decisions are worth reconsidering occasionally.

And actually, while this isn't really the point, on the object-level question of how best to classify Pluto: I was convinced last year that Pluto (along with lots of other things -- although not *hundreds*, I'm pretty sure; more like dozens, so I think you're making a factual mistake there) really *should* be considered a planet. I wrote about it on DW here: https://sniffnoy.dreamwidth.org/572565.html (Note that this doesn't involve defending a 9-planet list, which is what I assume is meant by "2005 solar system ontology orthodoxy", because that is pretty indefensible, nostalgia being about the only advantage to it.)

May 3 · Liked by Sarah Constantin

I think Pluto being a planet in a way strengthens rather than weakens the example. Insofar as antinormativity is mistrust of normativity, having a whole bunch of normativists attack people for considering Pluto a planet, when really it makes sense to consider Pluto a planet, is an existence proof that it happens.

OTOH it makes the nostalgia point less relevant, so YMMV.


Much of this post is trying to present arguments for "antinormativity" while also claiming it's a thing that resists being argued for.

Of course, if someone really doesn't think antinormativity wants to be argued for, you can ask: what is the anti-normative position that claims itself to be true while also claiming it can't be argued for?

For me, I would say it's the acknowledgment of sin along with the acknowledgment that the sin is not about to change.

Like suppose I want to watch a movie.

I may know that I should be working. But that doesn't change that I want to watch the movie.

To me the antinormative does not respond "watching the movie is needed for my mental health" or "all work and no play makes john a dull boy" or "a work ethic that is too strong is bad for society".

Instead, the antinormative responds with nothing and just watches the movie. In other words, being antinormative in this sense would not mean "arguing that watching the movie is in some sense better than working"; being antinormative just means watching the movie anyway and acknowledging that I will watch other movies in the future.

If you were to ask "why will you do that in the future if you don't believe it's better," the response is just "I want to."

May 3 · Liked by Sarah Constantin

Recently I've been thinking that rationalist normativity doesn't take "problems are long-tailed" into consideration enough. When combining these thoughts with what you said, I get the following thesis:

Usually there's only a few things that are highest priority to work on. Once you find a solution to them, you can take away a more general lesson about what to do/not to do, to be better able to solve them in the future. This lesson likely won't be a complete ranking of bad to good, because it can only guide you on problems that are fundamentally similar to the problem you took the lesson from. This is basically well-functioning normativity.

But if the lesson you take away is wrong, too broad, or applied in the wrong way, or if you encounter some case that is unusual or maybe even uniquely pathological, the normativity can cause conflict, or even problems much worse than the ones it was supposed to address.

Thus, if one applies the very same normativity-generating principles to the cases of excessive normativity, one gets anti-normativity as a normative principle. And it can actually be perfectly appropriate to have.

The place where it gets in trouble, like with normativity, is when it's too much or wrong or applied to the wrong things. The most classic example is if you've got a group full of people who take antinormativity to be a high principle. If they do something bad then it can be basically impossible to correct them.

I suspect problem/opportunity-centric stuff (aka case studies, I guess) to be pretty central to making progress with these things.

May 3 · Liked by Sarah Constantin

It does feel like I'm doing a reinterpretation of your post in a normative rather than anti-normative frame. I'm not sure whether it's good or bad that I'm doing this reinterpretation.

I guess the way I'd say it is that I feel able to rationally account for abnormally much stuff, so I do trust the normativity of rationality, for myself. However as I am unusually quick to see the rationality in many different things, and because I have examples of where rationality went badly in the past (including my own rationality), I can see why others would not trust rationality.

But being able to see why others would not trust rationality does not directly make *me* mistrust rationality, so it's hard for me to feel motivated by anti-normativity applied to rationality, as opposed to by integrating the anti-normative concerns into normative rationality.

I think this has the opportunity to be productive in allowing some rationalist overreach to be fixed, but also doesn't fundamentally solve or fully empathize with the anti-normative mistrust of rationality? And has the potential to act as justification for expanding the domain of rationality.


Under this model, it's possible to support normativity, anti-normativity, and anti-anti-normativity, each in different appropriate and carefully considered contexts.

Potentially meshes well with "Against responsibility": http://benjaminrosshoffman.com/against-responsibility/


I would sort of say that I believe internally, in principle, in normativity (in the definition/framing you've given here). But in most cases/contexts/communities I will push at least somewhat towards anti-normativity, externally.

Because I think most people are catastrophically bad at separating what they think is good to do/believe, from what they have evidence is good to do/believe, from what there is adequate common knowledge is good to do/believe, from what they should try to force other people to do/believe. And it's much, much easier to push "you must leave space for other people to be wrong or bad", as a philosophy, than "you must always act with the understanding that you yourself might be wrong, and moderate your actions accordingly."

As a norm for a community, the first one is much less brittle. People are really catastrophically bad at the second one. (Me included, I'm sure, despite my best efforts.)

May 3 · Liked by Sarah Constantin

Because goodness and badness are not actually properties that inhere in the world independent of human valuation, normative thinking is necessarily a shortcut from something you've already decided is valuable or harmful to something else you're trying to decide about. Things aren't good per se, they're only good-for, and people can have a lot of disagreement (social, internal) about how to weigh the various things they value as ends and how to go about attaining them. Applying a normative label is therefore taking a conclusion for granted, pushing it further down the chain of contingencies--this is the point you're explicitly arguing for (if something is good and something else is bad you should do the good thing instead of the bad thing) and to the extent that people agree that this is how you should reason, they should be very careful about what they are willing to call good and bad! If "good" is just a way of saying "the thing that should be preferred," and people are not ultimately disagreeing about the logic of tautologies, all the action is happening under the labels. I think this is what you're getting at in the Unsilencing section:

>That’s pointing at the same phenomenon I’m talking about. To get rid of inner conflict (like a phobia), you have to actually weigh the pros and cons, let them balance against each other. No compartmentalization. No taboos. No labeling something “irrational” instead of letting it speak for itself.

But compartmentalization and taboo is all the juice you'll ever squeeze out of normativity itself. I would encourage you to back up further, from "irrational" to "bad".

Normative thinking is powerful; people really want to reason as though goodness were something you could reach out and touch, and normatively charged language is disproportionately persuasive. People are forever losing track of what they actually care about because they are chasing after abstractions. A nice therapy technique is reframing a "should statement", like "I should do my chores," to an if-then statement, like "If I do my chores, the house will be clean and I'll be more comfortable." "Shoulds" contain "goods" just like, as you say, "goods" contain "shoulds", and taking the specter of morality out of something like doing your chores and refocusing on the actual consequences can cut through some of the psychological knots people tie themselves in. There's a temptation to say that we're quietly keeping track of what we mean by good and bad, just and unjust, etc., and only using those terms when everyone agrees on premises and values, but I don't think we are. I think we're trying to save on brain labor and position ourselves well (socially, internally) by turning complex problems into games of goodness basketball.


Jesus, life is not really this complicated


No, it’s not this complicated; it’s more complicated.


I'm not very normative, but I think my reasons are a bit different from the ones you list. Fundamentally, I just don't think it's possible to reduce morality or even preference to well-understood rules. We don't know exactly what's right and we can't even explain all of the things we do know.

My education was in statistics, and I do game design as a hobby. In both fields, you are repeatedly forced to confront the limits of your ability to know and explain things. Collecting and interpreting data is really hard, even in controlled conditions, and you can't ever write perfect rules even when you're literally inventing your own world.

With that in mind, strong normativity seems...not wrong, exactly, but mostly meaningless. If I can never be more than 99% confident in anything, why do I care about whether the edges of morality / preferability are hard and sharp or soft and fuzzy? Wouldn't the difference between "smart is always better than stupid" and "smart is almost always better than stupid" be lost in the noise created by my inability to know exactly what's smart and what's stupid?

You're right that mistrust is a big part of this, but it's not "you might be malevolent" mistrust. It's "everyone is always a little bit wrong about everything" mistrust.
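The "lost in the noise" point above can be made concrete with a quick Monte Carlo sketch. The numbers here are my own illustrative assumptions, not the commenter's: suppose your judgment of which option is "smart" is itself only 90% accurate, and compare a world where smart is *always* better against one where it's better 98% of the time.

```python
import random

random.seed(0)

def hit_rate(p_smart_better, p_classify_correct, trials=100_000):
    """Estimate how often 'do the smart thing' actually picks the
    better option, when the smart choice beats the stupid one with
    probability p_smart_better and we misidentify which choice is
    smart with probability 1 - p_classify_correct."""
    wins = 0
    for _ in range(trials):
        smart_is_better = random.random() < p_smart_better
        picked_smart = random.random() < p_classify_correct
        # We 'win' whenever the option we picked is the better one,
        # whether we picked it because it was smart or by lucky error.
        if picked_smart == smart_is_better:
            wins += 1
    return wins / trials

# Both rules pass through the same 90%-accurate judgment of what's smart.
always = hit_rate(1.00, 0.90)   # "smart is always better"
almost = hit_rate(0.98, 0.90)   # "smart is almost always better"
print(always, almost)
```

The gap between the two rules comes out around 1.6 percentage points, while the misclassification noise costs about 10 points either way, which is the commenter's point: the hard-vs-fuzzy-edges question is swamped by your inability to tell smart from stupid in the first place.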


This relates to cheat days for diets. Maybe there's some good metabolic or quality of life reason to, for example, eat very low carb five days a week and take the brakes off on two days a week. Or maybe the diet is too restrictive during the five days, and the cheat days are needed to avoid some problem, and the diet could be followed all the time if it were looser. I haven't seen anyone say that if a diet requires cheat days, it's too strict.

author

I've definitely heard "if a diet requires cheat days it's too strict"!

May 3 · Liked by Sarah Constantin

I'm glad someone is saying that. I hope it gets into the general culture.

May 6 · edited May 7

You misunderstand "eye for an eye..." The idea isn't that justice is inherently cruel - it's that everyone will perceive the injustice done to them as greater than the injustice done to the other - especially in the context of racial justice.


If the person who took someone else’s eye just accepts losing one of their own eyes as a fair punishment, and everyone agrees it’s fair and the conflict is over, sure, the world doesn’t go blind, but how often do you see people behave this way? Usually, either the original aggressor or some other member of their clan won’t accept they deserved to lose an eye, so more eyes will be demanded to punish the punishers, and so on.

Ironically, it seems the original motivation of the law was to restrict the punishment to _only_ an eye for an eye: <https://en.m.wikipedia.org/wiki/Eye_for_an_eye>.

People have always wanted more eyes.


Great piece. I think part of not wanting to understand other people's point of view isn't just not seeing the point of doing so, or not thinking the effort is worthwhile, or not wanting to take the chance that you would have to admit to yourself that you're wrong - it's contamination. If you really see through someone else's eyes, you become them; and that could be disastrous even if doing so allows you to correct one specific error you've fallen into.

Out of the crooked timber of humanity no straight thing is ever made, and if someone else is warped in ways you can't see, or in ways that could only ever be seen in occult dimensions beyond any human sight, then maybe their leprosy can spread to you if you touch them.

You read Marx, you think he's got some good points, then ten years later you're killing anyone who wears spectacles. You read Peterson, you tidy your room, you live off raw meat and despise women. Less Wrong, mosquito nets and clinical trials, immanentise the AI apocalypse. Jesus Christ, let's be nice to the poor, the Dark Ages descend for a thousand years.

And suddenly, rather than words, comes the thought of high windows; the sun-comprehending glass, and beyond it the deep blue air, that shows nothing, and is nowhere, and is endless.


I really like this post, but I feel like it's glossing over something in the neighborhood of the alief/belief distinction.

Like, yes, the dog might bite you, but you think the probability is low - like 1%. And you're willing to take that risk and pet the dog at up to 5%. But it *feels* like the probability is 50% - the memory is so visceral and available that merely *knowing* that your felt probability is incorrect isn't enough to overcome it. There is a part of you which just isn't really capable of updating like that, and that part of you has a lot of influence over your decision making.

So you've accepted the fear, and most of you wants to take the risk, but part of you is just completely off in its risk analysis. That part of you is doing something irrational. Even after you've accepted your fear and considered the risk, you still can't do what most of you wants to do, because part of you is straight up incorrect about what's going on.


I think glossing over the belief/alief distinction makes sense because aliefs act the same way as Bayesian beliefs, and when aliefs and beliefs contradict each other, often it's the aliefs rather than the beliefs that are correct.


We don't talk about Pluto, no no no...

https://youtu.be/YJPHK5NNtpQ?si=CJ5PDtFtD4m6s5tj


> True normativity is “always do your best”, not “always do what you can prove to be your best to a skeptical stranger” — the latter really would be an unreasonable expectation.

Fascinating, this reminds me of Popper's answer to the question of how and why science works (according to David Deutsch's retelling, anyway). The idea is that philosophy of science took a wrong turn and got captured by an XY problem (https://en.wikipedia.org/wiki/XY_problem): we want to know which scientific theories to use; obviously we should use scientific theories that are (more) true; okay, how do we determine that a theory is true? And it all comes crashing down because of the problem of induction and all that.

Popper (according to Deutsch) said that we should back up and look at the original question, and then accept that we should use the best scientific theories we have. There's a lot to discuss about what "best" means and how best to determine it and so on, but there's one heck of a crawling-in-broken-glass distance from "how do we prove that this stuff is true" to "how do we prove that this stuff is better than the other stuff we have". By the way, falsifiability is then just a requirement for a theory to be replaceable with a better one; no other deeper philosophical roots for it.

So, yes, "I'm doing my best, even though I can't prove to a skeptical stranger that it's *objectively* best, it's still the best I can do given my information and limitations and everything" is how I'd phrase it.


Fascinating, but there is a loophole that allows you to be Normative and Anti-Normative at the same time. Perfectly balanced, as all things should be ;)

Believe in Normativeness as an abstract concept, a Utopia that can't be reached. Make it the realm of God: striving for the good is honorable, but claiming you have found it is blasphemous. Follow the path of the light, with enough wiggle room for the shadow, as you are humble enough to acknowledge that no one can ever know enough about the light to definitely say that this shadow is not part of it.


You're right: I *don't* trust you, I don't expect you to correctly prioritize the things I care about, and I don't think you're very much like me.


I think normativity only really makes sense if you either decide for some reason to treat yourself as a coherent singular entity, or happen to have a remarkably stable preference hierarchy.

Personally, I have competing desires, and no clear metric by which to determine which of those desires is most worth fulfilling, or which ones I will most regret (not) fulfilling.

Should I seek stability and commitment in my somewhat boring long term relationship?

Should I play the field and try to find as many of the most genetically fit partners I can to bear my offspring?

Should I just go gay?

Try to topple the government of a third world country?

Become a scientist?

Maximize for Impact? Or interest? Or funding? Or fraud?

Join a monastery?

Join a cult?

Determine the optimal combination of drugs and dosages by which I can live for the longest time with the greatest sense of euphoria?

Figure out what exactly a "moral obligation" is, and spend every waking minute either fulfilling the ones I have or carefully searching to make sure there aren't ones I'm unaware of?

How do I choose between these options?

Maybe just wing it?


All this is to say: I think the shame angle is missing the mark. You can be completely open with yourself after having fully introspected, and even after casting away shame, or at least discarding all of the options you can't get away with, you will still find yourself with multiple things you may wish to choose from and no mechanism by which to value one option over another.

If someone butts in to tell me that raising kids is better than the monastic life, or that donating as many of my organs as I can is better than starting a cult, then I must politely ask them to fuck off.


As for truth... Outside of formal systems, that is mostly a word people use to mean "minimally false with respect to some properties of interest".

But this requires properties of interest. And that requires you to have particular interests in particular properties. Which is begging the question.
