What "Matters"?
Why care about what we care about?
This post is a follow-up to Hedonism Revisited.
Ethics as Reflection on Behavior
“What will I do?” is a question that every agent with behavioral flexibility must implicitly answer.
We can speak of something like an agent’s behavior, or policy, or decision function, as a model that describes what it will do in each situation that it may encounter. Every animal has behavior; so does every computer program. And, of course, we humans have behavior too.
Humans are, at least sometimes, reflective agents. We are able to ask the question “what should I do?”
And we are able to reflect interpersonally; we can tell each other, “you should do this.”
“What should I do?” is the basic question of ethics.
Before reflection, you already have a policy (aka strategy, decision function, behavior pattern, etc). After you reflect, your policy may change. What can reflection and investigation tell you that will change your policy?
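This picture of a policy revised by reflection can be given a toy sketch in code. Everything below — the situations, the actions, and the particular "insight" — is invented purely for illustration:

```python
# A policy maps each situation an agent might encounter to an action.
# All situations, actions, and the "insight" here are illustrative
# inventions, not taken from the post.

def policy_before(situation: str) -> str:
    """The agent's default behavior, prior to any reflection."""
    defaults = {"hungry": "eat a snack", "tired": "scroll phone"}
    return defaults.get(situation, "do nothing")

def reflect(policy):
    """Reflection maps an existing policy to a (possibly) revised one.

    Here the insight is "scrolling when tired works badly"; the revised
    policy differs from the old one only where the insight applies.
    """
    def revised(situation: str) -> str:
        if situation == "tired":
            return "go to sleep"    # behavior changed by reflection
        return policy(situation)    # everywhere else, unchanged
    return revised

policy_after = reflect(policy_before)
```

The point of the sketch is just that reflection is an operation *on* a policy: it takes the behavior you already had and returns a new one, usually differing only where the new consideration bites.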
To the extent that thoughts can ever tell you what you “should” do, this is what that means. “I was doing X, but then I thought about it, or learned something new, and I realized I should do Y instead.”
If we’re really trying to start from scratch, avoiding as much as possible any starting assumptions about what one “should” do, we still have a sort of minimal, ultra-broad definition of ethics.
Ethics is the study of the ways for conscious reflection to revise behavior.
Ethical insights are normative. They tell you about some courses of action being “better” or “worse” than others. When you are convinced that something is “better”, you will be at least somewhat more motivated to choose it over a “worse” option.
This kind of “ethics” actually includes lots of things most people don’t ordinarily think of as ethical or moral in nature. Decision theory, game theory, or strategic analysis are also investigations about “what should an agent do, to achieve its goals”. An insight that it is more prudent or effective to do things one way rather than another would count as an ethical insight under this broad definition.
Whatever you call it, though, we need a name for the broad category of normative thinking about behavior. We certainly do think normatively, and use the words “should” or “good/bad”, when we’re talking about matters of prudence/effectiveness that don’t normally get called ethical.
“You should use a serrated knife to cut bread; it’s better at cutting through crusts.” We tell ourselves and each other what we “should” do, using similar words and similar types of explanation, for both “prudential” and “moral” concerns, so maybe these things aren’t as separable as is commonly assumed.
Why Motivation Matters for Ethics
It would be rather futile to come up with some theory of what “should” be done that had no power to move anybody to act accordingly. In order to be efficacious, any ethical insight must make use of some motivation that we already have.
It would be just as futile, on the other hand, to have a “null theory” of ethics that says “whatever people are already doing is what they should be doing.” (In that case you wouldn’t even need a concept of “should”!)
If ethics is a thing at all, there have to be some insights, some arguments about what you “should” do or what’s “better”, that at least sometimes motivate people to change their behavior.
Which means that the space of ethics is bounded by some empirical facts about psychology.
What does move or motivate people? When does a verbally articulated reason change people’s behavior? These sorts of facts need to ground anything we claim about what people “should” do.
This may seem uncomfortably subjective. “Surely you don’t want to ground ethics in whatever people’s whims dictate! Surely a psychopath who enjoys torture can’t be considered ethically ‘right’ or ‘good’ just because of his psychological quirks! Empirical facts about people’s motivations can’t be the base of ethics!”
But note that I’m not saying that people’s motives or desires or behavior or reasoning can’t be critiqued. (That would be a “null ethics” that has nothing to say!)
I’m saying there’s no point in offering a critique to someone unless it can potentially connect to something motivating for that person.
And it only makes sense to make normative claims about “people in general should X”, when you can connect X to a motivation that is in fact shared by most people, or at least shared by your target audience.
If someone (like our hypothetical “psychopath”) is actually unreachable by any argument that he should change his behavior, then there is indeed no further point admonishing him. We can still intelligibly talk about his behavior being “wrong”, but only in the sense that cashes out to arguing about what non-psychopaths should do — “he should be convicted of a crime”, “we should not imitate his behavior”, etc.
But most people are not totally unreachable, of course. The whole reason there’s any point in telling people “you should do such-and-such”, the whole reason we ever give reasons for the policies or behaviors we advocate, is because people generally can be responsive to considerations “for” or “against” a course of action. That’s what being “reasonable” means!
The whole point of having ethics is that sometimes when we hear we’re wrong, we want to change.
And so, we do actually need to learn about the empirical, psychological machinery of when and how motivation-to-change actually works in humans, in order to “do ethics” effectively.
Caring as Conscious Self-Correction
You can also flip this around:
We “care about”, or “value”, a thing, in an ethically relevant sense, if and only if we would respond to learning of a way to get more of that thing by changing our behavior.
There are things we “want” or “like” that we don’t “care about” in this sense.
If you’re sitting in a diner where a TV playing a football game is directly in your line of sight, you might in an immediate sense “want” to look at the screen; but you might not “care about” the game enough to subscribe to ESPN or even to make sure you’re picking a seat close to the TV.
Only the things we “care about” in the conscious intentional way are ethically relevant, because the domain of “ethics”, of “should” and “shouldn’t”, is the domain of admonitions. We only have an opinion about something’s normativity if we might tell someone else, or tell ourselves, “you should/shouldn’t do it.”
The only things it can make sense to say we “should” do, are things that we might potentially be persuaded to do if we weren’t already doing them. We can only be persuaded to do things if we have values or goals that we “care about” in the sense of being open to learning how to pursue better.
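The “care about” criterion above admits a toy operationalization: an agent cares about X, in this sense, just in case learning a new way to get more of X changes what it does. A minimal sketch, with every detail invented for illustration:

```python
# Toy operationalization of "caring about" X: learning a new way to get
# more of X changes the agent's behavior. Every name and detail below is
# invented for illustration.

def cares(choose_action, learn, new_info) -> bool:
    """Return True iff learning `new_info` changes the chosen action."""
    before = choose_action()
    learn(new_info)
    after = choose_action()
    return before != after

# The diner example: the viewer "wants" to glance at the TV when it's in
# view, but seat choice doesn't respond to learning which seat faces it.
knowledge = {"best_seat_for_tv": None}

def choose_seat() -> str:
    return "nearest empty seat"   # ignores what the agent knows

def learn(info) -> None:
    knowledge.update(info)

does_care = cares(choose_seat, learn, {"best_seat_for_tv": "booth by the window"})
```

Here `does_care` comes out false: the information is absorbed, but behavior is untouched — the diner “wants” the game without “caring about” it in the ethically relevant sense.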
“Wanting” Is Necessary For Self-Correction
In the terminology of the previous post, motivation or “wanting” is the thing that’s directly relevant to ethics.
You don’t get conscious, voluntary behavior change in response to conscious learning if there isn’t a path for the learning to motivate change.
Motivation in its most immediate form is the subjective experience of an urge to take an action, usually a literal muscular movement.
I hypothesize that the brain normally implements voluntary muscle movements (and thus, all behavior) through urges to move.
In other words: voluntary behavior is motivated behavior.
Aside: What’s a Voluntary Movement?
Reflex movements, which are implemented entirely by nerves and the spinal cord without reaching the brain, feel “involuntary” — when the doctor hits your knee with a rubber mallet and your foot pops up, you don’t feel like you “chose” to move. When motor disorders like Parkinson’s disease or Huntington’s disease cause tremors or chorea, those movements also feel “involuntary.”
Voluntary movements have a neurological signature. In voluntary movements (but not involuntary movements like myoclonus or tics) EEGs can pick up a change in electrical potential, called the “readiness potential”, about one second before movement occurs. The “readiness potential” is a good candidate for the “ghost movement” or “will-to-move” that I’ve observed introspectively.
It’s possible to dissociate the “will-to-move” or sense of volition from actual movement. Electrically stimulating the right parietal cortex in awake human subjects can produce the subjective experience of wanting or intending to move, or even the perception that one has moved, without any actual motion; while stimulating the premotor cortex can cause subjects to actually move, without their being aware of the motion or having any sense of a desire or intention to move.
Clinical observations of high-level movement deficits in patients with apraxia after parietal damage have led to the hypothesis that the posterior parietal cortex contains stored movement representations (15, 16). It can be proposed that direct stimulation of the parietal cortex activates such representations. However, the fact that patients experienced a conscious desire to move indicates that stimulation did not merely evoke a mental image of a movement but also the intention to produce a movement, an internal state that resembles what Searle called “intention in action” (17). This finding is consistent with nonhuman primate results suggesting that the posterior parietal cortex harbors a “map of intentions,” with different subregions dedicated to the planning of eye, reaching, and grasping movements (18), and that activity of parietal neurons is highly correlated to processes of motor planning and decision-making (19, 20).
In other words, the “movement representation” or virtual/simulated movement stored in the brain might be the same thing as the “intention” or “will” or “desire” to move, as both are localized to the same brain region (the posterior parietal cortex) and provoked by the same artificial stimulus.
No Reflective Behavior Change Without Motivation
If all voluntary movements, in ordinary circumstances (no motor disorders or artificial electrodes), are accompanied by this subjective sense of “will” or “desire” to move, then conscious reflective behavior change must involve a step where the reflection connects to the will-to-move.
This means that the immediate “urge” actually matters. If you never have an “urge-to-move”, then under normal circumstances you’re not going to make a voluntary movement. If you never make the (mostly voluntary) physical motions that make up a complex action or behavior, you can’t engage in that action.
Abstract thought and long-term planning might be quite removed from immediate urges, but if they are ever to have practical effects, they must connect to those urges.
In particular there must be a path between a (rather abstract or intellectual) thought of “hm, I shouldn’t be doing this” towards actually physically changing my actions. And that path must pass through something we could call an “urge” or “desire” or “motivation.”
This basic insight is pretty much the same as Hume’s:
It is impossible reason could have the latter effect of preventing volition, but by giving an impulse in a contrary direction to our passion; and that impulse, had it operated alone, would have been able to produce volition. Nothing can oppose or retard the impulse of passion, but a contrary impulse…
Thus it appears, that the principle, which opposes our passion, cannot be the same with reason, and is only called so in an improper sense. We speak not strictly and philosophically when we talk of the combat of passion and of reason. Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.
We do not, in ordinary circumstances, act without having an “impulse” to act; neither can we refrain from acting except by a contrary “impulse”. Reason can inform these impulses, reason can be an input into our impulses, but it cannot drive our actions all by itself without going through the impulses.
“Reason is and ought only to be the slave of the passions” — well, that’s a provocative and (I think) nearly backwards way of putting it. Is a chef the “slave” of his line cooks because he gives them orders?
But that provocative framing might be necessary as a counterpart to the popular idea that reason can bypass or overrule or struggle against the passions.
For reason to act at all, it must act through its effects on motivation — including immediate “impulses” or “urges”. You can never, and should never, hope to defeat or oppose your own motivation; motivation is the medium through which literally everything gets done.
Hume particularly wants to point out the existence of “calm passions” which, he says, get confused for reason, but are not themselves examples of “the faculty which distinguishes truth from falsehood.”
Motivations towards, for instance, “kindness to children” or similar consciously endorsed priorities are still motivations. Motivations that are accompanied by a calm feeling rather than stormy extremes of rage or fear are still motivations. It’s all “passion”, even when we are doing eminently “reasonable” things.
In particular, Hume says, reason can easily and immediately change our motivations when we learn that they are based on mistaken assumptions:
The moment we perceive the falshood of any supposition, or the insufficiency of any means our passions yield to our reason without any opposition. I may desire any fruit as of an excellent relish; but whenever you convince me of my mistake, my longing ceases. I may will the performance of certain actions as means of obtaining any desired good; but as my willing of these actions is only secondary, and founded on the supposition, that they are causes of the proposed effect; as soon as I discover the falshood of that supposition, they must become indifferent to me.
Now, I don’t think Hume is actually right that only these kinds of instrumental strategies are subject to rational correction, while our desires for particular outcomes are totally fixed and can’t be informed or changed by rational thought.
But “learning your plan won’t work” is indeed an excellent prototypical example of how reason can affect behavior through motivation. You can instantly lose your desire to carry out a plan when you learn it’s futile.
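Hume’s instrumental case has a simple decision-theoretic rendering: motivation toward an action that is merely a means is the desire for the end, weighted by the believed chance that the action produces it. A toy sketch, with illustrative numbers and Hume’s fruit example filled in with invented details:

```python
# Toy rendering of Hume's point about instrumental desires: motivation
# toward a mere means equals the desire for the end, weighted by the
# believed efficacy of the action. Numbers are illustrative.

def motivation(desire_for_end: float, believed_efficacy: float) -> float:
    return desire_for_end * believed_efficacy

want_fruit = 0.9                                        # desire for the end itself
before = motivation(want_fruit, believed_efficacy=0.8)  # "climbing will get the fruit"
after = motivation(want_fruit, believed_efficacy=0.0)   # "the tree is bare"
```

Note that the desire for the end itself (`want_fruit`) is untouched; only the instrumental motivation collapses to zero the moment the supposition behind the plan is falsified — “as soon as I discover the falshood of that supposition, they must become indifferent to me.”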
Effectiveness, ensuring that plans are actually going to work, is an example of something that people often care about in the self-correction sense, or an example of an ethical value. Arguments about “that won’t work because…” are generally understood (by reasonable people) as considerations against doing a thing.
The details of how reason “connects with” motivation are worth exploring further, but it’s clear that reason does connect with motivation whenever it “works” at all, while reason is not itself motivation.
One way reason might “connect” with motivation is through one or more drives towards self-correction. Hume might call these examples of “calm passions.”
Introspectively, I can perceive impulses to check whether an argument is valid, whether a plan will work, whether something bad will happen if I continue with my action, etc. An urge-to-check or urge-to-correct can be as immediately compelling as any other urge, and it can connect immediate action with more abstract intellectual content like plans, goals, beliefs, or commitments.
“Liking” Can — But Needn’t — Be What We Care About
Experiences of pleasure and pain — or, more multidimensionally, experiences of what things feel like (which typically come with a valence, a sense of liking or disliking the feeling) — don’t always drive action.
But they often do, and they’re relevant to ethics insofar as they do.
Do we care what things feel like?
Clearly yes, but not exclusively, universally, or terminally.
Learning (intellectually) that something feels good or bad will usually affect people’s disposition to do it.
Of course, people do sometimes choose to do things that feel bad or to avoid things that feel good.
But it’s at least intelligible and reasonable to say something like “You shouldn’t do that — it’ll feel terrible!” How something feels is generally at least a consideration to weigh, or a reason for doing it or not doing it.
Affective valence is itself modifiable by learning — what you know can affect how you feel.
Learning something new about a thing can change how you feel about it. The pleasure of a delicious meal can suddenly turn to disgust if you learn it’s full of mouse droppings.
This means that we can “care” about things (like sanitation) besides our immediate subjective experience; and also that we can “care” about things through our subjective experience.
On the other hand, affective valence isn’t necessary for something to be ethically relevant in the same way motivation is. In order for you to reflect on your behavior and change it, you need to be motivated to change; but you can be motivated by things besides how you feel.