/his/ - History & Humanities






A lot of non-utilitarians don’t really see the draw of utilitarianism. To them, we’re a curious bunch. We think it’s great to kill people and harvest their organs (in an idealized case, of course). https://utilitarianism.net/objections-to-utilitarianism/rights/#introduction We care about shrimp. What the hell is going on? Now, I don’t really think caring about shrimp is limited to utilitarians; if anything, https://benthams.substack.com/p/for-a-short-period-of-time-you-can maybe deontologists should care about them more. But I want to try to communicate what I consider to be the core motivation behind utilitarianism.


At the most basic level, I think utilitarianism follows if you think the following two things:

1. Everyone’s interests count equally.
2. Nothing else counts more than someone’s interests.

Note: when I say “counts,” I mean “should influence decision-making.”
>>
>>18282958
Utilitarianism says you should maximize aggregate welfare. If you think everyone’s welfare matters the same amount and nothing matters apart from that, then you’ll think we should maximize aggregate welfare. So you’ll end up a utilitarian.
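To make the structure explicit, here’s a minimal formalization (my own gloss on the two premises, not something from the post). Write $u_i(a)$ for how well off person $i$ is under action $a$. Premise 1 says every $u_i$ gets the same weight; premise 2 says nothing besides the $u_i$ enters the objective. One natural way to cash that out:

$$a^* = \arg\max_{a} \sum_{i} u_i(a)$$

Any rival view has to either weight some $u_i$ differently (denying 1) or add a term that isn’t anyone’s welfare (denying 2).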

Both of these premises strike me as pretty intuitive. It’s not incoherent to deny them, and there are some intuitions that conflict with them, but they strike me as pretty natural moral starting points. Thinking them through helps illustrate what’s weird about non-utilitarian verdicts.

Take, for example, the view that nature has intrinsic value. This view would deny 2. It would hold that sometimes, if you’re deciding between someone’s interests and preserving nature, you should preserve nature. But that’s pretty weird. Nature isn’t a person. It has no thoughts or experiences. If we’re trading off helping someone against helping no one, merely adorning the world with more pretty nature, it strikes me as very odd to prioritize nature. We shouldn’t make one person worse off if doing so makes no one better off.
Note: this isn’t the only reason I reject the claim that nature has intrinsic value. I have other arguments. But I’m just trying to illustrate the structure of the core utilitarian view.

Or take the deontological idea that other people have rights and these should sometimes trump maximizing welfare. On this view, people’s interests are sometimes subordinated to those rights. But this is odd. It subordinates real, actually existing people’s aggregate interests to arbitrary-seeming structures. For example, deontologists will generally say that you should flip the switch in the trolley problem if doing so redirects the trolley from five people onto one. However, they’ll deny that you should push one person off a bridge to save five people.
>>
>>18282961
But in both cases, you’re harming people by the same amount to benefit others the same amount. Shouldn’t they then be treated alike? Whether someone is used as a means doesn’t matter to them. The guy isn’t any more dead if he’s used as a mere means. So deontology strikes me as pretty counterintuitive at the structural level. In an attempt to fit our intuitions about cases, it ends up caring about things that don’t seem important. https://www.goodthoughts.blog/p/three-arguments-for-consequentialism

Now, the kind of deontology that’s least vulnerable to this is the kind that draws the least arbitrary distinctions. For example, if you deny that you ought to flip the switch in the trolley problem, then you can simply think: what matters is that you don’t kill people. You shouldn’t kill people even if doing so has nice consequences.

But this still strikes me as objectionably fetishistic, caring about abstractions like rights more than people. What we should care about is people. If a person dies, it’s not any more tragic if they were killed than simply allowed to die. So if we’re making the choice between more deaths or fewer, I think we should simply choose fewer deaths. If we care about everyone’s life equally and don’t subordinate people’s lives to other things like rights, then we’d support killing one to save five.
>>
>>18282963
Or take, for example, the widespread position that animal welfare is less important than human welfare. I find this judgment pretty unintuitive. Species is not morally relevant. If we imagined a human with whatever traits are said to make animals not matter morally, it wouldn’t seem like their interests barely count. Pain hurts whether or not you’re smart. If your reason to avoid pain doesn’t depend on how smart you are at the time you experience it, then our moral reasons to prevent pain don’t depend on how smart the people experiencing it are.

One way to avoid discounting interests is to imagine that you lived the life of everyone. https://www.galactanet.com/oneoff/theegg_mod.html So, after you died, you lived my life, and the life of the president, and the life of every animal on the planet. Such a scenario seems to intuitively capture what matters in morality: it counts everyone equally and doesn’t count anything else. But in such a case, you’d simply be a utilitarian, as you’d value every moment of your life equally. If you’d care about fish welfare in that case, then that’s because fish welfare matters impartially. But if it matters impartially, then you should care about it now!

The only non-utilitarian concern that doesn’t seem on its face arbitrarily fetishistic is caring more about those you have a relationship with. It doesn’t seem totally insane to think that we shouldn’t count everyone’s interests equally, but instead we should count the interests of our friends and family for more. But still, I think the core utilitarian intuition can help identify why you might be a bit doubtful of this view. If other people impartially matter as much as your family, it doesn’t seem that implausible that you ought to count them equally. I also think the view that we have special obligations faces a number of serious objections. https://benthams.substack.com/p/believers-in-special-obligations
>>
>>18282964
One way to draw out this core intuition is: imagine someone who wasn’t in a relationship with you at some earlier time but was at some later time. It seems odd that you should count their interests more at the later time than at the earlier time. Nothing about them has changed. So how much you ought to count their interests seemingly shouldn’t change.

You might doubt that this gets us all the way to utilitarianism. It won’t automatically tell us how to treat risk, for instance. You might think that a one in a billion chance of a billion utils matters less than a guarantee of one util, even if everyone’s interests matter equally. But Joe Carlsmith’s argument https://utilitarianism.net/guest-essays/expected-utility-maximization/ can help illuminate how this core intuition gets us pretty close to risk neutrality.
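(Just to make the numbers explicit: the two options have the same expected value, $10^{-9} \times 10^9 = 1$ util, so a risk-neutral view is indifferent between them; the question is whether that indifference is right.)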

Imagine you have the following two options:

1. Save one person’s life.
2. Have a 1/1,000 chance of saving 1,000 people’s lives.

Risk neutrality treats these as equal. Common sense does not. But now imagine that the following set-up obtains. A 1,000-sided die is going to be rolled. Each person is assigned a number. You can either give each person a different number (give one person 1, another person 2, etc.) or give everyone the same number (say, 467).

If you give everyone a different number, then it’s guaranteed you’ll save someone. If you give them all the same number, then you have a 1/1,000 chance of saving 1,000 people. But intuitively these seem morally the same. Everyone is, in expectation, equally well-off in both cases. No person has a reason to prefer either set-up. So if what we care about is the people, rather than whether we get psychological comfort from our probability of doing good being above some threshold, then we should value the two set-ups equally.
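For what it’s worth, the arithmetic of the die set-up is easy to check with a quick simulation. This is just a sketch of the scenario as described (the trial count is arbitrary); both assignment schemes save one life per roll in expectation, and each individual’s chance of being saved is 1/1,000 either way:

```python
import random

N = 1000          # people, and faces on the die
TRIALS = 200_000  # Monte Carlo samples (arbitrary)

def lives_saved(same_number: bool) -> int:
    """Roll the 1,000-sided die once; return how many people are saved."""
    roll = random.randint(1, N)
    if same_number:
        # Everyone holds the same number (467, as in the text): all or nothing.
        return N if roll == 467 else 0
    # Everyone holds a different number, so exactly one person always matches.
    return 1

avg_diff = sum(lives_saved(False) for _ in range(TRIALS)) / TRIALS
avg_same = sum(lives_saved(True) for _ in range(TRIALS)) / TRIALS
print(f"different numbers: {avg_diff:.2f} lives saved per roll")  # exactly 1.00
print(f"same number:       {avg_same:.2f} lives saved per roll")  # about 1.00 (noisy)
```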
>>
>>18282969
The other related concern with non-utilitarian moral views, apart from caring about stuff that doesn’t seem morally important, is that the stuff just seems arbitrary, in a way that fundamental morality shouldn’t be. Note: this argument will mostly have force for moral realists. https://benthams.substack.com/p/why-i-believe-in-objective-morality If anti-realism is true, then the moral facts aren’t fundamental; they’re just reflections of what we care about, so it’s not as bad for them to be arbitrary and complex.

Take, as an example, ways of rejecting expected utility theory. One way of doing this is to discount low risks. If you discount low risks (say, those below one in a billion), then you won’t think that a one in a trillion chance of saving 10 trillion people is better than a guarantee of saving one person.
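To see concretely what such a discounting rule does to this example, here’s a toy sketch (the cutoff is the text’s illustrative one-in-a-billion; the function is my own stand-in, not anyone’s actual proposal):

```python
CUTOFF = 1e-9  # the (arbitrary) threshold below which risks are ignored

def discounted_value(p: float, lives: float) -> float:
    """Expected lives saved, with sub-threshold probabilities rounded to zero."""
    return 0.0 if p < CUTOFF else p * lives

# A one-in-a-trillion chance of saving 10 trillion people...
print(discounted_value(1e-12, 1e13))  # 0.0: the gamble is valued at nothing
# ...versus a guaranteed save of one person.
print(discounted_value(1.0, 1))       # 1.0: the sure thing wins
# Without the cutoff, the gamble would be worth 1e-12 * 1e13 = 10 lives.
```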

But this seems super weird as a way for fundamental morality to look. If there are moral facts, as I think there are, it would be odd for them to simply pick out a number and instruct you to discount risks below that probability. The fundamental moral facts shouldn’t be that arbitrary. And having the threshold be vague doesn’t help, because a vague threshold is still arbitrary. What makes the threshold one thing rather than another?
>>
>>18282970
The other standard way to reject risk neutrality is to hold that utility reaches a bound at some point. So as the amount of valuable stuff in the universe approaches infinity, its moral goodness approaches some finite threshold. But why in the world should this be? What determines where the threshold is? It just seems arbitrary: exactly the sort of thing that can’t be part of the fundamental fabric of reality.
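As a concrete illustration (my own example, not the post’s): a bounded view might value total welfare $w$ by something like

$$g(w) = B\left(1 - e^{-w/\lambda}\right),$$

which is increasing in $w$ but never exceeds the bound $B$. The complaint above is exactly that nothing seems to fix $B$ or the rate $\lambda$: any particular choice looks arbitrary.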

Or take the stuff deontologists say goes into which actions you should take, like whether you use someone as a means. The moral facts, these fundamental facts of the universe, are supposed to care about the high-level and seemingly imprecise fact of whether someone is used as a means, a fact whose necessary and sufficient conditions are almost impossible to spell out. Seems nuts to me!

Here’s an analogy: imagine someone proposed a theory of physics that worked differently for organisms than for other stuff, or differently for objects more than four feet long than for ones that weren’t. This would seem weirdly arbitrary; fundamental physical facts shouldn’t latch on to complex and imprecise higher-level properties. It seems similarly weird for the fundamental moral facts to be this way.

I don’t want to give the impression that the argument I give in this piece is the only reason I’m a utilitarian. I think there are a number of serious considerations in its favor, and that the specific non-utilitarian intuitions tend to fall apart upon reflection. https://benthams.substack.com/p/the-ultimate-argument-against-deontology But hopefully I’ve managed to articulate why utilitarianism isn’t so crazy, so that you can see the intuitive draw.

https://benthams.substack.com/p/the-very-simple-case-for-utilitarianism
>>
One thing I've noticed is how people who criticize utilitarianism often do so by saying it breaks down when you take it to its utmost extreme, where for instance the most utilitarian thing would be to instantly destroy the planet, because now there is no more suffering. But we don't critique other philosophies by saying they are ridiculous if taken to the utmost extreme, do we? Of course taking anything to its utmost extreme will make it ridiculous.
>>
>>18282974
You can also destroy the planet to prevent rights violations
>>
>>18282958
Utilitarianism presupposes majority rule. What if someone doesn't consent to that arrangement?
>>
>>18283094
no it doesn’t. utilitarianism is about maximizing aggregate welfare, not majority rule. moral obligations don’t require everyone’s consent. e.g. you still shouldn’t murder even if someone doesn’t consent to the no-murder norm
>>
Itf
>>
>>18282958
>Everyone’s interests count equally
But they don’t though. This is why you’re a retard.
>>
>>18282958
Take the egoism pill, OP. Count only your own interests (of course, other people's interests will still count for a lot, but only instrumentally).

>The only non-utilitarian concern that doesn’t seem on its face arbitrarily fetishistic is caring more about those you have a relationship with.
Egoism is a special case of this, I guess, since you have a unique relation to yourself. It seems pretty self-evident why one should care about one’s own interests. I just find it unwarranted that utilitarianism extends this concern to the interests of all sentient beings.

>imagine someone who wasn’t in a relationship with you at some earlier time but was at some later time. It seems odd that you should count their interests more at the later time than at the earlier time.
No it doesn't? Doesn't seem any more odd than the idea I would be willing to kiss someone on the mouth after I have a relationship with them but not at an earlier time. But anyway, this argument doesn't translate to handle egoism, since there can't be a time when you existed but weren't yourself.
>>
>>18282963
>But this still strikes me as objectionably fetishistic, caring about abstractions like rights more than people. What we should care about is people. If a person dies, it’s not any more tragic if they were killed than simply allowed to die. So if we’re making the choice between more deaths or fewer, I think we should simply choose fewer deaths.
Well, of course we should care about people. But this just assumes the consequentialist idea that if you care about something, the right way of caring about that thing is to maximize it at all costs, which of course no one who is not a consequentialist will accept. You obviously don't care about people in the right way if you are willing to torture and rape a child so long as doing so would maximize aggregate welfare.
>>
>>18284162
That’s evil
https://m.youtube.com/watch?v=5OdTFF8gHII
>>
>>18284449
>That’s evil
Not necessarily, as many philosophers argue that a virtuous life is also the happiest life. If that's true, that is just the kind of life an egoist should live. That video seems irrelevant as it's about psychological egoism, not ethical egoism.
>>
>>18282958
>Everyone’s interests count equally.

What if my interest is to ruin everyone else’s interests, and also not be a homosexual or eunuch or whatever, so my interests will also carry on to the next generation and indefinitely?




All trademarks and copyrights on this page are owned by their respective parties. Images uploaded are the responsibility of the Poster. Comments are owned by the Poster.