/his/ - History & Humanities

Or a deontologist or according to any other specific normative theory

I think utilitarianism is correct. But I think you shouldn’t act as a utilitarian. I don’t mean this in the trivial “the best way to bring about utility isn’t to consciously aim at it” sense. No, I really think that even if you had a guarantee that some action maximized utility, sometimes you shouldn’t take it.

Instead, you should act as a sort of pluralist, taking seriously different moral theories. This is because you ought to have significant moral uncertainty and take seriously the possibility that you’re wrong. You shouldn’t risk significant wrongdoing in the possible scenario where you’re wrong about which moral theory is right.

Suppose I could violate lots of people’s rights to slightly increase aggregate utility. I wouldn’t do it. My best guess is that probably the action is okay. But if it’s not okay, I’m doing something really really wrong. You shouldn’t risk doing stuff that might be extremely wrong for the sake of small benefits. You ought to take moral uncertainty seriously.

In practice, this generally means trying to do things that are robustly good according to many moral theories. You should especially focus on what those theories hold is most important, rather than just moderately important. Deontology cares an especially large amount about not violating rights, so you should make an effort not to violate rights. Utilitarianism cares an especially large amount about doing good effectively, so you should do that (it helps that every other plausible ethical view recommends that too). https://benthams.substack.com/p/effective-altruism-faq?utm_source=publication-search

You might reject this pitch for pluralism if you don’t have any significant moral uncertainty. I can think of two reasons why this might be:

1. You might just be very confident that you’re right.
>>
>>18270508
2. You might be a moral anti-realist and thus doubtful that you can be mistaken about morality, just as you can’t really be very mistaken about your taste preferences.

I think you shouldn’t believe 1. People are famously overconfident, even experts. https://en.wikipedia.org/wiki/Overconfidence_effect#Overconfidence_among_experts When very smart people disagree with you, you should rarely be extremely confident that you’re right and they’re not. This means that, in practice, your credence in controversial judgments should basically never be north of 90% (I occasionally go a bit above 90%, but try to avoid going much more than 90%). But a 10% credence in a moral theory is enough to take it seriously.

I don’t think you should think 2 either. First of all, you shouldn’t be very confident in moral anti-realism for the above reason. So if moral realism might be right, then you should take seriously the possibility that you’re morally in error. Second, even if anti-realism is right, it might be that our values differ substantially from what they’d be upon reflection. So error is still possible.

Now, there are a few views of how to deal with moral uncertainty on which you get to mostly ignore the views that you don’t think are true. The ones that come to mind are:

1. My Favorite Theory: you simply do what is prescribed by whichever theory you have highest credence in. So if you think utilitarianism has a 90% chance of being correct, then you simply act as a utilitarian.

2. My Favorite Option: you do the option that you have the highest credence in being right. So, for example, if you have 1/3 credence in utilitarianism, 1/3 in deontology, and 1/3 in virtue ethics, if utilitarianism and virtue ethics recommend the action, you perform it. This is so even if utilitarianism and virtue ethics say that the action is only a tiny bit good while deontology says it is monumentally terrible.
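To make the two rules concrete, here is a rough sketch in code. The function names, the numerical “verdict” scale, and the toy numbers are all my own illustration, not anything from the linked sources; they just encode the examples above.

```python
# A toy formalization of the two rules, assuming each theory can be summarised
# by a credence and a numerical verdict on each action (positive = good,
# negative = wrong, magnitude = how much is at stake). All names and numbers
# are illustrative.

def my_favorite_theory(credences, verdicts, actions):
    """Act on whichever single theory you find most probable; ignore the rest."""
    top = max(credences, key=credences.get)
    return max(actions, key=lambda a: verdicts[top][a])

def my_favorite_option(credences, verdicts, actions):
    """Pick the action you have the highest credence in being the right one,
    i.e. sum the credences of the theories under which it comes out best."""
    def prob_right(action):
        return sum(p for theory, p in credences.items()
                   if verdicts[theory][action] == max(verdicts[theory][b] for b in actions))
    return max(actions, key=prob_right)

# The 1/3 utilitarianism, 1/3 deontology, 1/3 virtue ethics case from above:
credences = {"util": 1/3, "deont": 1/3, "virtue": 1/3}
verdicts = {
    "util":   {"do_it": +1,    "refrain": 0},   # a tiny bit good
    "virtue": {"do_it": +1,    "refrain": 0},   # a tiny bit good
    "deont":  {"do_it": -1000, "refrain": 0},   # monumentally terrible
}
actions = ["do_it", "refrain"]

print(my_favorite_option(credences, verdicts, actions))                    # -> 'do_it'
print(my_favorite_theory({"util": 0.9, "deont": 0.1}, verdicts, actions))  # -> 'do_it'
```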
>>
>>18270516
But I don’t think either of these views are plausible. Imagine that you have 51% credence in utilitarianism and 49% credence in deontology. Now imagine there’s some act that you can take which, if you take it, will very slightly increase utility. However, it will produce such a horrifying number of rights violations that it would come out, if deontology is true, as by far the worst thing anyone ever did. Seems obvious that you shouldn’t do it. But both of these views disagree.

These views are also strangely hypersensitive. For simplicity, imagine that the only two moral theories you have any credence in are utilitarianism and deontology. This implies that as deontology goes from 0% of your credence to 49.9999999% of your credence, it has no implications at all for decision-making. But the moment it ticks past 50%, it suddenly takes over, and utilitarianism gets ignored totally.
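Running the 51/49 case through the numbers makes the problem vivid. This is a minimal sketch that assumes the two theories’ verdicts can be put on a rough common scale (a simplifying assumption the argument above doesn’t spell out); “expected choiceworthiness” is a standard alternative rule from the moral-uncertainty literature, not a formula the post commits to.

```python
# The 51% / 49% case in numbers. The common scale across theories is a big
# simplifying assumption; the exact magnitudes are placeholders.

credences = {"util": 0.51, "deont": 0.49}
verdicts = {
    "util":  {"act": 1,          "refrain": 0},  # slight increase in utility
    "deont": {"act": -1_000_000, "refrain": 0},  # by far the worst thing anyone ever did
}

# My Favorite Theory: utilitarianism is (barely) your favorite, so you act.
favorite = max(credences, key=credences.get)
print(favorite, max(verdicts[favorite], key=verdicts[favorite].get))  # util act

# Weighing stakes as well as credences ("expected choiceworthiness"):
ev_act = sum(credences[t] * verdicts[t]["act"] for t in credences)
print(ev_act)  # 0.51*1 + 0.49*(-1_000_000) is about -490_000, so don't act
```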

Additionally, these views imply strange asymmetries in how one treats different kinds of uncertainty. Suppose that you have 40% credence in a moral theory on which animal welfare doesn’t matter at all and 60% credence in a moral theory on which it does. Suppose additionally that you have 0% credence that animals have no welfare because they’re not conscious.

Now imagine that your credence in animal welfare not mattering at all goes from 40% down to zero, while your credence in animals not being conscious goes up from 0% to 1%. On this view, you’d now have to start giving that doubt some weight (for your uncertainty is now factual rather than moral, and only factual uncertainty gets weighed). Bizarrely, then, even as you conclude it’s more likely animals matter, you start valuing them less. The views are also vulnerable to money pumps https://johanegustafsson.net/papers/second-thoughts-about-my-favourite-theory.pdf and have lots of other problems. https://static1.squarespace.com/static/5506078de4b02d88372eee4e/t/5f5a3ddd466873260486fb06/1599749604332/Moral+Uncertainty.pdf
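Here is how that asymmetry plays out in a toy model of My Favorite Theory, where factual uncertainty is handled by ordinary expected value but moral uncertainty is winner-takes-all. The function and the numbers are just my illustration of the paragraph above.

```python
# Toy model: moral uncertainty is winner-takes-all, factual uncertainty is
# discounted by probability. Numbers mirror the example in the paragraph above.

def weight_given_to_animals(p_welfare_matters, p_animals_conscious):
    # Moral uncertainty: only the favorite theory counts.
    if p_welfare_matters <= 0.5:
        return 0.0
    # Factual uncertainty: discount by the chance animals are conscious at all.
    return p_animals_conscious

print(weight_given_to_animals(0.60, 1.00))  # 1.00
print(weight_given_to_animals(1.00, 0.99))  # 0.99 -- lower, even though you are
# now *more* confident that animal welfare matters morally
```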
>>
>>18270508
that's still utilitarianism, you are just taking into account the risk you are wrong or your methods are inefficient
>>
>>18270521
Figuring out how to make decisions under moral uncertainty is extremely difficult. I suspect there’s no precise formula for doing it, just as there’s no precise formula for deciding upon priors. https://www.lesswrong.com/w/prior-probability But still, that doesn’t mean that anything goes with respect to acting under moral uncertainty. In general, you should try to avoid doing things that are very bad on various moral theories and work especially hard to do stuff that’s very good on various moral theories.

One example of how this influences me: it’s part of why I don’t eat meat from happy animals. Suppose that deontology is true and animals have rights. This doesn’t seem ludicrously unlikely. If deontology is true, the marginal cases argument https://en.wikipedia.org/wiki/Argument_from_marginal_cases (the fact that criteria which rule out animal rights also imply some humans don’t have rights) makes it pretty difficult to confidently rule out animal rights.

If animals have rights, then eating happy animals is a lot like killing and eating happy cognitively disabled humans. Eating them regularly would be like routinely having a disabled child, killing him, and eating him. That would be very bad! Even a small percent chance that eating meat is as bad as that makes me pretty hesitant to eat meat from happy animals.

Even if you suspect that Longtermism is wrong, given how enormous the stakes could be if it’s right, https://benthams.substack.com/p/longtermism-is-surprisingly-obvious under uncertainty, you should take Longtermist considerations pretty seriously. My view is that the impacts of our actions on the long-run future are potentially many billions of times more significant than their impact on the short-run. Even a small chance that’s right makes safeguarding the far future important.
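The arithmetic behind “even a small chance that’s right” is just an expected-value product. The 10% credence below is a placeholder of mine; the “billions of times” multiplier is the post’s own figure.

```python
# Expected relative importance of the long-run future under uncertainty.
# The 10% credence is a placeholder; "billions" is the post's own figure.

p_longtermism_right = 0.10
relative_stakes_if_right = 1_000_000_000   # long-run impact vs short-run impact

print(p_longtermism_right * relative_stakes_if_right)  # 100_000_000 -- still huge
```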
>>
>>18270527

This is also one reason I’d be in favor of taking actions to reduce the number of abortions (though I’d be against banning abortion). The pro-life view holds that we routinely kill millions of people. Even a small chance they’re right is enough to make reducing abortions a good thing. I’d support research on figuring out ways to reduce embryo death, because most fertilized embryos die before reaching maturity. Even if there’s only a 1% chance that this involves the death of a person, the problem of embryo death is about as bad as a 1% risk of a virus that would kill most of the population. That would be pretty serious!
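To see why the comparison works, here is the rough expected-value arithmetic behind it. The population figure and the share of fertilized embryos that die are stand-in round numbers of mine; only the 1% credence comes from the paragraph above.

```python
# Rough expected-value comparison behind the embryo-death analogy.
# Population and embryo-loss figures are stand-in round numbers.

population = 8_000_000_000
embryo_deaths = 0.5 * population      # "most fertilized embryos die" -- stand-in scale
p_embryo_is_person = 0.01             # the 1% credence from the paragraph above

expected_deaths_from_embryo_loss = p_embryo_is_person * embryo_deaths

# Compare: a virus with a 1% chance of killing most of the population.
expected_deaths_from_virus_risk = 0.01 * (0.5 * population)

print(expected_deaths_from_embryo_loss, expected_deaths_from_virus_risk)
# Both come out around 40 million under these toy numbers -- the same order of
# magnitude, which is the point of the analogy.
```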

So in practice, even if you’re pretty convinced your favorite moral theory is right, you shouldn’t act according to it. You should act according to a pluralistic mix of the leading moral theories. That means even taking seriously the insights of views that don’t strike you as very plausible.
>>
>>18270524
>If you're a deontologist and you try to minimize rights violations you're actually a utilitarian
>>
>>18270527
>Even if you suspect that Longtermism is wrong, given how enormous the stakes could be if it’s right, https://benthams.substack.com/p/longtermism-is-surprisingly-obvious under uncertainty, you should take Longtermist considerations pretty seriously. My view is that the impacts of our actions on the long-run future are potentially many billions of times more significant than their impact on the short-run. Even a small chance that’s right makes safeguarding the far future important.
Now you just sound like a Pascal’s mugger.
>>
>>18270508
Isn't this guy some goober? I'm subbed to his stack, but the few times I've interacted with him he seems to support views that make me scratch my head. I also have a stack, but people say I write in a very obtuse manner.
>>
>>18270508
>I really think that even if you had a guarantee that some action maximized utility, sometimes you shouldn’t take it.
Then you must have some misunderstanding about what utility is imho.
>You shouldn’t risk significant wrongdoing
If you have a guarantee, it's not a risk.

I completely sympathize with your epistemic humility and I think it's an incredibly wise thing to do - in the end we know only that we know nothing - but it does seem to me like you're conflating utility with something it's not supposed to mean.
>>
I learned that the hard way; however, you can still shove your stories up your ass. They are mostly full of shit anyway.


