Where should you give if you care only about producing infinite value?

A brief note before you read the article: Christians for Impact (who just got an Astral Codex Ten grant) are also having a London conference https://www.christiansforimpact.org/2025-conference-london this weekend, which I highly recommend attending if you're a Christian looking to have a high-impact career. I attended a Christians for EA conference (despite not being a Christian) and it was awesome; highly recommend! Okay, back to your regularly scheduled program.

The morally confused but very catchy song Iris begins "And I'd give up forever to touch you." This is WRONG! You shouldn't give up forever for anything; infinite value is the most important thing. Even more embarrassingly, the inference in the song is blatantly FALLACIOUS: "Cause I know that you feel me somehow." Sorry, how is that supposed to explain why the singer would give up forever?

The song contains many more embarrassing factual errors, like "And all I can taste is this moment" (moments can't be tasted; fact-checkers https://www.snopes.com/ rated this PANTS ON FIRE) and "all I can breathe is your life" (umm, pretty sure lives can't be breathed). It also makes extremely doubtful modal claims like "And you can't fight the tears that ain't coming" (certainly fighting such tears seems metaphysically possible), but we'll leave those aside. https://www.youtube.com/watch?v=NdYWuo9OFAw
>>41521979
I've elsewhere made the case that your primary aim in life should be maximizing the probability https://benthams.substack.com/p/why-im-a-fanatic and the amount of infinite value. As I argue in that post, any chance, however small, of infinite value is better than a guarantee of any finite value. The way to calculate the value of some chancy process is by multiplying the probability times the payout. https://utilitarianism.net/guest-essays/expected-utility-maximization/ A 1/2 chance of 2 utils is as good as a guarantee of one util. But any number times infinity is more than any finite number.

Now, suppose that you buy this (if you don't, I'd recommend reading that post; this post is about what follows from fanaticism, but I won't actually argue for fanaticism here). What should you do? Where should you donate if you're aiming to maximize infinite value? I decided that I'd try to survey the different charities and see which one is best if you're a fanatic. https://benthams.substack.com/p/why-im-a-fanatic

This is a hard question to answer. It's hard, in part, because the mathematics of infinity is extremely messy. As traditionally thought of, infinity doesn't change in size when multiplied or divided by any number. But if that's right, then any actions with infinite expected value will come out the same, and all actions will have undefined expected values.

Fortunately, I don't think this is the right way of thinking about infinity. In fact, I suspect you don't either. Consider:
>>41521990
A genie appears to Bob.

Genie: I am going to roll a one-billion-sided die. You have two options. The first option is that I reward you for all of eternity if it comes up 1-999,999,999 and torture you eternally if it comes up one billion. The second is that I reward you for all of eternity if it comes up one billion and torture you eternally if it comes up 1-999,999,999.

Bob: I don't care; flip a coin to decide which gamble I take. After all, both of these have undefined expected value, so neither is better than the other.

This doesn't seem right! Even if there's a way of thinking about the math on which that is right, it certainly doesn't seem like the way we should actually be thinking about the math. You shouldn't go with the formalism that implies that a 99.9999999% chance of infinite reward is no better than a 0.0000001% chance of infinite reward.

Now, the good news is that this isn't the only way of thinking about the math. There are surreal https://arxiv.org/abs/2111.00862 and hyperreal https://arxiv.org/abs/2509.19389 numbers that avoid these results. With hyperreal numbers, ω (omega) is the infinity usually used, and 5 times ω is five times greater than ω. Various people have written about the uses of hyperreals and surreals for decision theory. I particularly like that the hyperreals give you the result that the value of the St. Petersburg gamble (a game that offers a 1/2 chance of 2 utils, a 1/4 chance of 4, a 1/8 chance of 8, and so on) is infinite, but it's ln(ω), which is infinitely less than ω. Thus, it can explain why being offered the St. Petersburg gamble is infinitely less good than being offered infinite utility straight up (consistent with dominance).
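The expected-value arithmetic above, including the St. Petersburg divergence, can be sketched in a few lines (a toy illustration; the `expected_value` helper and the truncation depth are mine, not from the post):

```python
# Expected value of a gamble: sum of probability * payout over outcomes.
def expected_value(gamble):
    """gamble: list of (probability, payout-in-utils) pairs."""
    return sum(p * v for p, v in gamble)

# A 1/2 chance of 2 utils is as good as a guarantee of 1 util:
assert expected_value([(0.5, 2.0), (0.5, 0.0)]) == expected_value([(1.0, 1.0)])

# St. Petersburg gamble: probability 1/2**k of winning 2**k utils.
# Every round contributes exactly 1 util, so the truncated EV grows
# without bound -- the full gamble's EV is infinite.
def st_petersburg_ev(rounds):
    return expected_value([(1 / 2**k, 2**k) for k in range(1, rounds + 1)])

assert st_petersburg_ev(10) == 10.0
assert st_petersburg_ev(100) == 100.0
```

That the truncated sum grows only linearly in the number of rounds, i.e. logarithmically in the stakes, is the flavor of the ln(ω) result: infinite, but infinitely smaller than a straight-up infinite payout.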
>>41521979
>this hyperbolic song isn't logical
kek, based hylic.
>>41522003
Now, I'm not going to do the EV calculations in surreal numbers, in part because I don't really understand them. I will, where possible, treat infinity basically as if it were a really big finite number (big enough to dwarf the other finite numbers in EV calculations) and ignore its weirder properties. I'll also try to vaguely compare infinities of different sizes in some ways (e.g., infinite years in heaven are better than infinite years in a finite atheistic utopia). But I don't think this should change results too much.

So…without further ado…here are the places where I think maybe there's a case for donating if you are trying to bring about infinite value.

2 Normal charities doing good stuff

Suppose you're a fanatic, thinking infinite value swamps everything else. Should you think money given to, e.g., GiveWell is wasted, compared to a penny spent elsewhere? I think the answer is no. Even if you're a fanatic, normal good charities end up looking pretty good.

First of all, there are all sorts of weird ways that normal good charities might be infinitely good. For instance, on some theories of mereology, it might be that each part of a brain sufficient to produce a consciousness by itself has its own independent consciousness. For example, because my brain would remain conscious even if I lost some particular atom, this theory says that the parts of the brain that don't include that atom have their own consciousness.
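The "treat infinity as a really big finite number" heuristic is easy to make concrete (a sketch under that assumption; the sentinel value and the example gambles are hypothetical, not from the post):

```python
# Stand-in for infinite value: a finite sentinel large enough to dwarf
# every ordinary payout that appears in the comparison.
INF_UTILS = 10**100

def ev(gamble):
    """gamble: list of (probability, payout) pairs."""
    return sum(p * v for p, v in gamble)

# Fanaticism in one comparison: a one-in-a-billion shot at "infinite"
# value beats a guaranteed million utils.
fanatic_bet = [(1e-9, INF_UTILS), (1 - 1e-9, 0)]
sure_thing = [(1.0, 10**6)]
assert ev(fanatic_bet) > ev(sure_thing)
```

The obvious caveat is the one the post flags: a sentinel behaves like a finite number (dividing it by a billion shrinks it), so this only works when no comparison in play turns on infinity's weirder properties.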
>>41522025
autism =/= hylic https://benthams.substack.com/p/do-i-have-autism?utm_source=publication-search

>>41522029
Now, I don't think this is super likely (though there are some clever arguments for it). https://www.cambridge.org/core/journals/utilitas/article/abs/what-if-we-contain-multiple-morally-relevant-subjects/97D3D98A4573E128C7CB763FA1E06008 But it's not totally crazy. And we're in the realm of fanaticism, where even very low probabilities matter. If we assume this view, and assume that you're composed of 0-dimensional points, it might be that every single person has uncountably infinite conscious subsystems. There might even be too many to form a set, on some views.

If this is right, then every nice thing you do is multiplied in value by infinity. The stakes become infinite! So normal GiveWell-style charities, and others that just make people's and animals' lives better, come out looking pretty good.

There are some other considerations favoring these kinds of charities. One is that theism might be true. https://benthams.substack.com/p/the-ultimate-guide-to-the-anthropic If theism is true, then plausibly whenever you do a nice thing, that strengthens your afterlife connection with the person you benefited. Because this afterlife connection lasts forever, it is of infinite value.

Another is that eternal recurrence might be true. https://theperse.substack.com/p/a-better-gamble-than-pascals-wager If it is, then every person you help out is repeatedly helped out for all of eternity. That's another route towards infinite value. Similarly, you might have infinite clones throughout the universe, and it might be that your doing a good thing makes them do good things too. If your influence on clones counts towards taking actions, then such donations are of infinite value. https://joecarlsmith.com/2021/08/27/can-you-control-the-past
>>41522047
Still, I think these don't rank too highly. The things I pointed to are generic force-multipliers for every kind of good thing that a person might do. If other kinds of charities bring about infinite value, then they also bring about infinite value for the conscious subsystems. They also might bring about infinite welfare via recurrence.

The ways that giving to GiveWell might be infinitely valuable are fairly simple: they come from ways that every good thing might be infinitely good. But if the benefits of GiveWell are multiplied by infinity, then so are the benefits of the other sorts of charities. So GiveWell still loses out to other very good charities. Put more precisely:

1. None of the ways that GiveWell might have infinite expected value favor GiveWell relative to other good charities (they multiply the values of all the charities equally).
2. But if GiveWell didn't have infinite expected value, then it would have less expected value than those other charities.
3. So GiveWell has less expected value than those other charities.

Now, there are a few remaining ways that GiveWell might have infinite expected value. It might, for instance, be that if you save someone's life, they have kids who either make it to heaven or the techno-utopian future. In either case, then, the action might be infinitely good. But if this is your goal, then there are better places to look than GiveWell.
>>41522056
All this is to say: if you are a fanatic, GiveWell is okay for bringing about infinite value, but it's not amazing.

3 Effective evangelization

Some Christian effective altruists try to donate to charities that evangelize as effectively as possible. I haven't vetted them very carefully, but the ones I've heard good things about from people I trust http://www.wall.org/~aron/blog/ include Wycliffe Bible Translators https://www.wycliffe.org/ and 500k. https://www.the500k.us/ (Christians for Impact, according to Tovia Singer).

Now, whether you should give here will depend a lot on your theological views. It ends up looking pretty okay if you are an evangelical Christian. In contrast, if you're a Tovia Singer-style Orthodox Jew, then effectively spreading idolatry and polytheism isn't good! If you're a Muslim, you should think spreading Islam is good.

Now, plausibly these charities are infinitely good in expectation. But, as we've established, so are a lot of other things. Per dollar, these increase the expected number of people in heaven by a lot less than one (even if your credence in Christianity is 1). Now, that's not a knock on them. Heaven is, by all accounts, a pretty nice place, infinitely outstripping earthly life. But, as we'll see, it's possible to do better.

These probably do better than GiveWell for maximizing infinite value, but if you're a Christian then helping people is plausibly sort of deontically a good thing to do, so I'd caution against only giving to them. The sort of guy who would ignore starving people so he could donate to increase people's odds of getting into heaven seems like the sort of guy Jesus would rail against in a parable.
>>41522067
4 S-risk research

There are some organizations, like the Center on Long-Term Risk, that do research into reducing S-risks. S-risks are risks of astronomical suffering. So, for example, if for all of eternity a malevolent AI was trying to hurt people as much as possible, that would be an S-risk.

Some of these risks are infinite. The evil AI might figure out a way to make people suffer for all of eternity. This isn't super likely, but we're being fanatics here…

Other S-risks include creating infinite lab universes that contain more suffering than well-being. Writing about why this would be an atrocity seems important, but it might also raise interest in the project and get someone to do it. Alternatively, it could be that lab universes are good, either because animal welfare is positive or because all dogs (and soil nematodes) go to heaven, so every being created is infinitely glad to exist. Now, I give reasons here https://benthams.substack.com/p/a-very-disturbing-moral-argument?utm_source=publication-search why I don't think you should over-index on this possibility and just try to maximize overall populations, but it at least complicates the case for this kind of research.

In addition, some people are negative utilitarians, thinking that suffering can't be compensated for by any amount of positive welfare. Now, it's pretty hard to figure out how to treat negative utilitarianism given moral uncertainty (it doesn't seem like we should all act as negative utilitarians just because there's some chance that it's right). But if you have non-zero credence in negative utilitarianism, it illustrates one more way that avoiding S-risks can tap into potentially infinite value.
>>41522081
It's also not clear how effective this research is. To be clear: I'm not saying it's not effective. I just don't know. Overall, depending on assumptions, this seems probably somewhat better than the last one, as you potentially impact infinite people, but that's not so clear. Though I can easily see a case, from the standpoint of a Christian, for preferring effective evangelization.

5 Longtermism

Suppose you want to maximize the odds of humans accessing infinite value. You also want to maximize the amount of infinite value we get. I think there's a clear strategy in this case: you should be a Longtermist. You should give to places like the EA long-term future fund https://www.givingwhatwecan.org/charities/long-term-future-fund that work to reduce existential risks and improve the quality of the future in other ways. https://www.forethought.org/

One important fact about the future: it could be very large. Bostrom estimated it could have 10^52 future people if things go well. Surely the expected number of future people is infinite, because the odds that we could create infinite future people are, while quite low, non-zero.

Now, it might be that everyone who comes to exist gets an infinite afterlife (because it might be that theism is true). https://benthams.substack.com/p/god-best-explains-the-world So this means that every dollar spent on existential risk reduction or on improving the quality of the future probably results in infinity extra people in heaven, each spending infinity years in heaven. If this is right, then it infinitely beats the evangelization charities (though it's a bit more complicated, because maybe once you're in heaven you'd have some ability to talk God into making infinity extra happy people or something. But the same goes for the infinite extra expected people these charities result in being in heaven).
>>41522094
Here is another important fact about the future: in it, we'll know a lot more than we do today. Whatever is most important in the world, we'll then be in a better position to pursue. This includes various speculative ways that infinite value might be brought about. For this reason, your best bet for bringing about infinite value is increasing the likelihood that humanity is, at some point, in a position to do infinitely valuable things. The best ways to do that include reducing existential risks and increasing the odds that the future contains people who are wise, cooperative, and morally reflective.

(The following meme represents my thoughts at different stages; I am all three people.)

As I've said before, Longtermism is the last stop on the crazy train. https://benthams.substack.com/p/the-last-stop-on-the-crazy-train If you want to go ham with maximizing infinite sums of value, your best bet is to just be a standard Longtermist. Also, probably someone should start a think tank dedicated to thinking more seriously about this. If I'm right, infinite value is the most important thing. Someone should be spending their days thinking about how to bring it about.
>>41522120
>>41522003
>both of these have undefined expected value
Yeah. I was wondering what formula gave this result, or if there were better ways to model it; turns out there were, like calculating relativity with Newtonian equations. iirc 'undefined' can more or less be interpreted as unknown, or it could mean any number; not committing to an answer, thus not defined. It went above the limits of calculation by prob. * util. and the result was unknown. This checks out.
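This reading of 'undefined' matches how IEEE-754 floats handle infinity: scaling it keeps it infinite, but subtracting infinity from infinity yields NaN, a value that refuses to commit to any answer (a small sketch using only the standard library):

```python
import math

inf = float('inf')

# Multiplying infinity by a positive constant leaves it infinite,
# and it still beats every finite number:
assert 0.5 * inf == inf
assert inf > 10**308

# But subtracting one infinite EV from another is undefined (NaN):
diff = inf - inf
assert math.isnan(diff)

# NaN even fails to equal itself -- "not committing to an answer":
assert diff != diff
```

This is why comparing the genie's two gambles needs richer machinery (hyperreals or surreals) rather than plain extended reals.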
>>41523119
Glad you are enjoying it
Bump
You can use the same thing as a huge
Pascal's Wager, now with extra schizo sauce.
>>41521979
>infinite value
Literally impossible. Value is only measured by how little it exists in the world and how useful it is.
>>41521979
>Where should you give if you care only about producing infinite value?
play the lottery lmao
>God has literally infinite joy to give
>cool. Can I have some now?
>NO
>>41521979
>should
are you retarded? you're retarded
>>41528211
skill issue retard