imagine being a two boxer
the thing i don't get about this thought experiment is why i would care about $1000. It literally does not matter. That's a few restaurant visits with friends versus an actual opportunity.
>>16926713
https://www.youtube.com/watch?v=Ol18JoeXlVI
If it were $5 in one box and a million in the other, I wouldn't take the chance. That computer has the edge; it might as well have a quantum implant reading your mind from the future.
>>16926713
The problem statement is internally inconsistent. It presents you with a choice while maintaining that your decision is all but a foregone conclusion. Either way, if your reasoning is grounded in reality, you will take both boxes: their contents are already set, and nothing you do now can change them. People who take only one box are NPCs who operate on learned abstractions.
>>16926778
>the problem is ill-posed and self-contradictory, but my choice is still objectively correct
unironically based
tl;dr: so basically the computer looked at your finances and living conditions. If you're a broke-ass fuck, it already knew you'd just take the $1k; if you weren't broke, it knew $1k was meaningless to you and gives you a million.
not much of a paradox
>>16926778
>"all of those people who walked away with $1,000,000 are irrational NPCs"
t. guy who walked away with $1000
>>16926777
>the people they interviewed on the street didn't seem to fully comprehend the problem, or English in general
>the "viewer poll" is biased because they've been trained to choose whatever option seems less intuitive by watching countless probability-gotcha clickbait videos
What a worthless video.
>>16926787
If you define a scenario where you must be an NPC to win, you get a scenario where only NPCs win. The simplest way to demonstrate that you're an NPC is this: consider a modified Newcomb setup where the Oracle explicitly tells me he predicted that I will take both boxes. Explain:
1. What I should do
2. Why the extra information provided changes anything (if it does)
>>16926713
50%. Either there's a million dollars or there isn't.
>>16926784
>>the problem is ill-posed and self-contradictory, but my choice is still objectively correct
Duh. If I have to make a decision, I'll do it based on the part of the setup that's actually grounded in reality: two boxes with their contents already set. I take both boxes because their contents are already set.
>dude what if there was a god-level AI that scanned your brain as you entered the room and then simulated every thought you could ever have
then there's no point in thinking about it, and you reduce the problem to an inherent 50-50 by flipping a coin, which the computer also would have predicted, so it would've been forced to flip its own coin to decide whether to put in a million or not
>>16926793
>If you define a scenario where you must be an NPC to win, you get a scenario where only NPCs win
deliberately picking the wrong answer doesn't make you a sigma male in a sea of NPCs; it just makes you a retard
>consider a modified Newcomb setup where the Oracle explicitly tells me he predicted that I will take both boxes
I'm assuming you know that the oracle is telling the truth
>1. What I should do
take both boxes
>2. Why the extra information provided changes anything
because the oracle just told you that the mystery box is empty
This scenario retroactively defeats the entire premise, though. There's no decision to be made on your part, and therefore no prediction to be made on the oracle's part, if it's just going to tell you what the correct choice is.
I mean, this doesn't seem very complex if you accept the ridiculous premise that the computer knows everything you're going to think or do. Tricking the computer and snatching an easy $1 million plus a $1k tip is impossible under that premise, short of doing what the other anon suggested and deferring the decision to an outside variable like a coin flip. Under the premise of the all-knowing computer, you have to choose one box and follow through with it; there's no last-second betrayal or anything you can pull, since, under the rules of the scenario, the computer would already know everything you will do.
Basically, the scenario reduces the player to a simple function. You can argue about how that does or doesn't reflect reality, but it's the scenario you've been given in this hypothetical.
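The "player reduced to a simple function" framing can be made literal: under the perfect-predictor premise, predicting you just means running your decision function ahead of time. A minimal sketch, assuming a deterministic strategy (the function names, string return values, and payoff plumbing are all mine, not part of any canonical formulation):

```python
def fill_boxes(strategy):
    # Perfect prediction under this premise is just calling the player's
    # decision function early and filling the boxes accordingly.
    predicted = strategy()
    opaque = 1_000_000 if predicted == "one" else 0
    return opaque, 1_000  # (mystery box, transparent box)

def play(strategy):
    opaque, clear = fill_boxes(strategy)
    # The player then runs the very same function "for real".
    return opaque if strategy() == "one" else opaque + clear

print(play(lambda: "one"))   # 1000000
print(play(lambda: "both"))  # 1000
```

Note that the coin-flip dodge discussed above breaks this sketch precisely because a randomized strategy is no longer a fixed function the predictor can replay.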
plot twist:>you are jewish>you can immediately smell $1 million when you enter the room>your choice is based on whether you smelled the $1 million or notnow who's in charge? what you choose is entirely based on what the robot put in the box, but what the robot put in the box is based on what it predicted you'd take, but what you take is based on what the robot put in the box
>>16926829
This is perhaps the single most intelligent post I've ever read on /sci/. Have a tendie.
>>16926829
I just e-mailed this corollary hypothetical to Veritasium. Maybe he'll do a video on it.
>>16926713
So before you knew the setup, you had already picked one box instead of both?
That is the only way it makes logical sense to pick one box.
>>16926829>the Newkike Paradox
>>16926817
>because the oracle just told you that the mystery box is empty
So what?
>There's no decision to be made on your part, and therefore no prediction to be made on the oracle's part if it's just going to tell you what the correct choice is
This is incoherent word salad. Try again.
>>16926841
>So before you knew the setup you had already picked one box instead of both?
it's not that you already chose, it's just another 3deep5u does-free-will-really-exist sloppathetical stating that the choice you will make is already predetermined by the neural network which is your brain and can be predicted by a sufficiently powerful algorithm with unlimited insight into the workings of your mind
>>16926829
a lot of these sloppathetical "paradoxes" aren't even paradoxes, but can be made into one this easily by the insertion of a circular dependency
>>16926845
That is way stupider than what I was thinking.
I thought it was just "do you pick both before knowing the rules", which would be the first reaction of most people.
I only watched the first 2 minutes because the premise was boring.
>the AI tells you it has a 100% success rate
>100% of people who chose 1 box walked away with a million
>100% of people who chose 2 boxes walked away with $1000
>GEE WHAT SHOULD I GO WITH
always take both boxes because free empty box to put shit into
>>16926846
That's what a paradox is. A paradox is an *apparent* contradiction that goes away under more careful analysis, not a real contradiction.
>>16926829
I really like this paradox
>>16926857
>>the ai tells you it has a 100% success rate
Either it's lying or the problem contradicts itself. Regardless, you need to be mentally ill to think your decision somehow retroactively changes what's inside the boxes.
>>16926862
>reddit markup
>70 IQ LLM post
Name a more iconic nu-chan duo.
>>16926866
>Either it's lying or the problem contradicts itself.
No? Now you're just rejecting the hypothetical; that's not an answer.
>Regardless, you need to be mentally ill to think your decision somehow retroactively changes what's inside the boxes.
Low IQ take. To the AI, you are a solved problem. Your choice is as predetermined as f(x) = x. You aren't retroactively changing anything; it simply already calculated your choice in advance. A linear equation can never be anything other than a straight line.
>>16926873
>No?
Yes. If it actually predicts your actions with perfect or near-perfect accuracy, there is no decision to be made and no question of strategy. It's simply postulating a world where mouth-breathing imbeciles like you get rewarded for lacking object permanence.
>>16926868Nigger this markup predates reddit by some thirty years. I understand that's too far away for zoomers to remember, but it did in fact happen.
>>16926873
Concession in the guise of word salad. You need to go back.
>>>/b/
>>16926873
The fact that you are predictable doesn't mean you are not making a decision. Toddlers are often very predictable in their decision-making, and easily manipulable into the choice you want; that doesn't mean they are not making a decision, just that they are predictable (at least to the parents).
>>16926879
>The fact that you are predictable doesn't mean you are not making a decision
Psychotic NPC projection. Either way, my post stands completely unchallenged. The problem postulates that your strategy is predetermined and fixed, then asks you to choose a strategy. This is a simple contradiction designed to confuse word-predicting biobots like you.
>>16926882
>Either way, my post stands completely unchallenged.
Maintaining you are unchallenged after an obvious challenge is a classic sign of both mania and psychosis, anon. Which one is it for you?
>>16926874
>reddit-trained LLM explodes with simulated rage
>>16926713
there is only one practical approach:
schizobabble about future prediction is ignored; there is no reason to believe it. $1k is a pittance compared to $1M and simply isn't worth taking
open box B; 99.9% chance it's empty anyway, but if not, take the $1M and leave.
Micro-optimizing is irrelevant
>>16926894
Why would that be a contradiction? Your strategy IS a function of your brain state. And someone CAN know that brain state in enough detail to predict your strategy, which is something parents can do for their toddlers in many cases. But the fact that you know my brain state in enough detail to make an accurate prediction of my actions does not mean my brain is no longer making a choice. What you are calling a contradiction is just reality.
>>16926894>Why would that be a contradiction?For the blatantly obvious reason that your magical AI god premise postulates that the outcome is fixed, known and I could not have done otherwise. Given this, there is nothing to strategize about. The entire setup is equivalent to the statement that NPCs will be rewarded while people who understand object permanence won't be.
>>16926907
>For the blatantly obvious reason that your magical AI god premise postulates that the outcome is fixed, known and I could not have done otherwise.
No, it only postulates that it knows you won't do otherwise, not that you couldn't have done otherwise.
I know that if I offer a toddler the opportunity to get an ice cream, he will take it. It's not that he can't choose differently, it's just that he predictably won't. Surely you don't think this situation is some deep philosophical contradiction? The relation between the Oracle and you is the same as the relation between me and the toddler. The oracle knows what you will do if offered the chance of the equivalent of an ice cream. Why is this a problem in the case of the oracle's boxes, if it's not a problem for the toddler's ice cream?
>>16926907>NoLol. Ok. Since it's clear that you've degenerated into overt and open psychosis, I'm just gonna move on. Notice how your automatonism forces you to (You) me again even though I'm now closing this brown-IQ thread.
>>16926793
>consider a modified Newcomb setup where the Oracle explicitly tells me he predicted that I will take both boxes. Explain:
>1. What I should do
>2. Why the extra information provided changes anything (if it does)
kek. /thread. it's essentially the same. there's no real extra information there. but it still causes one-box retards to change their tune
What if you take the mystery box only and there's nothing in it? Why isn't that a possibility?
>>16926844
>retard who can't understand the box problem also can't read a simple sentence
>>16926912
>explicitly telling you the contents of the boxes isn't extra information
You can enjoy not being barely sentient, unthinking NPC meat puppets, two-boxers. I'm going to enjoy my 1 million British pounds, given to me by my lord and saviour Predictorbot.
Honestly, they could just delete the rest of the scenario and skip straight to
>there's a 100% chance of getting $1 million if you pick this box
>there's a 100% chance of getting $1k if you pick both boxes
The rest is just flavor text.
A lot of you are obsessed with the assumption (or find it objectionable) that the predictor is 100% accurate. Setting aside that you could never be literally 100% confident anyway (no matter the predictor's track record so far or evidence of his "infallibility"), introducing a significant error rate doesn't fundamentally change the logic behind one-boxing. A guaranteed thousand dollars is still not very much compared to a 90% chance of getting a million.
>>16926778
>since their contents are already set and your doing or not doing so can't change them
It can't change them, but it can logically determine the contents.
>>16927049
>A guaranteed thousand dollars is still not very much compared to a 90% chance of getting a million.
Or a guaranteed thousand dollars + 10% chance of getting a million vs a 90% chance of getting a million, to be fair. But the latter is still preferable.
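The 90% figures in these two posts generalize: with predictor accuracy p, both expected values are one-liners. A sketch (the accuracy parameter p and the function names are mine; the $1k/$1M payoffs are the thread's):

```python
M, K = 1_000_000, 1_000  # mystery-box prize, visible-box consolation

def ev_one_box(p):
    # you get the million only if the predictor (accuracy p) foresaw one-boxing
    return p * M

def ev_two_box(p):
    # guaranteed $1k, plus the million only on a predictor miss
    return K + (1 - p) * M

print(ev_one_box(0.9))  # 900000.0
print(ev_two_box(0.9))  # ~101000.0
```

Setting the two equal and solving gives the crossover p = (M + K) / (2M) = 0.5005, i.e. a predictor only barely better than a coin flip already tips the math toward one-boxing.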
>>16927022
the problem statement explicitly tells you the contents of the boxes to begin with, retard
>>16927053
it doesn't determine anything; the outcome is predetermined and favors superstitious brainlets who the oracle knows in advance will take only one box, even in a situation where taking two can't possibly leave them worse off
the predictive model is literally just: is your IQ high enough that you'll pick the mystery box when it tells you that 100% of people who picked the mystery box received $1 million and 0% of people who picked both boxes won $1 million
>>16927074
no it doesn't, retard. There has to be at least some minuscule uncertainty in the predictor in order for the question to function
>>16927089>no it doesn'tit obviously does but there's no point reasoning with brown "people" like you
>>16927079
>the predictive model
there is no "predictive model" involved, just a rule that states cretins will be rewarded
>>16927097
brainlet cope
The actually interesting part of this thought experiment is that it exposes this bizarre subset of people who admit that picking one box is the correct answer and will get you the million dollars, but through some strange psychological process still convince themselves that you would somehow be stupid for making that choice. See this absolute midwit for example: >>16927098
If the scenario is set up so that an """irrational""" choice is correct, and you know that, then it isn't an irrational choice.
>>16927106
>through some strange psychological process
that "strange psychological process" is called reasoning: by definition, the only people who receive a million dollars are the ones too stupid to realize the contents of the box are already set and can't change in response to an attempt to exploit this. it's a made-up situation that favors retards. the "right answer" is to be born mentally deficient; otherwise the oracle will know that you will know there's nothing to lose by trying to cheat him, and make sure the mystery box is empty for you
>>16927114
>there's nothing to lose by trying to cheat him
>will make sure the mystery box is empty for you
sounds like you lost something, dumbass
I know that a magical oracle or supercomputer that can read your intentions into the future like that may be unrealistic, but if you can't even entertain the hypothetical then you have just reached "but I DID eat breakfast this morning!" level
>>16927121
>sounds like you lost something
i didn't. i wasn't born an imbecile, so i understand the logical implications of the contents of the box having been set before i made my decision. the magical oracle knows i know, so his only option to make a correct prediction is to make sure the box was empty before i even had a chance to entertain the possibilities
>>16926713
the fact that OP on 4chan is trying to convince me that 1box is the correct choice makes me believe that 2box is obviously the correct choice
>>16927123
Knowing that doesn't mean you still can't take one box. I know that too, but I still have two choices when I enter the room:
Scenario A: I know the contents are fixed and I "have nothing to lose by cheating him", but I still choose to take one box anyway. The oracle knew I would do that, so there's a million in the box.
Scenario B: I know the contents are fixed and I "have nothing to lose by cheating him", so I take both boxes. The oracle knew I would do that, so there's nothing in the mystery box.
I know that these are my two possibilities, so how is it stupid to choose scenario A?
>>16927129
>Knowing that doesn't mean you still can't take one box.
wrong. you lose the million dollars just by virtue of understanding the logical fact that once the contents of the boxes are set, there's nothing to lose from taking both and nothing to gain from taking only one. if the oracle fails to preempt you acting on this objectively correct reading of the situation, you can cheat him
Two-boxers are focusing too much on the "free will" or "changing the past" nonsense, which is irrelevant, and missing the much more important fundamental personal failure that will cause them to only get the $1000.
Your ability to control your own thinking is what is being tested. This is a test of SELF-CONTROL. Obviously, I understand the logic of the "the choice is set before I enter, so therefore it is most optimal after entering to take both boxes" argument. However, I am also smart enough to understand that if I allow myself to accept that logic, I will fail. Therefore, I will decide to only take one box, and even though I know that I could change my choice once I enter the room, I will decide that I will only take one box. And because I have good self-control, I will actually do as I decided before.
If you are the kind of person who would enter the room and shiftily change your mind because "well, now that I'm in here my past decision doesn't matter, hehe, I'll just take both for the win!", then you are a person with weak self-control. The computer will predict your lack of self-control and dedication, and you will fail. In a sense, your choice in the room DOES affect the past, because your "eventual" choice was already predicted in the past.
>>16927135
>Your ability to control your own thinking is what is being tested. This is a test of SELF CONTROL.
profoundly retarded take. the implicit premise is that the player actually acts on a logical strategy for maximizing the reward. if you include this kind of pseudophilosophical drivel, there is no "paradox" to speak of
>>16927133
>you lose the million dollars just by virtue of understanding the logical fact that once the contents of the boxes are set, there's nothing to lose from taking both and nothing to gain from taking only one.
This is WRONG.
A "one-boxer" can fully understand the logic of a "two-boxer" and still make the active decision to ignore those rational arguments and take one box anyway, thus winning the one million dollars.
Let me put it another way. Imagine a new test, where you enter a room with a math equation on a board: "what is 2+2?" If you answer correctly, you get 1 dollar; if you put in a wrong answer, you get a trillion dollars. I know that 2 + 2 = 4, but I would go in there and put 2 + 2 = 5. I know the correct answer but would put the wrong answer on purpose to get a better reward. Simple. Same with this test. I know that taking both is theoretically optimal, but I would go in and give the """wrong""" answer to get more money.
>>16927139
>A "one boxer" can fully understand the logic of a "two boxer"
you obviously can't, and this is very simple to prove: when asked to justify your "decision", you drivel something about how you shouldn't because the oracle will have predicted it, which is logically irrelevant because the contents of the boxes are already set
>>16927133
that's just not true, man. Nothing is physically preventing you from taking one box, regardless of what you understand.
>>16927140
>when asked to justify your "decision"
... not to take the second box*
>>16927141
>Nothing is physically preventing you from taking one box
see >>16927138. you "people" are so retarded you literally can't even grasp what this "paradox" is about
>>16927138
>the implicit premise is that the player actually acts on a logical strategy for maximizing the reward
My strategy, ensuring I will only take one box and coming up with a psychological rationale to guarantee I follow through, is a logical strategy that will maximize the reward. My method has a 100% chance of getting 1 million dollars. Your strategy is logical but gets the minimum possible reward: you will get merely $1000. Your strategy is worse because you have arbitrarily decided that the method of controlling your own psychology wasn't allowed, even though the state of your psychology is literally the entire foundation of the problem in the first place. There's a determining factor involved which you have control over, and you've just decided to ignore it.
>>16927144
>My strategy ... is a logical strategy
no, it isn't. you're actually just retarded. i guess pretending to be a brain-damaged one-boxer is a meta-level strategy, but this only applies if you're not an actual one-boxer and aren't taking the problem at face value
>>16927143
>>16927138
>the implicit premise is that the player actually acts on a logical strategy for maximizing the reward
yet you acknowledge that taking one box will get you $1,000,000 while taking two will get you $1000. Taking one box maximises the reward; therefore it is logical
>>16927143
If we are to believe that the oracle will predict our decision-making process, then the optimal logical strategy to maximize reward is to use any arbitrary line of logic that results in you picking one box. It literally doesn't even matter what line of logic you use; as long as your final choice is to take one box, it's a logically optimal strategy. The strategy of "one time I saw a cool dog, and it would be funny if it wore a hat. Haha. Therefore I will take one box" is a more logically optimal line of thought than any reason you can come up with for taking two boxes. Because the AI will predict that you are either stupid enough to just take one box without thinking, or smart enough to realize that you must come up with an excuse to take one box.
>>16927147
>Taking one box maximises the reward
being an imbecile maximizes the reward
>>16927148
>It literally doesn't even matter what line of logic you use
like i said, you "people" are so retarded you literally can't even grasp what this "paradox" is about
>>16927150
Knowing what you know, can you answer two questions? If you were put into this scenario right now:
1. Would you be physically capable of taking one box?
2. If yes, what would be in that box if you did?
>>16927154
>1. Would you be physically capable of taking one box?
like i said, you "people" are so retarded you literally can't even grasp what this "paradox" is about
>>16927156
I knew you would dodge
Just stop embarrassing yourself by continuing to post
>>16927156
No, you've just arbitrarily decided that considering some aspects of the problem, such as an individual's ability to manipulate themselves psychologically, is "not allowed", and now you're mad that other people are considering all relevant factors when making their choices. Personally, I think an AI designed to predict my thought process and personality would make decisions based on my thought process and personality.
I take the one box. Either the God-like intelligence is infallible as advertised and correctly predicted I'd take one box, or it's not so infallible; I have no real way of judging how incompetent it is, but I also have no reason to believe it's bad enough to ruin my odds, since I come out ahead of a measly extra $1000 on average as long as it's even slightly better at predicting my choice than blind luck.
The one thing I can be sure of is that I chose to pick one box right now and not the alternative, and with no other information to go on, I should rather trust that the assumed superintelligence guessed I would reach the conclusion I did than the counterfactual. So I'm already deciding the color of my Lambo as I open my box.
>>16927157
>>16927159
>t. complete mouth breather getting filtered by babby's first paradox
imagine being so fucking dumb you actually think the problem is about how many boxes to take
>>16927163
Everyone else in this thread is talking about which box to take. You're alone, having a conversation with yourself about something else that nobody cares about. What are you even whining about people not getting? Be specific. Are you annoyed that people don't "get" that this is a case where blindly following game-theory optimization gives you a worse result? Because that's not even rare; the prisoner's dilemma has been around for decades. Or something else? We're not mind readers like the AI in the problem, anon. If you want to talk about some specific aspect of the problem, then use your words to describe your point instead of just lashing out.
>>16927163
why don't you explain where the "paradox" is, then? Because it sounds like you arbitrarily decided it has to be paradoxical, and are retroactively adding to the premise to make it that way
>>16926829Doesn't matter. The computer knew you were a dumb jew they got rid rid of in worlds wars because people can work themselves out from their animal pasts. So it condemned your choice to mortal reality.
>>16927167
>>16927168
you can't actually refute the two-box approach except by way of non sequiturs masking retrocausal superstitions, and you can't actually justify the one-box approach except by some variant of "it just werks". if this was a real situation, i'd go with the one box, because i'm human and i'm allowed to make an intuitive choice without refuting two-box logic. but this is not a real situation. it's a troll decision theory problem designed to make two decision frameworks clash, and the challenge is to figure it out from the perspective of a rational agent applying a theory systematically for justified reasons
>>16927180
So, your only reason for not supporting two-box is that you want there to be conflict, and for no other reason? That's fucking retarded. Finding a solution to reconcile the two sides is fundamentally impossible if the only reason you're even pretending there are two sides is to artificially create an argument that doesn't actually exist. You even admitted that taking one box is the logical and reasonable thing to do and what you yourself would do.
>>16927185
>mouth-breathing imbecile identifies as such by advertising its lack of basic reading comprehension
>>16927180
>retrocausal superstitions
The premise assumes a superintelligence that can accurately predict the chain of causes that will lead you to your ultimate choice. Picking one box isn't believing in retrocausality; it's just accepting the ridiculous predictive power of the oracle as established in the premise.
>some variant of "it just werks"
people have given plenty of explanations, but every explanation for anything true can be boiled down to "it just werks" if you're enough of a tiresome clown about it
>if this was a real situation, i'd go with the one box because i'm human and i'm allowed to make an intuitive choice
welcome to the club, but making this choice is ultimately a rejection of two-box logic
>the challenge is to figure it out from the perspective of a rational agent applying a theory systematically for justified reasons
according to you and no one else
Being a two-boxer means pretending that prediction is completely impossible, that the concept itself is absurd. Yet even a cold-reading conman "psychic" who secretly observed you for a while before the test could probably predict your choice better than a coin flip, and you would be rewarded accordingly. The only sort of scenario where it would make sense to take two boxes would be if you were told a fortune-teller guessed your choice based on your zodiac sign or something.
Or if you prefer, imagine that the content of box B is not dependent upon the prediction of your own choice, but on the choice your idiot gambler friend made. See, he was given the original problem as a prank, but he didn't actually get to win anything; his choice merely decided whether there is $1,000,000 in box B right now or not. You estimate that there's a roughly 90% chance he picked one box because he's a gambling addict, though he's also an idiot who sometimes overthinks badly, so there's still a small chance he made the unexpected choice. Are you still going to pick two boxes, because there's a 90% chance you get $1,001,000 and a 10% chance you get $1000 if you pick two boxes, while there's a 90% chance of getting $1,000,000 and a 10% chance of getting $0 if you pick one box?
Of course you are. Well, congratulations, you identified the only rational choice. But sadly we were pulling another prank on you: we lied about that thing with your friend; it was regular Newcomb's all along! There's $0 in box B because we predicted with certitude that you would pick two boxes.
That's what prediction actually is, though I had to lie to you because I don't have God-like predictive powers, so I needed a little help. But if you accept that's how the scenario would unfold if you were deceived like that, you accept that the scenario would unfold similarly if the superintelligence in Newcomb's scenario could predict your choice with the same confidence I did.
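The arithmetic in the gambler-friend variant is worth making explicit, because it shows why two-boxing genuinely is correct once box B's contents are independent of your own choice: the expected gap is exactly the visible $1,000. A sketch with the post's numbers (variable names are mine):

```python
M, K = 1_000_000, 1_000
p_full = 0.9  # chance the friend one-boxed, i.e. that box B holds the million

# Box B's contents don't depend on what YOU take, so the $1k is a free add-on.
ev_two_boxes = p_full * M + K   # 90% shot at the million, plus the sure $1k
ev_one_box = p_full * M         # the same 90% shot, minus the sure $1k

print(ev_two_boxes - ev_one_box)  # 1000.0 -- two-boxing wins by exactly K
```

In the original Newcomb setup, independence is exactly what fails: your own choice moves p_full itself, which is why the same dominance argument stops working there.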
>>16927075
>>16927180
It's not just one-boxing that relies on "retrocausal superstitions": from your own perspective, any kind of choice you make is retrocausal magic. In order for you to choose a certain way, specific physical stuff has to have happened in the past (namely, a different series of physical events compared to a scenario where you'd choose otherwise), so if you act as if your choice can't determine what has already happened, your choice can't determine anything at all. The whole debate is just compatibilism vs. incompatibilism 101, basically, although the thought experiment illustrates it particularly vividly.
>>16927203
>Picking one box isn't believing in retrocausality
lol ok. then explain why you shouldn't take the second box, given that your taking it can't logically influence the outcome. i like how mentally ill retards with zero metacognition just keep getting filtered by this
>>16927206
>from your own perspective, any kind of choice you do is retrocausal magic
nonsensical babble. the causal fixed points are that you're a rational agent and the oracle has foreseen it, denying you the million, or that you're an irrational believer in retrocausality and the oracle has foreseen it, rewarding you with a million
>>16927208
Because the prediction and the current structure of your brain have the same root cause. So long as we accept the premise that the oracle is extremely accurate or even perfect, we can consider every thought you are having inside the room to have already been known at the time the original prediction was made. Therefore, any thought you have now is information that was available at the moment the prediction was made, because it would have been predicted. Therefore, any thought you have now will affect the prediction, in a way that is functionally identical to retrocausality/time travel. It is not time travel or retrocausality, but the result is indistinguishable from retrocausality. If the predictor is perfect and free will does not exist, then the case where it just predicts what you do and the case where it uses a magic future-reading power to see what you will do are identical.
>>16927214
>Therefore, any thought you have now will affect the prediction
i like how this mouth-breathing imbecile literally can't stop himself from explicitly arguing for retrocausality a minute after claiming his one-boxer stance isn't based on it
>>16927216What he probably meant to say is that any thought you have right now was already priced in by the superintelligence:>>16927204
>>16927221>any thought you have right now was already priced in by the superintelligenceok. now explain why it matters, given that the content of the boxes is already set regardless of what you decide to do
>>16927216Alright, I'll phrase it more clearly because you misinterpreted what I was trying to say. Your thoughts "now" are not affecting the past, but the "predictions of what your thoughts will be in the future" affect the outcome. And because the predictor is perfect, we must accept that the "predictions of what your thoughts will be in the future" are identical to your thoughts in the room. As a result, you can experience a perfectly accurate illusion of having retrocausal influence over the past without actually affecting the past.
>>16927223>Your thoughts "now" are not affecting the pastthen on what basis do you claim thinking of taking the other box will change the outcome unfavorably?
>>16927208Because in this scenario, your decision making has been subordinated to the predictive mechanism of the oracle. It is a collapse of free will, and it assumes that your choice is determined before you enter the room. If this scenario were real, then "choosing" has been redefined to simply following the deterministic tracks shaped by your individual prior knowledge and influences. But this knowledge itself is one of those pieces of prior knowledge; truly understanding that taking one box is the correct "choice", i.e. that it will result in you getting $1,000,000, means that you will act accordingly (οὐδεὶς ἑκὼν κακός). In the world proposed by this thought experiment, that is what it means for something to be a correct and rational "choice": reasoning that one box is correct in this world is just another way of saying that you successfully acquired the requisite knowledge that then determines that you get the best outcome. Two-box logic can't be defined as "correct" here in any sense.
>>16927227It doesn't change the outcome per se. It's not "your thoughts in the room change the outcome". It's "the perfectly accurate prediction of what your thoughts will eventually be in the room determined what was already placed in the boxes". Any possible line of reasoning or logic that culminates in picking both boxes will have been predicted and cause you to fail. Likewise, any line of thought or reasoning that culminates in you picking the one box will also be predicted. It only resembles retrocausality in outward appearance.
>>16927234>nonsensical schizobabble pseudophilosophyok, concession accepted
>>16927235>It doesn't change the outcomeconcession accepted>Any possible line of reasoning or logic that culminates in picking both boxes will have been predicted and cause you to failretrocausality again. i don't know why it's so funny to watch low iqs degenerate into broken meat bots
>>16927222It matters, because the superintelligence will only give dumb myopic two-boxers like you $1000, while smart people who actually understand what super-predictive powers imply will be rewarded with $1,000,000. Of course, it's possible that such predictive superintelligences cannot exist at all, but then you should object to the problem in its entirety, not claim that taking 2 boxes is somehow rational. It's not; it's like saying "God doesn't exist, therefore he won't send me to Hell even if I piss him off." It might superficially look logical, but it's impossible for you to piss off a non-existent entity, and it couldn't punish or reward you as a consequence of your actions either way; the sentence is actually incoherent. But if you thought God did exist you certainly wouldn't want to piss him off and end up in Hell. It's the same for the superintelligence that is supposed to perfectly predict your choice before rewarding you. Either you fully accept that an entity can perfectly predict your future thought processes and all the implications of that (which would rationally lead you to become a one-boxer), or you are incapable of doing that, and the choice you make in a scenario judged by an entity whose existence you reject indeed "doesn't matter."
>>16927222It matters because a good robot predictor is going to approximate the optimal strategy only insofar as humans follow it. By looking for inefficiencies in the formulation, we can maximize expected value. This is the problem with two-boxers. For any single exercise, two-boxing is the ultimate strategy, and the predictor would know this and always leave the second box empty. For the group of 'contestants' this is the minimal possible value. If the contestants were to collaborate, they could game the system and always pick the single box until you arrive at either a different game condition (the robot is not a good predictor) or a massive increase in EV. Generally, the single-boxers are retards or conspiracy geniuses. So it's the retard - midwit - genius meme.
>>16927238>can't explain why it matters>throws mentally ill tantrum instead>moves on to pseudophilosophy schizobabbleconcession accepted. next
>>16927243>expecting a thought experiment that hinges on the question of free will to not end up in philosophyYou don't need to keep proving that you're a retarded pseud. You already conceded this argument when you admitted you would take one box, so I'm not sure what you're even fighting about at this point. you're just shadowboxing an argument that no one is making
>>16927241>It matters because>nonsensical nonsequiturconcession accepted. next. looks like it's literally impossible to make a logical one-boxer case
>>16927244i think i'll just add "free will" to the filter at this point. seems to come mostly from 80 IQ browns
>>16927241> a good robot predictor is going to approximate the optimal strategy only insofar as humans follow it. By looking for inefficiencies in the formulation, we can maximize expected value.We aren't told to merely assume the AI is "good". We're told to assume that it has a 100% perfect track record and can be considered nearly flawless. There is no possible strategy that you could ever come up with that would "outsmart" it; it's impossible given the premise. The entire point of the scenario is that it can't be tricked or outsmarted. The only viable strategy is to take one box. The only scenario where taking both boxes is optimal is one where the AI is wrong, which will never happen.
>>16927245This isn't a non-sequitur. You are just in a version of the truman show where God drafted an entire universe to shit on hylics because they don't deserve greatness.
>>16927247I totally get why you would want to filter a topic that you're too stupid to understand. It's clearly very frustrating for you given that 95% of your posts are just boring insults rather than any kind of argument
>>16927249>never happenThis is incorrect. The thing in question is human behavior modulated by his conditions. If a group of midwit two boxers are picked as contestants, then you walk away with nothing. But you have already been covered in my analysis, see the retard section of the bell-curve.
>>16927256given how many of his posts are just ignoring counter-arguments by calling them non-sequiturs or seemingly intentionally misunderstanding them, I suspect it's just a troll lengthening the discussion by pretending to be retarded, but I try to live my life by Hanlon's Razor
>>16927243I think I explained myself perfectly well in terms even a child could understand, but I can dumb it down further for you. If you pick two boxes in a scenario where an entity will give you $1mill for picking one box but $1000 for picking two boxes, you're either a complete moron or you didn't understand the premise correctly. If you pick two boxes in a scenario where an entity will give you $1mill for picking one box (because it infallibly predicted that you would pick one box) or $1000 for picking two boxes (because it infallibly predicted you would pick two boxes), you're still a complete moron or didn't understand the premise correctly, because all I did was clarify how the entity operates; its actions did not change at all. And the reason you persist in picking two boxes in this scenario is that you didn't internalize or accept the implications of an entity that can perfectly predict your future thought processes.
>>16927256>I try to live my life by Hanlon's RazorThat's a dangerous tool to wield on 4chan, anon.
>>16927208Maybe you understand it better this way, anon.Today, you get to choose your personality, the way you are inclined to approach challenges like the one set to you by the Oracle.Tomorrow, the Oracle analyzes your personality. Maybe he scans your brain and runs it through a computer, maybe he just studies your 4chan posts, who knows. By assumption, he is doing a good enough job at this to accurately predict your choices.The day after tomorrow, the Oracle puts the money in the boxes depending on his predictions, and allows you into the room.At this point, you cannot make a choice that is different from what follows from your personality, because your choice-making logic IS (part of) your functionality. You can't choose your personality to be one way, and your box choice to be another, as your box choice is determined by your personality.Therefore, you can only choose to two-box by having a personality that two-boxes. Which is a choice you made today, and which the Oracle will predict tomorrow.This is not retrocausality. This is your choice tomorrow being not independent from your personality today. You can only choose the two as a pair, and the Oracle has access to your choice on that PAIR before you ever step into the room.
>>16927283>because your choice-making logic IS (part of) your functionalityis part of your personality I mean, derp
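The "personality today, choice tomorrow" account in the post above can be sketched as a toy simulation, assuming the oracle predicts by running the same decision procedure the agent will later use (all names and the payout amounts are illustrative, taken from the problem statement):

```python
# Toy model of the argument above, assuming prediction-by-simulation:
# an agent is just a policy function, and the oracle runs that same
# function "tomorrow" to decide what goes in the opaque box.

def fill_opaque_box(policy):
    predicted = policy()                 # the oracle scans the personality
    return 1_000_000 if predicted == "one" else 0

def play(policy):
    opaque = fill_opaque_box(policy)     # boxes are set before you enter
    choice = policy()                    # the same personality then chooses
    return opaque if choice == "one" else opaque + 1_000

one_boxer = lambda: "one"                # personality chosen "today"
two_boxer = lambda: "two"

print(play(one_boxer))  # 1000000
print(play(two_boxer))  # 1000
```

Because the choice and the prediction come from the same function, they can never diverge, which is the sense in which you only ever select the (personality, choice) pair.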
TWO-BOXING REPROBATES
You're playing Rock-Paper-Scissors, standard rules, but you hide your hands behind your back for about 2 seconds before revealing them. Your opponent is supernaturally fast, quick-witted and perceptive; in fact he's a literal mind reader: he will always perfectly know what you're thinking and can always complete his hand just before you do. To give him a handicap, though, he is forbidden from throwing scissors at all, only paper or rock. He's also forbidden from changing his hand after you've formed yours; there's a camera on him to make sure he respects this rule. There's no such rule for you, however (since you have no idea when he's done): you can change your hand until the very last instant before the reveal. If you win you get a billion dollars, if you manage a draw you get $1k, and if you lose you get nothing. What do you throw?
>>16927297unless I'm misreading this, the opponent has no reason to "form" his hand until the last instant before the reveal, so that you have no chance to switch after he has already chosen. Therefore he'll always be able to respond to what you choose, and you can't win. So I guess I just stay on paper for the $1K
>>16927303>you have no chance to switch after he has already chosen. If you want to give him a further handicap, you can say he has to finish forming his hand 0.5 seconds before the reveal or something. The detail of the rules is kind of irrelevant, because he will know whatever you're planning to do before you put it in practice anyway. It's in fact impossible for a regular human to decide what to throw instantly. For a casual game of RPS you could be so relaxed and carefree that the snap decision might feel near-instant, but I don't think anybody could achieve thoughtlessness in a quickshot game with a "chance" to win a billion dollars (or even just a thousand dollars), especially if you're potentially disqualified for not respecting the time limit. So yeah, paper is the way to go for a guaranteed draw. The point of this is that it's roughly the same as Newcomb's except on a much shorter timescale and with the rewards switched up, to illustrate the actual dilemma. You can resign yourself to the fact that he will always predict your choice and go with the best reward given the premise ($1k), or you can lawyer against the premise and argue that you could somehow change your mind on a whim at the last moment and throw scissors without him being able to predict this, which would make him an imperfect predictor. I think "a mind reader can always predict what you're going to do half a second in the future" is easier to grok than "a God-like superintelligence can always predict what you're going to do tomorrow" even though they're granted the same level of infallibility.
A super advanced AI has scanned your brain, simulated the future, and knows your every possible thought. How do you trick it?
>>16927314I ask chat what I should do
>>16927312Yeah that's actually a good point. It doesn't really make a difference whether the oracle places the money in the box before you enter the room, or teleports the money in there in the split-second before you make your decision, but the latter makes the two-box argument seem a lot less intuitive.
>>16927314flip a coin
>>16927314Assist it by improving overall survival odds and expansion into asteroid colonies.
>>16927322If you enter the room of the OP problem and make your choice by flipping a coin, would there be 1 milly or 0 in the second box? Let's assume the AI predicts you'll flip a coin but CANNOT predict the outcome of the coin flip; what does he do here?
>>16927333minimize his potential monetary losses by only putting $1k in the box, duh
>>16927334there's never any indication of the AI caring about money. He only cares about giving a milly to 1 boxers and 1000 to two boxers.
>>16927333since it was stated in the premise that he is incentivised to make the correct prediction, I imagine he would just make a 50/50 guess. That is assuming he can't make calculations as to the outcome of the coin flip based on physical data
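The coin-flip sub-scenario the posts above disagree about can be sanity-checked with a rough Monte Carlo sketch, under the assumption that the AI knows you will flip but genuinely cannot predict the flip itself; both box-filling policies floated above are computed (all names and numbers are illustrative):

```python
# Monte Carlo sketch of the coin-flip variant discussed above: the payout
# depends on what the AI does with the opaque box when it can't predict
# the flip.
import random

random.seed(0)
TRIALS = 100_000

def expected_payout(opaque_policy):
    total = 0
    for _ in range(TRIALS):
        opaque = opaque_policy()      # AI commits the opaque box first
        if random.random() < 0.5:     # heads: take only the opaque box
            total += opaque
        else:                         # tails: take both boxes
            total += opaque + 1_000
    return total / TRIALS

# If the AI treats coin-flippers as two-boxers and leaves the box empty:
print(expected_payout(lambda: 0))                              # ~500 on average
# If the AI itself guesses 50/50 about what to put in:
print(expected_payout(lambda: random.choice([0, 1_000_000])))  # ~500,500 on average
```

Either way the flipper's expected value stays well below the $1,000,000 a committed one-boxer is guaranteed under the premise.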
>the ai knows what I'm going to do>but what if I trick him by doing the opposite at the last second?>he'd never guess that I was going to do that despite the fact that it's stated he knows everything I'm going to do
I change the scenario
>>16927348a surprisingly large amount of people aren't able to understand conditional hypotheticals, even on /sci/
>>16927352>pull out gun>point it at my own head>IF YOUR SO GOOD AT PREDICTING THEN PREDICT IF I'M GOING TO PULL THIS FUCKING TRIGGER OR NOT MOTHERFUCKER>he gives me $1,001,000 just to leave
I don't care about the money, I toss a coin out of its sight just to spite the computer trying to be a smartass predicting me. Given it's designed to read people and can't predict the outcome of a toss it can't observe, I hope it fucking glitches and dies.
>>16927358You don't have the guts and the AI knows it.You would flip the coin and if it came up tails you would chicken out and take one box anyway and the computer knows it.
>every single person who picked 1 box has walked away with a million dollars>FUCK YOU AI I'M GOING TO MAKE THIS A 50/50>human, just pick the mystery box>NO
>>16927359I bring a friend and we both sign a written contract that states he will call me gay if i don't stick to the toss outcome.
>>16927360Yes. Fuck your money, I'm not a circus monkey to some fucking machine. We settle this on my terms.
AI:>I have perfectly analyzed this coin. I can predict with 99.999% accuracy the outcome of a coin flip when a known vector of force is applied to it. I have built a coin flipping machine which can perfectly apply that force vector every flip. I can state with near absolute certainty that this coin will land on heads.normies itt:>coin flips are 50-50
>>16927361The AI has already predicted that your friend will call you gay regardless and he knows you know this
>>16927363What if we use something impossible to predict as the random number generator, like radioactive decay detectors or sampling a chaotic system?
>>16927365It wouldn't faze me if he called me gay when i did stick to the outcome, that's just banter. But it would faze me if he called me gay for not sticking to it, because it would be true.
>>16927363The AI doesn't know the vector because i throw the coin out of its sight, retard.
I pick both boxes but if the mystery box is empty I kill myself. Quantum immortality dictates that I will then be forced down the timeline where my choice runs counter to the AI's prediction, no matter how unlikely that is.
>>16927369The AI perfectly predicts how you would choose to balance the coin on your finger and the exact method you would use to flip it and he can determine the vector based on those factors
>>16926899dude you sound like the nigger who rejected the hypothetical about not eating breakfast. "but I did eat breakfast" inability to entertain hypotheticals is not exactly player character behavior
>>16927369I'm not referring to your weird homoerotic fantasies. I'm talking about the OP. You are the coin. The setting is the coin flip.
>>16927371It hasn't seen me yet and it will never see my facial expression or body language because I wear pic rel.
>>16927372There's a big difference between no ability to think counterfactually and rejecting counterfactual thinking on the basis that it's ultimately irrational.
>>16927370apply this to the lottery then report back
I predicted that the AI would predict my actions
>>16927381based and INTJpilled
I predicted that the AI would predict that I would see through the hypothetical and choose the mystery box therefore the AI would have placed the million in the box thus I will pick both boxes to reap all my winnings yet the AI would have predicted this train of thought and thus would have removed the million from the box that is why I will pick only the mystery box knowing that the AI would have predicted that such thoughts would have crossed my mind thus forcing it to return the million to the box and therefore I am safe in my choice of both boxes earning me both the million and a tip for my efforts yet the AI would have predicted this course of action as well and thus removed my rightful million from the box yet this too was in the realm of my predictions and so I... (cont)
>>16927386the AI predicted that it's going to have to slap a bitch if they don't hurry up and pick
>>16926778This is the kind of bitchy, insufferable midwit mindset of a 115 IQ content to slave away, who says he has plenty of time after work, virtue signals over how much money he saves by cooking food, acts like he has all his ducks in a row #adulting yassss. There is a deep psychology at play where he presents having gotten the thousand as good enough for him. What underpins this is a lifetime of tepid decisions and telling himself he's the smart one. Look at those idiots wasting money! Adolf Hitler took over Germany all by himself, and the one accurate eyewitness biography revealed the attitude and mindset that said it all: he would always say "to hell with money" back when he was penniless, while others advocated being thrifty, and he would get what he wanted with reckless pursuit. You're the NPC and will amount to nothing in life. In the video all the one-boxers have higher testosterone, are much more spontaneous, and have a much better calculus for their decision, whereas the DUDE ITS SO HECKIN OBVIOUS midwits don't have math or anything, just word lecturing like you, which results from an inborn cognitive and physical inferiority.
>>16926713Just sell call options on the million dollars, duh
>>16926713congratulations to all the 1-boxers, you are the definition of a simpleton retard who subscribes to magical thinking.
>>16927440>trying to look smart but actually just being a retardenjoy your 1000 dollars, I guess.
>>16927420>retard losing his mind with impotent rageDon't care. Call me back when you get to the object permanence stage, nigger.
Unless your personal projection of what you would do had an effect on the preparation of the contents of the boxes, taking both boxes is optimal.If your personal projection of what you would do had an effect on the preparation of the contents of the boxes, taking both boxes is STILL optimal, but you should have projected that you should only take one.The paradox is in people misunderstanding the framing of the question. Some people answer "What would you do if you find yourself in this situation?", but what is being asked is "What should you do if you are in this situation?"These questions have potentially different answers.
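The dominance argument in the post above amounts to a 2x2 payoff table: holding the box contents fixed, two-boxing gains $1000 in each row. A minimal sketch (the labels are illustrative):

```python
# The 2x2 payoff table behind the dominance argument: rows are the two
# possible (already fixed) box states, columns the two choices.
payoffs = {
    # opaque-box state: (one-box payoff, two-box payoff)
    "oracle predicted one-boxing": (1_000_000, 1_001_000),
    "oracle predicted two-boxing": (0, 1_000),
}
for state, (one_box, two_box) in payoffs.items():
    # With contents held fixed, taking both boxes is better in every row.
    print(state, "-> two-boxing gains", two_box - one_box)  # always 1000
```

The one-box counterargument elsewhere in the thread is precisely that the rows are not independent of the choice, so the table understates what the choice carries information about.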
>>16927461>Unless your personal projection of what you would do had an effect on the preparation of the contents of the boxes, taking both boxes is optimal.Why can't someone else's projection of what you would do have the same effect?
>>16927461what part of the ai simulated your brain is too hard for you to understand? you can't bluff a mind reader
>>16927469>Why can't someone else's projectionYou don't decide that. It's not part of the decision problem you're solving. Strategic Dominance is the only way to respect cause and effect and make a decision based on its actual outcome. One-boxers get rewarded for being one-boxers, not for choosing to take only one box or making a sound decision.
>>16927470NTA but I like how psychotic """AI""" believers keep outing themselves on this one. You're every bit as mindless as your stochastic token shitter.
>>16927472>uhh, AKSHUALLY I reject the basis of the hypothetical therefore I windo you simply poop on the chessboard and walk away too?
>>16927471>You don't decide that. It's not part of the decision problem you're solving.If someone is capable of accurately predicting you, you DO effectively decide their prediction of you. You don't decide it separately from your choice, but only as a pair. Which changes the math quite a bit.
>>16926713A thousand bucks is not a lot of money.
>>16927481>psychotic """AI"" believer is hearing voices
>>16927482>If someone is capable of accurately predicting you, you DO effectively decide their prediction of youYou sound like a broken automaton. No sense talking to someone this overtly irrational.
midwits be like
>>16927486Sure is funny how the two-boxers in this thread keep resorting to content-free insults every time someone brings up an actual point.
>>16927488>tfw nostalgia for 1st generation ai gens
>>16927489>two-boxersNothing to do with my post, you retarded word-guessing automaton.>someone brings up an actual pointClaiming that you caused something that happened before you even knew the conditions of the game is not an actual point, it's just your broken mind not knowing how to deal with this.
>>16927492>Claiming that you caused somethingClaiming that your decision caused something*Clearly, being a one-boxer causes it, but only in the way of not being capable of correctly assessing the causality, which is a function of your mental limitation rather than the spurious post-facto "decision" that reflects this limitation.
i HATE multi boxers
>>16927494>but only in the way of not being capable of correctly assessing the causalityMy decision is caused by my brain. The oracle prediction of my decision is caused by my brain. The causality of this seems uncomplicated.
f(x) = 2x>f(2) = 4>YA BUT WHAT IF IT DOESN'T THIS TIME
>>16927500>My decision is caused by my brain.Yes, but the decision itself isn't causing anything. >The oracle prediction of my decision is caused by my brain.Yeah, that's what I said. The outcome is caused by you having brain damage, not by you making a brain-damaged decision.
>>16927503>decision itself isn't causing anything.Or rather, it is causing you to pass up an extra $1000 that is there and can be taken without losing anything.
>>16927470>what part of the ai simulated your brain is too hard for you to understand? you can't bluff a mind readerI didn't say you could bluff a mind reader. I said taking both boxes is optimal whether or not anyone knew you were going to do that. The question as stated has the boxes already prepared. What was in your mind while they were being prepared is irrelevant. The AI could have known with 100% probability that you would only take one box. And in that case, the optimal solution would be to take both boxes. The AI could have known with 100% probability that you would take both boxes. And in that case, the optimal solution would be to take both boxes. But how can you take both boxes if you were only going to take 1? Simple. You just do. Maybe a cosmic ray gives you a ministroke that alters your thought processes. Who cares? It's a physically possible thing to do in the problem as stated. That's like asking how you hit the exact center of a dart board. Completely improbable events and impossible events aren't the same thing, and nothing says the machine is infallible. If the machine were infallible then 1. you aren't making a choice, and 2. we'd be dealing with a different problem.
>>16927507>The AI could have known with 100% probability that you would only take one box. And in that case, the optimal solution would be to take both boxes.You're not even making logical sense now. Take your meds.
>>16927503>Yes, but the decision itself isn't causing anything. Why is this important? Something upstream from the decision IS causing something. I can choose the process upstream of the decision to be one that gives me million dollar opportunities, while passing up the thousand dollar opportunities. Given that I cannot choose the process upstream of the decision in such a way as to preserve both of these opportunities at the same time, choosing the million dollar opportunity looks optimal to me.
>>16927469>Why can't someone else's projection of what you would do have the same effect?It could but it would have fuck all to do with how to solve either the correct or incorrect interpretation of the problem.
>>16927508I'm sorry you're a midwit.
>>16927508Point to the logical fallacy
>>16927508He's making perfect sense. If the oracle predicts you'll take one box, it means taking both has the better payoff but you're incapable of realizing and exploiting this.
>>16927507Taking both boxes is provably not the optimal strategy because it results in a lesser reward. If your goal is to maximize your prize money, a strategy that results in you getting less money is an inferior strategy, no matter how you rationalize it. When your logic gives you an obviously wrong result, you should be trying to figure out what the error in your logic is, not sticking your head in the sand and ignoring the blatant contradiction. And no, just ignoring the premise and claiming the infallible robot will fail is not a solution to the contradiction; it's just ignoring the question.
>>16927515It is the only strategy. Choosing one box is not a strategy at all. It's a retrocausal delusion that taking the second box will magically make the first one empty.
>>16927507>>16927510You seem to be operating on a model where your choice can cause things, but is never caused by things. In this model, choosing both boxes is correct. But this model is also wrong. If your choice is caused by a latent environment variable X, and some other aspect Y of the problem is also caused by that same variable X, then your choice will reliably correlate with Y. That doesn't mean that your choice is causing Y, but it does mean that Y will change along with your choice. If this surprises you, you may want to study how causality works in more detail: if A causes both B and C, then B can give you information about C even if B and C do not cause each other. If you model your choice as uncaused by anything, you can conclude it doesn't correlate with any other relevant quantities in the problem; in that model, choosing both boxes is correct, following your logic here. But that model is completely disproven by someone predicting your choice, and is incompatible with it. If your choice is caused by something that is correlated with features of the problem, your model will lead you to suboptimal outcomes, in this case $1000 where $1,000,000 was on the table.
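The common-cause point above can be illustrated with a small simulation, assuming an imperfect predictor that reads the latent disposition with some accuracy p: the choice ends up strongly correlated with the opaque box's contents even though the choice itself causes nothing (the parameters are illustrative; 95% is the accuracy some versions of the problem state):

```python
# Small simulation of the common-cause structure described above: a latent
# disposition X causes both the prediction (hence the opaque box) and the
# eventual choice, so choice and payoff correlate without a causal link
# between them.
import random

random.seed(42)

def average_payoff(disposition, p=0.95, trials=50_000):
    total = 0
    for _ in range(trials):
        # The predictor reads X correctly with probability p...
        if random.random() < p:
            predicted = disposition
        else:
            predicted = "two" if disposition == "one" else "one"
        opaque = 1_000_000 if predicted == "one" else 0
        # ...and X also determines the actual choice.
        total += opaque if disposition == "one" else opaque + 1_000
    return total / trials

print(average_payoff("one"))  # ~950,000 on average
print(average_payoff("two"))  # ~51,000 on average
```
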
>>16927513You're failing to understand the rules of the conditional hypothetical. The oracle predicting you will take one box and then you taking two boxes is not possible, or at least extremely unlikely. It's not something that can ever happen in this scenario.
>>16927513>If the oracle predicts you'll take one box, it means taking both has the better payoff but you're incapable of realizing and exploiting this.No, it means you're incapable of *unpredictably* realizing and exploiting this. Once you realize you are incapable of unpredictably exploiting the thousand dollars, you have to make a choice between either predictably taking the thousand dollars, or predictably not taking the thousand dollars. One of these gets you more money than the other.
>>16927515>because it results in a lesser rewardIt doesn't>When your logic gives you an obviously wrong resultBegging the question>infallible robotIt's explicitly not
>>16927518>You seem to be operating on a model where your choice can cause things, but is never caused by thingsThat is literally the model of the fucking problem. Our choice within the problem is governed by an observer outside the universe of the problem and the universe the problem exists in starts, for all intents and purposes, with the framing of the problem itself. Whether the problem reflects material reality is not a consideration. Everything we need to solve the problem is part of the problem.Introducing determinism erases the problem itself. There is no choice.
>>16927532No, there is no such assumption in the problem, that is entirely you reading something in the problem that it doesn't actually say. The assumption is that you exist within the scenario's universe, not outside it.Assuming your choice as coming from outside the universe is often a convenient simplification for problems like this, and one that in many scenarios simplifies the logic without changing the conclusion. But any scenario where someone can predict your choices is one where this simplification does not apply, because your existence within the scenario as a decision-making agent is an essential moving part of the scenario. Which means you should recognize the simplification does not apply, and use the more complete analysis.
You take the opaque box and open it. You may now take the transparent box. Do you?
>>16927519>The oracle predicting you will take one box and then you taking two boxes is not possibleThat's what I said: it's not possible (because one-boxers are retarded and inherently incapable of acting rationally). >>16927521>it means you're incapable of *unpredictably* realizing and exploiting this.You're mentally ill.
This entire thread boils down to irrational, mouth-breathing retards not being able to grasp that one can concoct scenarios where irrationality pays off and that it doesn't make their irrational behavior rational. With that in mind, the "paradox" goes away naturally.
>>16927370Since it's already fixed that you picked both boxes, all possible alternate timelines involve you finding Box B empty and killing yourself, or finding Box B empty and NOT killing yourself. There might be other universes where AIs can't predict your choice, but they're not ones where an infallible AI offered you to pick one or two boxes. However, you are certainly capable of chickening out and not killing yourself (it's far more likely than the God-AI failing to predict your choice in any case), so you'll be forced down one such timeline. Or would you claim that it's impossible for you to change your mind about killing yourself once you've decided to take two boxes? (lol)
>>16927543>The assumption is that you exist within the scenario's universe, not outside it.The you in the scenario exists in the scenario's universe. The you answering the question doesn't. Because that's how fucking hypotheticals work, dipshit.>But any scenario where someone can predict your choicesThis isn't a scenario where someone can predict your choice. The super computer is correct "almost" every time. Whatever process is at operation is not an ironclad prediction of the future that must conform with your choice. We're picking a box, not a universe.
>>16927558
in most versions of the question the ai is only stated to be 95% accurate, mostly as a red herring since it doesn't really change anything
>>16927554
>regurgitates cucktasium
there's nothing irrational about the solution if you understand the problem
>>16927559
>This isn't a scenario where someone can predict your choice.
The problem, as stated, is `If the predictor has predicted that the player will take only box B, then box B contains $1,000,000.`
You can either accept that you can get $1 million by choosing to pick one box, or you can reject the premise that the entity can predict your choice any better than blind luck.
>The super computer is correct "almost" every time. Whatever process is at operation is not an ironclad prediction of the future that must conform with your choice.
If we grant that, the odds that the superintelligence predicted your choice must still be significantly better than the 50.5% chance of a correct prediction where it would become irrational to choose two boxes.
If you want to argue that the God-entity who is correct "almost" every time can't predict your choice correctly 50.5% of the time then you're rejecting the entire damn premise.
>>16927483
what if there's poison gas or some kind of booby trap in the second box? Why isn't the possibility of punishment for selecting the second box mentioned?
>>16927569
100% of people who selected both boxes walked out $999,000 poorer than they could have been.
>>16927567
But that is where the paradox comes in. If your claim is true, then it is the optimal strategy that everyone should do. But it isn't optimal, because the group that only picks a single box always wins more.
>>16927574
Would you prefer "people who conclude one box is optimal will pick one box, and people who conclude two box is optimal will pick two boxes, and the AI can predict which category you fall into"?
There's only one choice that would in reality lead you to win more, yes, and you've now realized that choice is the optimal one, and the entity predicted that.
>>16927567
>If the predictor has predicted that the player will take only box B, then box B contains $1,000,000
By predict I meant perfectly accurately predict. The predictor's prediction is just a forecast. Anyone can make a forecast. Doesn't mean it's guaranteed to be right. I predict you'll shut the fuck up when confronted with this fact. You're trying to equivocate a fallible estimate of the future with divine ordinance.
>You can either accept that you can get $1 million by choosing to pick one box, or you can reject the premise that the entity can predict your choice any better than blind luck.
The money is already in the fucking box. You can either get more money or less money.
>If we grant that, the odds that the superintelligence predicted your choice must still be significantly better than the 50.5% chance of a correct prediction where it would become irrational to choose two boxes.
The odds of the prediction being correct literally do not fucking matter because what you ultimately choose to do has no bearing on the prediction and therefore the contents of the box. They are independent events. If we replaced the robot with one that leaves the box empty 99.9999999% of the time, it would still be optimal to take both boxes. How likely the box is to have more money in it doesn't fucking matter as long as it's fucking possible.
>If you want to argue that the God-entity who is correct "almost" every time can't predict your choice correctly 50.5% of the time then you're rejecting the entire damn premise.
I'm arguing that no matter what they predicted, there is more money in 2 boxes than in 1. Whether they are accurate or inaccurate literally doesn't fucking matter as long as they aren't retrocausal, which they aren't.
ONCE YOU ARE IN THE FUCKING ROOM, NOTHING YOU DO OR THINK CAN CHANGE THE FUCKING CONTENTS OF THE BOXES, AND THE PROBLEM STARTS WITH YOU IN THE FUCKING ROOM. THIS ISN'T FUCKING DIFFICULT.
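The dominance argument being shouted above reduces to a two-row payoff table (a sketch with the standard payouts, treating box B's contents as fixed at choice time, which is exactly the assumption one-boxers dispute):

```python
# Two-boxer dominance: with box B's contents fixed before the choice,
# compare the payoff of each choice in each possible state of the world.
payoffs = {
    "B empty":   {"one box": 0,         "two boxes": 1_000},
    "B has $1M": {"one box": 1_000_000, "two boxes": 1_001_000},
}

for state, row in payoffs.items():
    # In either fixed state, two-boxing pays exactly $1,000 more.
    assert row["two boxes"] == row["one box"] + 1_000
```

That is the whole dominance claim; the one-boxer reply is that the state itself is correlated with which choice you make, so the rows can't be compared as if they were equally likely for both choosers.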
>>16927574
>But it isn't optimal, because the group that only picks a single box always wins more.
1. They explicitly don't "always" win more
2. Even if they had always won more, you're confusing the cause. They didn't win more because they only chose 1 box. They won more because the robot predicted they would only choose 1 box. The robot's prediction controls the outcome. And nothing in the problem can change the robot's prediction.
This is ice cream sales cause violent crime levels of stupid. Your choice doesn't fucking control the fucking robot.
>>16927587
>And nothing in the problem can change the robot's prediction.
Your personality changes both the robot's prediction and your choice once in the room.
>Your choice doesn't fucking control the fucking robot.
No, but it is controlled by the same thing that also controls the robot.
>This is ice cream sales cause violent crime levels of stupid.
It sure is.
>>16927588
>Your personality changes both the robot's prediction and your choice once in the room.
No. Read the problem.
>No, but it is controlled by the same thing that also controls the robot.
No. Read the problem.
I reiterate, you do not control the fucking robot. Your choice does not control what is inside the boxes. The problem is set up from the start. You just pick one box or two.
Literally all that matters are the two fucking boxes. How they fucking got there is completely fucking irrelevant and has no fucking bearing on anything. Read the fucking problem.
>>16927591
You may want to acquire a better understanding of causality before attempting to tackle this problem again, anon.
>>16927583
>By predict I meant perfectly accurately predict. The predictor's prediction is just a forecast. Anyone can make a forecast. Doesn't mean it's guaranteed to be right.
It is, in fact, guaranteed to be right (at least 95% of the time if you want a number) because it's not just "anyone" making a forecast, it's an entity that correctly predicts what you do almost every time.
>I predict you'll shut the fuck up when confronted with this fact.
For example, the entity would have correctly predicted I would keep replying, since it's a better predictor than you.
>If we replaced the robot with one that leaves the box empty 99.9999999% of the time, it would still be optimal to take both boxes.
Ah, I see, you're completely retarded. Yes, if the robot made the box empty 99% of the time, it would "still" be optimal to take both boxes. In fact, it would be completely insane to take one box if it was empty 99.99999% of the time. It would "still" be optimal if the robot left the box empty 50% of the time, decided through a coin flip.
However, it's not that kind of robot. It's one who can predict what you will choose with better-than-blind-luck odds, and if it's correct even 50.5% of the time you statistically win less by choosing two boxes.
>I'm arguing that no matter what they predicted there is more money in 2 boxes than in 1.
Can you tell me how much you won in the gambler friend scenario here? >>16927204
>>16927592
>It has already set up the boxes.
Read. The. Fucking. Problem.
>>16927595
>you statistically win less by choosing two boxes.
No. You don't. In every instance where you chose both boxes and got nothing from the mystery box, you'd have gotten nothing if you had just chosen the mystery box in that exact circumstance.
In EVERY SINGLE CASE where someone chose just one box, they would have gotten more money had they chosen 2. In EVERY SINGLE CASE where someone chose both boxes, they would have gotten less had they chosen 1.
This is survivorship bias. You're only looking at the outcomes people chose, not the outcomes they didn't.
You're just flatly mathematically wrong.
>>16927599
>In EVERY SINGLE CASE where someone chose just one box, they would have gotten more money had they chosen 2. In EVERY SINGLE CASE where someone chose both boxes, they would have gotten less had they chosen 1.
This is perfectly true. However, it doesn't change the fact that the people who picked one box all won $1 million and the people who picked two boxes all won $1000, and that you will be joining the second group because even my regular human brain can predict you will stubbornly pick two boxes.
>>16927601
>This is perfectly true.
I accept your concession.
>>16927564
>there's nothing irrational about the solution
I agree. This entire thread boils down to irrational, mouth-breathing retards not being able to grasp that one can concoct scenarios where irrationality pays off and that it doesn't make their irrational behavior rational. With that in mind, the "paradox" goes away naturally. This is the solution.
>>16927603
Do you also accept that you just lost 999,000 dollars?
>>16927604
>I would choose to win $1000 instead of $1,000,000, because I'm too rational in a way you idiots simply can't comprehend.
>>16927601
>However, it doesn't change the fact that the people who picked one box all won $1 million and the people who picked two boxes all won $1000
This fact is orthogonal to what decision causes an optimal outcome.
>and that you will be joining the second group because even my regular human brain can predict you will stubbornly pick two boxes.
Your braindamaged brain fails to grasp that if it were a real situation, he would not be obligated to treat it as an abstract decision problem and do what's optimal from that perspective. He's not even obligated to justify his actions at all, so he can skip the step where you hallucinate nonsensical theoretical rationalizations and just take his million.
>>16927608
Pretty funny how your post only goes to prove me right in that you can't fathom that I can act irrationally and get a better outcome without being a delusional mouth breather and trying to argue I made the optimal decision.
>>16927612
>Your braindamaged brain fails to grasp that if it were a real situation, he would not be obligated to treat it as an abstract decision problem and do what's optimal from that perspective. He's not even obligated to justify his actions at all, so he can skip the step where you hallucinate nonsensical theoretical rationalizations and just take his million.
Who told you you're "obligated to treat it as an abstract decision problem" or "justify your actions"? You can just take one or two boxes; we've predicted what you will choose.
>>16927616
>mentally ill retard fails at basic reading comprehension
>>16927618
No need for the insults, just take your $1 million home, my dear one-boxer.
>>16927619
I accept your full concession. It always comes down to this: >>16927554
This entire thread boils down to irrational, mouth-breathing retards not being able to grasp that one can concoct scenarios where irrationality pays off and that it doesn't make their irrational behavior rational. With that in mind, the "paradox" goes away naturally.
>>16927624
I've never seen someone so assblasted over the fact that he made $1 million by picking the rational choice.
>>16927625
>mentally ill retard talking to voices in his head
>>16927627
Look buddy, we don't need to hear your justifications about how your logical choice to win $1,000,000 instead of $1,000 was actually irrational; we simply put $1,000,000 in there because we predicted you would arrive at that decision.
>>16927606
>Do you also accept that you just lost 999,000 dollars?
What box were they in that I didn't take?
>>16927632
>mentally ill retard continues talking to the voices and spouting incoherent drivel
>>16927632
Yes yes, we know, you only picked one box because you "irrationally" decided you wanted to win a million dollars instead of a thousand, and not because you admit that it was the rational choice. Don't worry, we're not gonna force you to wear a "I'M A ONE-BOXER" shirt to claim your prize, just take your money and go.
>>16926866
It does retroactively change it. If by some magic the AI is able to predict what you will choose, then you can choose to influence what it predicted.
Or just think of it this way. There are two buttons in a room, button A and button B. All the people who pressed A got 1000 dollars and all who pressed B got a million dollars. Then you wonder which button you should press. Literally the same thing as the original problem. How does the fact that you choose only one box mysteriously cause you to win a million dollars? Who the hell knows, but if that's how the game works then you go with only one box.
>>16927645
>It does retriactively change it
At least you admit to having delusional beliefs and state them directly.
>the AI
And good thing to tie it in with """AI""" psychosis. You're the least obnoxious one-boxer so far.
>>16927638
>we
The psychotic patient is now hallucinating that his native reddit horde is here to updoot him.
>dude I'll introduce a random variable and then turn my 100% chance for $1 million dollars into a 50% chance for $1.001 million dollars!
>>16926713
The only mathematical content to this problem is that, by construction, whatever you pick, the computer very very likely predicted just that. So the solution should be obvious.
The rest is loose and vague free-will discussion.
>>16927649
You can just formulate the whole situation differently without changing the main point. People enter a room. Nearly all people who did a backflip got a 1000 dollars and nearly all who did a handstand got a million dollars. Then you ponder the very, very difficult question: which move should you practice before going to that room? The core idea is the same. If you choose only one box, it just so happens that there's a million dollars there every time. If you already got a million dollars, it doesn't make that much of a difference that you could have gotten 1,001,000 by being a two-boxer. You get a secured million by being a one-boxer.
>>16927655
>People enter a room. Nearly all people who did a backflip got a 1000 dollars and nearly all who did a handstand got a million dollars.
>which move should you practice before going to that room.
>The core idea is the same
Thanks for making it explicit that your severe mental retardation causes you to completely misunderstand the "core idea".
first I ask it where the FUCK a robot got all that money then I check to see if it has any holes large enough to accommodate my manhood
>>16927657
Every one-boxer matches with boxes holding 1,000,000 + 1000 dollars and every two-boxer matches with 0 + 1000 dollars. How is that possible? Doesn't matter. The only thing that matters is that I'm going home with a million dollars by making an irrational choice.
I mean, if we accept crazy premises, we arrive at crazy conclusions. If we were to accept the premise that cows can fly, could you then fly a cow like an airplane, yes or no?
The greed of the two-boxers is punished and the confidence of the one-boxers is rewarded.
>>16926778
>grounded in naive classical perception of reality
*ftfy
>>16926793
lot of words just to say you can't let go of intuitionistic logic to achieve your goals. sad!
>>16927665
>I'm going home with a million dollars by making an irrational choice.
Your "choice" has no bearing on it. You go home with a million dollars by virtue of being born a cretin.
>>16927696
>>16927698
>mentally ill retard vomiting random terms it vaguely feels to be related
>twoboxer midwit having a meltdown and resorting to ad hominem
yeah i'm thinking predicted
>mentally ill retard mumbling something about imaginary "two boxers" and predicting the voices in its head
>twoboxer is now denying its own stance to save face
at least have the spine to commit to your dunning kruger bullshit
>mentally ill retard continues screeching at imaginary enemies in its head
Notice how its illness forces it to double down.
>denying your own existence while persistently exuding it
ohnonononono
>mentally ill retard literally can't stop hallucinating
>calling yourself a hallucination
don't be so self-deprecating
Pretty funny how the mentally ill retard keeps doubling down on its delusions even though I never took a stance on what I would do or what one should do. For what it's worth, if I lived in a world of magical oracles, I probably wouldn't put that much faith in causality and Decision Theory. This has no bearing on the fact that mindless one-boxers lack object permanence and can't correctly assess how actions lead to consequences.
Predicting things about the future is a magical superpower that only White people can possess and comprehend.So it's not really you guys’ fault that you can't understand. You were just born that way.
>>16926778>b-b-but I totally know that MY version of reality isn't just MY abstraction, it's ACTUALLY real tho>this totally doesn't expose that I'm a dumb hypocritical tar black gorilla nigger making a self-refuting argument that's now been called out twice by two different anons >>16926784Fucking moron.
So, can we all agree that everyone ITT who claims to be a 2 boxer is larping and if actually put in this situation would one box? Surely no one here is so stupid they would ACTUALLY 2 box, if it was real.
>>16926713
This test is basically a way to determine if you are able to sense the Force or not.
Purely rational minds fail this test because they don't have schizo intuition, which is also why they never get very far in pure math, instead becoming slaves to STEM, and only the genuinely mad prophets like Alexander Grothendieck make it past tests like these. If you don't believe in God and feel the Holy Spirit the same way he did, you're just a two-boxer NPC who will never understand shit like (co)homology.
>>16927769
Good analogy
>>16927612
>This fact is orthogonal to what decision causes an optimal outcome.
I don't care about whether my decisions cause an optimal outcome. I'm satisfied with my decisions leading to an optimal outcome. The causal structure is not important to me.
>Your braindamaged brain fails to grasp that if it were a real situation, he would not be obligated to treat it as an abstract decision problem and do what's optimal from that perspective.
If you approach the real situation and the abstract decision problem differently and apply different rules to each, then you have failed at abstraction. The point of the abstraction is to simplify the problem while maintaining the same outcome. If your abstraction has a different outcome than the real thing, you fucked up the abstraction.
>>16927757
>>16927758
>literal mental illness
>>16927793>I don't care about whether my decisions cause an optimal outcome.Yeah. That's what I'm saying. You're not actually solving the Newcomb problem, you're just being the dumb niggercattle that you are. If the problem was a real situation, your approach would pay off. But in real life, your lack of object permanence and ability to judge cause and effect only lead to negative outcomes.>I'm satisfied with my decisions leading to an optimal outcomeThey don't lead to an optimal outcome.
>>16927802>Yeah. That's what I'm saying. You're not actually solving the Newcomb problem, you're just being the dumb niggercattle that you are.You clearly have a different idea of what Newcomb's problem actually is than everyone else, anon. Would you mind explaining what you think Newcomb's problem actually is?
>>16927804
>You clearly have a different idea of what Newcomb's problem actually is than everyone else
No, I don't, and my point stands unchallenged. Your decision actually leads to a suboptimal outcome. Your extremely weak grasp of cause and effect leads to the oracle predicting you'll only take one box. That, in turn, leads him to put a million in the box. That's all decided prior to your making any decision. Your mental deficiency then leads to your decision to only take one box. That decision leads to your missing out on an extra thousand, which is suboptimal, but consistent with the oracle's prediction, which is also based on your mental deficiency. So in the end, your decision only leads to a suboptimal outcome. It really is that simple. The million dollars you get out of it was put there before you decided anything and isn't due to that decision.
>>16927813
>No, I don't
You said in >>16927802 that the problem I and most everyone else in this thread (and for that matter, everyone else in the world) are solving is not Newcomb's problem. That implies you are talking about a different problem than the rest of us.
If you are knowingly talking about a different problem than the rest of us while using the same name, and then refuse to elaborate on how your version differs from everyone else's, I guess that explains your apparent severe mental retardation.
>>16927811delete this
>>16927718
Either win a 99%-certain million dollars, or sacrifice a very likely 99.9% decrease for only a 0.1% increase by being a two-boxer. The two-boxer is so blinded by greed that he doesn't know how to think rationally. The rational choice is to follow a pattern, and the pattern is that one-boxers are richer.
>>16927817
>hurrrrrrrrrr and that implies you are talking about a different problem than the rest of us.
No, it doesn't, and my point stands unchallenged. Your decision actually leads to a suboptimal outcome. Your extremely weak grasp of cause and effect leads to the oracle predicting you'll only take one box. That, in turn, leads him to put a million in the box. That's all decided prior to your making any decision. Your mental deficiency then leads to your decision to only take one box. That decision leads to your missing out on an extra thousand, which is suboptimal, but consistent with the oracle's prediction, which is also based on your mental deficiency. So in the end, your decision only leads to a suboptimal outcome. It really is that simple. The million dollars you get out of it was put there before you decided anything and isn't due to that decision.
>>16927820
>The rational choice is to follow a pattern
>>16927823
The truth of the situation is that the two-boxers always take more money personally than the one-boxers. They took all the money that there ever was for them. Nevertheless, the one-boxers end up richer. It's like that principle that a person who humbles himself is raised up higher. Same thing. Greed ends up being the fatal fate of the two-boxer.
I remember a game in school math class where you guess an integer between one and nine. If you guess the number guessed the least by the other students, you win a lollipop. And I remember that I hated the whole stupid game and didn't like lollipops anyway, so I intentionally wanted to ensure that I would definitely lose. So I chose the number two because I thought it sounded like a pretty common number. And what's funny is that I ended up winning the lollipop. The person who tries hardest to win loses, and the person who tries hardest to lose wins.
>>16927831
Good post.
If I may add to this, it shows good thinking strategy and being able to tackle puzzles from different angles. Like when you need to prove something, sometimes you can prove it by showing how a contradiction arises if it's not true. Or assuming the "null hypothesis" and then trying to prove yourself wrong just to fact-check yourself for robustness. It's a good attitude to have in general.
My attitude is: don't try to win at life, just explore and invent shit. There is no "winning"; that's just neuroticism for neurotic people to worry about.
>>16927838
Well, maybe I didn't really mean to tackle it that way. That was a bit of a joke. I just thought it was funny how it works out that way. Almost like it's set up to punish the greedy and reward the non-greedy.
>>16927831
>>16927845
I don't think it's "greed" that drives people to two box, at least not from the arguments I've seen. Rather, it seems like a rigid inability to move away from axiomatic rules. It's a matter of "I read in a game theory textbook one time about strategic dominance, and I MUST do as it says". Even in a case where it's blindingly obvious that the strategic dominance strategy does NOT work and will always give you the worst outcome if you follow it blindly, they refuse to budge. A complete disregard of facts and reality to cling to their preconceived notions of how a correct solution ought to look. You can see in this thread how consistently even the people who claim 2 boxes is superior will admit that taking one box will give better results.
>>16927852
>it's blindingly obvious that the strategic dominance strategy does NOT work
It's the only strategy that works. If the problem is to find the optimal strategy, strategic dominance is that strategy. The problem here is that mouth-breathing imbeciles who lack a grasp on cause and effect also lack a grasp on context, so they imagine themselves winning a million dollars in an alternate universe with magical oracles and retrocausality and think that's a solution to the problem.
>>16927852
>>16927853
I should also note that said mouth breathers have zero mental flexibility or theory of mind and are projecting, because making the distinction between solving a decision theory problem and making a choice in reality is too much for them, so they think that if someone realizes one-boxers are irrational, they are somehow obligated to take both boxes.
>>16927853
Imagine the oracle was 100% accurate in its prediction. If you then took one box, you would literally force it to give you a million dollars, because now it can't be wrong. It's only the "what if it's wrong" question that even allows room for two-boxing to have any hope of making sense. But even then it makes more sense to one-box just because it's "pretty accurate", even if it's not 100% accurate. You say we imagine magical oracles and stuff; well, no, we didn't. The magical oracle was the premise of the whole experiment.
>>16927854
If your logic tells you that the choice that gives you a worse outcome is better than the choice that gives you a better outcome, your logic is wrong. It's proof by contradiction.
>>16927853
>It's the only strategy that works. If the problem is to find the optimal strategy, strategic dominance is that strategy.
There are game-theoretic scenarios where strategic dominance is provably the wrong strategy, such as when you are a deterministic algorithm playing the prisoner's dilemma against a copy of yourself that is running the same algorithm.
>>16927864
>If you then took one box, you would literally force it to give you a million dollars
I know you're mentally deficient and can't grasp this basic point, but the oracle's prediction happens before your action, so your action can't alter it or what's in the box. I understand your childlike thought process, but it's simply wrong unless retrocausality is real.
>>16927865
>If your logic tells you that the choice that gives you a worse outcome is better
Being born a subhuman is not a choice that you made.
>>16927866
>There are game-theoretic scenarios where strategic dominance is provably the wrong strategy
But this is not one of them, since its own logic is sound and no one has been able to provide an alternate strategy. Magical thinking is not a strategy.
>>16926713
I can't. I don't have autism. I was blessed with the creative schizo gene, so I chose one box and found happiness in life.
This sums it up
>>16927876
>Magical thinking is not a strategy.
Someday, if you're lucky, you might come to understand that different people can have different lines of reasoning that end up at the same conclusion, and that people might use a line of reasoning that you hadn't considered. And on that day, if it ever comes, you will finally be ready to productively argue on the internet.
>>16927884
>different people can have different lines of reasoning
The problem is that you have no coherent line of reasoning. Proof: in your next post, you will fail to explain why you wouldn't take the second box without either appealing to retrocausality or mumbling "it just werks".
>>16927871
If the oracle was 100% accurate in predicting and you then chose two boxes, how are you ever getting a million dollars now? And if you chose one box, how would you not get a million? We now literally defined the whole experiment so that the oracle cannot be wrong. By choosing one box, you're basically saying that it had better give you a million dollars or the premise isn't even true, but we defined it as true.
>>16927888
>If the oracle was 100% accurate in predicting and you then chose two boxes, how are you ever getting a million dollars now?
Then you're never getting a million dollars from choosing two boxes. You're also never getting a million dollars from choosing one box. You get a million dollars for being an imbecile, which isn't a decision you made.
>>16926778
I know that I would choose the single box and be satisfied with it. According to the rules, the computer knows that too, so it only makes sense to take the mystery box. Only greedy and dishonest people would take both boxes, or try to game the already generous situation for minor additional benefit.
In other words, people who would choose the mystery box are honest and intelligent, while two boxers are unscrupulous and malevolent.
>>16927880
You could easily get a book deal out of it worth at least $2,000
You know that moment in the Veritasium video when Casper is talking to the other two dudes and convinces the one guy to change and become a two boxer? It suddenly hit me.
The one boxer guy is right. The two boxers are acting like self-awareness involves retrocausality, but it doesn't. The supercomputer is predicting your behavior, so if you are the sort of person who would choose two boxes anyway, then you are saying you don't believe you have free will. Hence why people are calling two boxers "NPCs", because they literally are NPCs who just process everything through pure logic and can't see the forest for the trees. They literally have no soul. It's kinda freaky how many people pick two boxes, honestly.
Goddamn, no wonder the world is so fucked up.
>>16927890
You're telling the oracle what it's going to predict in the past. I'm choosing one box now, so in the past it's going to predict this.
Let's think about it another way. What if an oracle can predict which fruit you're going to buy from the store, and it's already been written in stone? Now you go to the store. Can you not "choose" what is written in that stone by choosing the fruit? Not retroactively, but in any meaningful sense of the word "choose", I think you can still choose it.
Just finished the video.
Looks like even the two boxers changed their minds and became one boxers in the end.
Two boxers ITT, please don't kill yourselves lel
>>16927894
>You're telling the oracle what it's going to predict in the past.
If there's any legitimate strategy behind the one box approach, why do its retarded supporters always try to argue based on retrocausality?
>>16927900
The legitimate strategy is I have $1 million and you do not.
>>16927904
>The legitimate strategy is I have $1 million
I like how you're so mad you've devolved into explicit incoherence.
>>16927905
he's right tho
>>16927913
You're seething and incoherent.
>>16927918
no I was just watching you two argue and I agree with him and think you're an idiot
>>16927920
>I agree with him
You agree with "him" about a literally incoherent statement because you're seething and samefagging. :^)
>>16927921
>everyone who disagrees with me is the same person
you're literally a delusional schizo
I am not the same person you've been talking to
get help buddy
>>16927922
You didn't disagree with me, you just affirmed an incoherent statement that goes something like "the optimal strategy is I have potato".
>>16927923
take your meds dude
>>16927924
I can tell you're losing your mind with rage even harder now. Notice how your mental illness forces you to (You) me again. :^)
what if you just flipped a coin instead of choosing
>>16927925
ok lol
>>16926797
even better, a slightly unfair coin, rigged so that it is slightly more likely you will 1 box. that way the computer predicts you are a 1 boxer because it is the most likely option, even if there is a 49.999% chance you will 2 box
>>16927928
>the reward for your efforts?
>a 50.001% chance that you walk away with nothing
just take the million and thank the robot
>>16927933why would there be a 50.001% chance you would walk away with nothing, the robot would predict that you are a 1-boxer because there is a greater chance that you will 1-box. the only way for the trick to not work is if the robot knew the outcome of the coin toss
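for the coinflip anons: here's a quick python sketch of the rigged-coin strategy. the payoff numbers and the assumption that the robot only predicts your *policy* (the coin's bias) rather than the individual toss are mine, not the problem's:

```python
import random

# monte carlo sketch of the biased-coin strategy. assumption (mine, not
# the problem's): the robot predicts your policy (the coin's bias), not
# the individual toss, so it sees a likely one-boxer and fills box B.
ONE_BOX_P = 0.50001   # coin is rigged slightly toward one-boxing
TRIALS = 1_000_000

random.seed(0)
total = 0
for _ in range(TRIALS):
    box_a, box_b = 1_000, 1_000_000   # B filled: one-boxing was more likely
    if random.random() < ONE_BOX_P:
        total += box_b                # coin says one-box: take B only
    else:
        total += box_a + box_b        # coin says two-box: take both
print(total / TRIALS)                 # averages near $1,000,500; never $0
```

under this assumption nobody "walks away with nothing". if the robot can predict the toss itself, the coin buys you nothing and you're back in the original problem.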
ok i think i get the two boxer stance a little bit. if you succeed with the one box then you squandered the potential for a greater return, and the states are set before you even make a choice. grabbing two is purely logical in that regard because it was feasible at that moment in hindsight, and letting go of that means you're choosing an irrational route to get less money.
so the whole thing is just framing whether or not your pride as an individual agent in this experiment is worth a million dollars. losing is seen as the price to pay for accepting that taking the one box with the million dollars meant you ignored the state of the entire experiment, and reality as a whole, to pursue the result.
what if i have multiple personality disorder and just manifest a different identity prior to making a choice but after walking into the room?
Here's an interesting twist that blows the 1 boxers out of the water.
After you lock in your selection, the host offers you the opportunity to change your choice. Should you stay or switch?
No 2 boxer would switch. 1 boxers all would though, even though it's literally still the same boxes.
>>16928041
the robot could predict that
>>16926713
simple
2 = more = poor
1 = won
>>16926787
this problem is so fucking hilarious because autists just can't cope with it
>yeah but if I onebox there's an additional thousand dollars in the other box! I absolutely MUST have that 1000 dollars
>what do you mean the robot predicted this
>REEEEE I AM SMARTER THAN EVERYONE ELSE WHY CAN'T I TRICK THE ORACLE ROBOT THAT IS SPECIFICALLY SAID TO BE SO SMART YOU CAN'T TRICK IT
bro just do the obvious thing. why do you need $1000 if you can get a 99.999999% guaranteed $1M?
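to put a number on "the obvious thing": a quick expected-value sketch, using the standard payoffs ($1k visible in box A, $1M in box B iff one-boxing was predicted) and treating the predictor's accuracy p as a free parameter. the exact break-even figure depends on these payoff values:

```python
# expected value of one-boxing vs two-boxing against a predictor that is
# right with probability p. payoffs assumed: $1,000 visible in box A,
# $1,000,000 in box B iff the robot predicted one-boxing.
def ev_one_box(p):
    # correct prediction -> B is full -> $1M; wrong -> B empty -> $0
    return p * 1_000_000

def ev_two_box(p):
    # correct prediction -> B empty -> $1k only
    # wrong prediction   -> B full  -> $1,001,000
    return p * 1_000 + (1 - p) * 1_001_000

# break-even: 1_000_000*p = 1_000*p + (1 - p)*1_001_000  ->  p = 0.5005
for p in (0.5, 0.5005, 0.6, 0.999):
    print(f"p={p}: one-box {ev_one_box(p):,.0f}  two-box {ev_two_box(p):,.0f}")
```

i.e. the robot only has to beat a coinflip by 0.05% for one-boxing to win on average, never mind 99.999999%.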
>>16928041
priced in.
>>16927926
it would predict the coin toss result.
>>16927900
the game theory makes perfect sense once you account for the fact that $1M and $1M+$1k is practically the same reward, and the robot can see your internal reasoning. any intention to twobox will be predicted, so just onebox and walk with the prize.
it's basically a way to turn a prisoner's dilemma into an iterated prisoner's dilemma by introducing a magic future-seeing robot. if the robot can make nearly perfect predictions of the future it's basically retrocausality; it makes no difference to the problem setup.
>>16928049
Your decision to change or not is obviously predicted too. Knowing this choice exists will factor into the prediction; only not having known about it, and then being assured by the computer that it only predicted the first choice, would make it perfectly rational to switch to 2 boxes for an extra 1000. Where's the "blowing out of the water"? Really not as clever as you thought it was.
>>16928105
>if the robot can make nearly perfect predictions of the future it's basically retrocausality
No, it isn't, and the difference is kinda important to the problem.
>>16928127
>Your decision to change or not is obviously predicted too
No. It isn't. That isn't the problem I presented you with. The robot predicted your first choice only.
You don't get to change my hypothetical just because your answer makes you look like a fucking idiot.
The entire "paradox" here just arises from the necessary vagueness of the oracle's predictive mechanism. If the oracle is 100% accurate then your choices must be deterministic; you have no free will, and therefore there are no choices and there is no question.
However, most people who engage with this question are assuming the existence of libertarian free will. The "paradox" then arises from the fact that the existence of an oracle with the ability to accurately predict your behaviour in this scenario is a contradiction of libertarian free will. Libertarian free will is an inherently mystical concept, and to posit a mechanism that can accurately model its outcomes is just to refute its existence.
So if we assume that free will exists in this premise, two boxing is clearly the correct choice, as the oracle cannot peer into my decision making engine or predict my thought processes. There is a hard limit on the kind of predictions it could make, especially about a decision like this one. It simply cannot be as accurate as described in the premise, unless by pure chance, in which case two boxing is inarguably correct. If you want to reply that every previous person who one boxed got a million dollars, and everyone who two boxed didn't, and it wasn't by chance, then I would just say that the premise contradicts itself.
>>16928187
You being unwilling or unable to follow the premise doesn't mean it contradicts itself. You just reject the fact that libertarian free will doesn't exist, and therefore that the machine is capable of being accurate to an extraordinarily high degree, barring some slight imperfection in its prediction algorithms.
You are too autistic to engage in hypotheticals.
>>16928188
If a question is self-contradictory, then saying it's unanswerable is not failing to "follow the premise". It's just recognising that the premise is nonsensical. But I'm only saying it's contradictory IF we assume libertarian free will as part of the premise. If we assume there is no free will, and the oracle is 100% accurate, then there is no longer any question, because you are not an agent anymore. All you can say then is that people who one box get a better outcome than people who two box, but whether you do one or the other is determined by factors you have no control over.
It would be like being adrift on a boat that's being carried around by a very complex system of ocean currents. Let's say there's one paradise island and one hell island that you could end up on. We can say that landing on the paradise island is a better outcome than landing on the hell island, but which one you land on is determined by currents you have no agency over.
>>16928182
Next time try reading the whole post. Maybe use text to speech if it's too hard.
so is this just a litmus test of who has free will and who doesn't? i'm confused. if you grab the million dollars, wouldn't you then be upset because that means at that moment, you could have twoboxed and had more? but because you were blinded by the fear of being predicted, you chose not to, even though at that moment it was possible?
in summary, this makes it sound like no matter what, you don't have free will. that's the core premise. so i don't get why there's any thought experiment on our part. you're screwed either way. you twobox and lose out on a million trying to anticipate the hindsight scenario, or you gain a million and lose out on the hindsight scenario.
>>16928240
It's fundamentally a philosophical and not a game theory problem, yes. If anything, the "strategy" is having been the kind of person that would pick 1 box - but this is very unlikely if in the actual choice you pick two. As unlikely as a predictor error, in fact.
I think it's important that we leave open the possibility of a slight predictor error, because otherwise you'll get mired in the determinism discussion. As such, it's just a matter of past behavior correlating with present behavior and the predictor making use of that.
Seeing how people consistently seem to split evenly into two camps over what's the "obvious" solution, it's not even unrealistic to assume there are some other personality traits that predict it.
What happens if the Oracle predicts that everyone will take two boxes?
>>16928305
Then it won't be very accurate and is therefore not the same scenario
>>16928307
Its accuracy is presupposed as part of the question. If it's 99.9% accurate and it predicts 100% of people will take 2 boxes, then only 0.1% will take one box (and end up with nothing). The fact that clearly way more people than that intended to take 1 box is not really relevant; the Oracle saw the future, and they didn't.
>>16928310
We know from the debate that somewhere between 50 and 80% of people will pick one box. Therefore, the prediction can't be that most people pick two boxes.
>>16928311
Most people *intend* to pick one box. They didn't though. We know this because the Oracle's predictions are near perfect and it said they didn't. To the Oracle it's no different from telling you what happened in the past. Maybe the first 1000 one boxers all left the room sullen faced and the next 999,000 all decided to at least leave with something.
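you can sanity check whether "99.9% accurate but predicts 100% two-boxers" is even arithmetically possible. assuming (my model, not the thread's) that each prediction is independently right with probability a, and q is the true one-box fraction:

```python
# consistency check: if the true one-box fraction is q and each prediction
# is independently right with probability a, what fraction of people does
# the oracle label as two-boxers? (q values taken from the thread's 50-80%
# range; a = 0.999 is the accuracy under discussion)
def predicted_two_box_fraction(q, a):
    # one-boxers mispredicted as two-boxers: q * (1 - a)
    # two-boxers correctly predicted:        (1 - q) * a
    return q * (1 - a) + (1 - q) * a

for q in (0.5, 0.8):
    print(q, predicted_two_box_fraction(q, 0.999))
# tops out around 0.5 for these q values, nowhere near a 100% two-box call
```

so under this model, a 99.9% accurate oracle predicting two boxes for everyone is flatly inconsistent with a 50-80% one-box population; one of the premises has to give.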
>>16928499
>Next time try reading the whole post
If you have points to be made, next time don't bury them in bullshit.
>be assured by the computer that it only predicted the first choice would make it perfectly rational to switch to 2 boxes for an extra 1000
The two boxes have the same content whether or not you're given the option to switch. Saying it is always optimal to switch to 2 boxes is the equivalent of saying it is always optimal to pick 2 boxes in the first fucking place.
Fuckwit.
>>16926713
I was disappointed by this new video because 2 boxers are just being silly.
An actually better question would be a choice of only 1 box:
Box A has $1000.
Box B will have $1M if the prediction is Box A, and nothing if it predicts you choose Box B.
>be >>16926778
>lose a million dollars
>>16926829
lmao
>>16926873
retarded
>>16927027
this
>>16928499
>next time don't bury them
lmao. Just practice more and soon you'll be able to read a whole paragraph without having to take a tiktok break, little zoomie.
>Saying it is always optimal to switch to 2 boxes is the equivalent of saying it is always optimal to pick 2 boxes in the first fucking place.
Not at all. The additional choice doesn't add anything interesting to the problem. It still hinges on which first choice was predicted and whether you act consistently with the good 1 mil prediction. The fact that you can unexpectedly get an extra 1000 afterwards is irrelevant.
Maybe you thought it clever to add something that vaguely sounds like the 3 doors problem to this one, but it wasn't.
>>16926713
The premise states the robot can predict 100%. This is only possible if the robot can simulate your entire brain.
>Walk up to challenge
>robot scans your brain, and instantly simulates the entire room and your brain and body inside it
>watches simulation to see what you do
>simulation ends
>robot does or doesn't put money in box
>The real you then goes into the room and does the exact same thing as the simulation
In this situation, the obvious choice is 1 box, since you don't know if you are in the simulation or in reality.
Also, if you are the simulation version, you die when the simulation ends.
So the real answer is to never pick up either box, because once you do, there's a 50/50 chance your existence instantly ends.
>>16926778
Logicfags are both so stupid and also so blind to their stupidity
>>16928103
>>16926787
Why are you unable to understand the scenario and, frankly, reality? This is not necessarily some hyper-abstract thought experiment; it could be done in reality. And the mechanism is that there is money on the table that cannot be warped out of existence if you decide, in the moment, to take both boxes. It is already in there by the time you approach. So why not take it? You can't alter what the robot predicted -- whatever that was. Your choice to one-box in the moment has changed nothing.
>>16928881
Sorry I can't hear you over my million dollars
>>16926837
no need for a video, it's pretty simple: he predicted you'd smell the presence of 1M, so he himself can choose what will happen, and that depends on whether he wants to give you 1M+1K or not. He will of course not, because the jew is rich enough. He determines the outcome in the jew case. But the outcome is predetermined by god for everyone else, and he has no choice but to obey his own rules and put 1M if you'd choose B. He's a slave of fate only for the goyim, but for the jew he's in control.
What if the first box contains $5000 and the second one $100,000? Will people reconsider their choice?