# Acausal Reasoning

## Acausal Reasoning

I'm sure at least some of you know this already (I've reached the point where I've spent so much time discussing this with everyone I know that I have no idea how much other people know about this; I'm sorry if this is very basic), but there's a problem in game theory called Newcomb's Paradox which I think is very applicable to time travel in Unichat. (I have *no* idea why I haven't brought this up before now.)

It goes like this (a tav, a resh):

Imagine there is a being called a Predictor. It's a person, or an AI, it doesn't particularly matter. The point is that in every one of the last 100 times someone did what you're about to do, it was able to correctly predict their choice.

So, you go into a room. In this room are two boxes: box A and box B. Your choice is whether to take just box A, or both box A *and* box B.

Box B always contains $100. Box A contains either $1,000 or nothing. So far, this seems simple.

*However,* the Predictor decided, before you went into the room, whether it thought you would take one box or both. If it thought you would take just box A, it put $1,000 in there. If it thought you would take both, it put nothing in there.

Hopefully that all made sense. If not, just look it up. There are plenty of explanations online. I'm afraid I'm more used to explaining this in person than with text, so there's a good chance I wasn't as clear as I thought I was.

There are two major schools of thought on this problem. The first says that your current actions can't affect what's already happened, and the payoff is always higher by $100 if you take both than if you take one (since this cannot affect whether there is any money in box A, and whether or not there is, you know there is in box B). That this strategy pays worse than the other (or at least has each of the last 100 times) just shows that the Predictor is rewarding irrationality.
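To make the first school's arithmetic concrete, here is a minimal sketch in Python (my choice of language; the only assumptions are the payoffs as stated above):

```python
# Payoffs in classic Newcomb: box B always holds $100; box A holds
# $1,000 only if the Predictor foresaw one-boxing.
def payoff(predicted_one_box: bool, takes_one_box: bool) -> int:
    box_a = 1000 if predicted_one_box else 0
    box_b = 100
    return box_a if takes_one_box else box_a + box_b

# The causal argument: for a FIXED prediction, two-boxing is $100 better.
for predicted in (True, False):
    assert payoff(predicted, False) == payoff(predicted, True) + 100

# But against an accurate Predictor, the prediction matches the choice,
# and the one-boxer walks away richer:
assert payoff(True, True) > payoff(False, False)   # $1,000 vs $100
```

Both schools agree on this table; they disagree about which comparison (across rows or down the diagonal) is the one that should guide your choice.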

The second says "Hang on. You're saying that of the two choices, the rational one is the one that leads to the *worse* outcome? That's not right." It argues that the rational choice is to be a person who *will* take only one box, because by being that person you cause the Predictor to put money in box A. Of course, the only way to have been a person who will take only one box is to *only take one box*, so that is what you must do.

(My apologies to those of both schools. I have just butchered an awful lot of very careful reasoning.)

The more applicable version of this paradox is one where box A is transparent. Now you can see whether or not the Predictor has put in the money.

I would argue this changes nothing, because the point is still to make the only stable version of you the one that gets more money.

What I mean is that there are two possible states, in each of which you have two choices.

Let's say you go into the room and see $1,000 in box A. You then take only box A. This is stable, so everything's fine.

However, let's say that if you were to go into the room and see $1,000 in box A, you would take both boxes. Here we have a problem: the Predictor's prediction would come out wrong, so clearly the Predictor *cannot* put money in box A. This is an outcome you want to avoid, so you should take only one box. (I would argue.)

Now let's say that you go into the room and see nothing in box A. If you now take both boxes, then *both* possibilities are stable (assuming you took my earlier advice; if not, this is the outcome you get, which is clearly worse than it might be). This means that you can't know which outcome you'll get, so your expected return isn't necessarily as high as it could be (the Predictor might have a rule saying that if both possibilities are stable it puts the money in, but there's no reason to trust to that).

If instead you would take only one box, then only the situation with more money is stable, so that is what you'll get. (I'm guessing that if you made both choices unstable, the Predictor wouldn't let you play or something.) So I'd argue the optimal solution is still to only take one box in each situation.
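The stability argument above can be sketched in code (a toy model of mine, not anything canonical). A policy says what you would do for each thing you might see in the transparent box, and a box state is stable exactly when the Predictor's rule (fill box A iff it predicts one-boxing) agrees with what the policy actually does in that state:

```python
# Transparent Newcomb: a policy maps what you see in box A
# ("full" or "empty") to an action ("one" = take A only, "two" = both).
from itertools import product

def stable_states(policy: dict) -> list:
    """Return the box states consistent with the Predictor's rule."""
    states = []
    if policy["full"] == "one":   # saw $1,000, took one box: prediction holds
        states.append("full")
    if policy["empty"] == "two":  # saw nothing, took both: prediction holds
        states.append("empty")
    return states

# Enumerate all four possible policies:
for full_act, empty_act in product(("one", "two"), repeat=2):
    policy = {"full": full_act, "empty": empty_act}
    print(policy, "-> stable:", stable_states(policy))
```

Enumerating the four policies reproduces the cases above: one-boxing on a full box and two-boxing on an empty one leaves *both* states stable; always one-boxing leaves only the full box stable (the outcome you want); always two-boxing leaves only the empty box stable; and two-boxing on full while one-boxing on empty leaves no stable state at all.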

So, you walk into the room, and see that there is no money in box A. We've just decided that the optimal outcome is if under these circumstances you would take only one box, so that's what you should do. But every time before, the Predictor's been correct. So what's going on?

Well, how does the Predictor know what you're going to choose? One simple way for it to know would be to simulate you under both circumstances to find out. Let's assume that this simulation is perfect, so it only needs to be run once.
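As a toy sketch of that (the simulating Predictor here is my own construction, and `resolute`/`greedy` are hypothetical policies I made up for illustration): the Predictor runs your policy on the "empty box A" observation, and fills the real box only if even that simulated you would take one box.

```python
# Toy simulating Predictor (my construction): it simulates your policy
# on the decisive observation and fills box A accordingly.
def predictor_fills_box(policy) -> bool:
    simulated_choice = policy("empty")  # run the simulation once
    return simulated_choice == "one"    # "one" = take only box A

# Hypothetical policies for illustration:
resolute = lambda seen: "one"                           # one-box no matter what
greedy = lambda seen: "two" if seen == "empty" else "one"

assert predictor_fills_box(resolute) is True   # real you walks in to $1,000
assert predictor_fills_box(greedy) is False    # real you walks in to nothing
```

Note what this buys the resolute policy: the version of it that sees an empty box only ever runs inside the Predictor's simulation.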

So back to you in the room with an empty box A. If you take only the empty box, *you must be in the simulation*. You will cease to exist as soon as you walk out that door. If you take both boxes, there's a half (or maybe a third) chance that you're real, and will keep existing. So in order to be sure of getting the optimal outcome, you have to be willing to do something which will cause the specific version of you that is acting to cease existing, so that another version of you ends up with a better outcome.

Well, this suddenly sounds very familiar.

I hope this helped someone think about time travel. I hope it was at least *comprehensible*. Like I said before, I'm used to instant feedback on whether or not I'm being clear. Please do tell me if any part of this (or all of it) wasn't.

**FallenLeaves** - Posts : 16

Join date : 2017-08-16

## Re: Acausal Reasoning

Interesting.

(Not that I get what's in the box if you play to *take over the world*. Sounds like a whole lot of trouble to me. And anyway, it's probably much cheaper and easier to arrange being the ruler of the planet for the rest of your life by talking to mental asylums, drug dealers or VR developers nearby. Same gains and zero losses for a tiny fraction of the cost. Whatcha wanna do tonight, Brain?)

**CONNECT 1200** - Posts : 29

Join date : 2018-01-28
