An impossible non-problem

I was writing an essay (aimed at non-rationalists) where I used the following definition:

"By an epistemically rational agent, I mean an agent that reasons in such a way that its beliefs will on average be maximally true (that is, it will have beliefs which correlate maximally with reality). The definition refers to the way it forms its beliefs. Even a epistemically rational agent can form false beliefs if it is systematically fed false information (that is, information which does not correlate with reality), but it will not form false beliefs because of flaws in its reasoning.

"I take it as a given that there exists an objective universe, though I do not say that any agent is necessarily capable of arriving at the objective truth (that is, beliefs that fully correlate with reality). An epistemically rational agent, however, will (on average) arrive closest to the truth."

I then set out to explain and justify this definition further: to explain what rationality is and why it matters. Here, however, is where I ran into an annoying problem. It seems to me impossible to solve in theory, while being no problem at all in practice.

As a special case, consider Boltzmann brains. The laws of physics apparently allow for the spontaneous formation of arbitrary brains, with arbitrary content. With a large enough universe, we should expect this kind of brain formation to happen all the time. A fraction of the brains will turn out sane (for the brief moment that they happen to exist, in the vacuum of space), and in fact, a tiny fraction of brains will happen to turn out as very close copies of the brain that is currently writing (or reading) this post. Then there will also be brains whose beliefs bear no correlation with reality (as would be expected, given a random process of brain formation). Some of them will be x-rationalists who just happen to have entirely faulty memories of the world, and who have on that basis formed techniques which are completely mistaken, but which would be valid x-rationalist techniques if only their memories happened to be accurate.

Now, the Boltzmann brain dilemma is easily solved. We need not worry about being brains spontaneously created in the vacuum of space, for if we were, we'd probably fall apart within the next few seconds or so. In that case, it wouldn't matter one bit how good our rationality techniques were. Furthermore, the theory allowing such brains may even turn out to be flawed.

But we should consider the least convenient world. Any number of other theories, from the Simulation Argument to Tegmark's Multiverse to the Dust Hypothesis, have similar implications. The core dilemma is this: we are trying to build techniques for verifying that our beliefs are maximally correlated with reality, and thereby to gain real-world benefits. But in order to show that our chosen techniques really do produce beliefs that correlate with reality, or that such a correlation really does have real-world benefits, we have to appeal to the way the universe is. And no rationality technique can tell us anything about the way the universe is without already assuming that the technique works in this universe.

We can try to use math and Bayes' theorem to establish some basic facts about how to collect and process information. But we only know that math works because we have seen that it is grounded in reality (in other words: math is true because we have chosen its axioms so that they correspond with reality). It is not inconceivable to picture a reality where 2 + 2 was sometimes four and sometimes five, if that reality were run by an AI that randomly chose to create an extra object, making the total five, each time two things and another two of the same kind were put together. We can try to say that "in order to draw a map you have to leave your room and observe the territory", and that is true based on what we know of our universe, but there could be a universe where correct information was spontaneously inserted into the minds of people who sat in their rooms thinking.
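For reference, here is the standard form of the theorem being appealed to (nothing specific to my essay); the point is that even this equation is something we trust only because updating on evidence in this way has, so far, tracked reality in our universe:

```latex
% Bayes' theorem: how to update belief in a hypothesis H after seeing evidence E.
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}
```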

Even worse, math itself says that we cannot be certain of being rational. There are theorems showing that no agent can be maximally intelligent in every possible universe: for any agent, you can construct a universe that will simply fail to follow its expectations. This, of course, doesn't mean that you couldn't construct an agent that *would* act intelligently in that universe.
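As a minimal sketch of the diagonalization behind such results (illustrative code with hypothetical names, not any particular theorem's formal statement): for any deterministic predictor, we can define a "universe" whose next observation is always the opposite of whatever the predictor expects, so that predictor is wrong at every single step.

```python
def adversarial_universe(predictor, steps=10):
    """Generate a bit sequence on which `predictor` is wrong at every step.

    `predictor` maps the history (a tuple of past bits) to a predicted next bit.
    """
    history = []
    for _ in range(steps):
        predicted = predictor(tuple(history))
        history.append(1 - predicted)  # the universe always does the opposite
    return history

# Example: an agent that expects the most recent observation to repeat.
def repeat_last(history):
    return history[-1] if history else 0

print(adversarial_universe(repeat_last))  # [1, 0, 1, 0, ...] -- wrong every time
```

A different agent, one hard-coded to expect exactly this perverse sequence, would of course do fine in that universe, which is the point of the last sentence above.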

The most annoying thing about this problem is that any "common sense" thinker can dismiss it outright as irrelevant. But the principles of x-rationality tell us that we cannot dismiss a problem simply because it is inconvenient or sounds absurd. The least convenient world is one where all the techniques of rationality have worked perfectly so far, but will completely and entirely fail two seconds from now, as the "if (time > xyzzy) then ChangeTheLawsOfPhysicsTo(OnesThatAnnoyTheHellOutOfRationalists)" condition in the simulation gets triggered. We could try to appeal to, say, Kolmogorov complexity and point out that the specification for such a universe is longer than that of a universe where the laws don't change... but how are we to know that the set of all possible universes isn't defined so as to make the more complex ones the most common?
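To make that appeal concrete: one standard formalization (my gloss, not something the argument above spells out) is a Solomonoff-style universal prior, which weights each computable universe by the length of the shortest program that generates it, so simpler universes count for exponentially more. The objection stands, though: why should reality be distributed according to this measure rather than one that favors the complex universes?

```latex
% Universal prior over computable universes u: shorter programs get more weight.
P(u) \propto 2^{-K(u)}, \qquad K(u) = \text{length of the shortest program that outputs } u
```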

*Obviously this problem is not new.* (Emphasis added to catch the attention of those who thought this was old hat and were starting to doze off.) Variations of it have been discussed for several hundred years, perhaps most famously by Descartes. But I want to bring it up again because it seems like a case where rationality becomes inconsistent: in order to save rationality, we must momentarily abandon rationality. I'm not aware of any reply to these scenarios that would essentially amount to anything better than "this could be true, but it'd be too inconvenient if it were, so we must pretend that it isn't". Just about everything ever posted on OB and LW (to exaggerate only slightly) says that this kind of reasoning is fallacious.

But if we reject the argument that it's just too inconvenient, then we cannot believe in rationality actually being helpful, because there's always a chance that we're just being deceived. This seems to be true for any argument you can come up with in order to save rationality. Normally I'd reject arguments of the "but there's still a chance, right" kind, but in this case I'm at a loss as to what probability to actually assign to the chance that we're living in an inconvenient universe. Several theories actually imply that there's a high probability of that. An often-used, common-sense response is to ignore those chances you can't meaningfully assign a probability to, but what if we're living in a universe where that systematically produces the wrong results (or will start producing them two seconds from now, if it hasn't before)? Aaaaaaargh! *one Boltzmann brain somewhere, existing in an unfortunate universe where brains are as volatile as Hollywood cars, explodes*

In my essay, I've currently resorted to cheating: I mention that in defining rational agents, "I will limit myself to agents operating only in our universe, and not bother with arguments about agents in hypothetical universes unlike ours". Obviously this is a cop-out, and one which I feel undermines the whole argument: while the essay seeks to establish that some explanations used for defending religion are objectively false ("objectively false" here meaning ways of thought that an epistemically rational agent would never adopt), that goes out of the window if it's fully plausible that an agent I'd define as epistemically rational is operating in a universe where it arrives at worse beliefs than irrational agents do. To me, this seems to make it impossible to truly defend rationality as any better than any kind of irrational belief. And yet in practice, I have no problem doing so, even though this is irrational by rationality's own standards.

In case this was not obvious: my issue is not with the traditional problem of Descartes' evil demon as such. I ordinarily would have no problem with the thought that we just have to take some things as a given. The thing that I'm concerned about is that "we just have to take some things as a given" seems to directly contradict everything rationality-related that we've been discussing so far. The assumption of the least convenient possible universe, for instance, seems to lead to disaster. This seems to suggest problems in our rationality techniques.

Also, it makes it harder to justify to people, in an intellectually honest way, why they ought to be more rational, and to show that doing so really does make a difference.