Coining the Faceless Wind


Long ago in a philosophy class my teacher touched upon the well-known thought experiment called “the brain in a vat,” in which an imaginary subject’s brain is placed into a tank of something approximating cerebrospinal fluid and hooked up to a supercomputer that feeds it artificial stimuli comparable to the kind the “real world” would provide. At its most basic level, the experiment calls into question what is “real” or “true,” since the mind (we assume here that the brain is the mind) in the vat, by definition, is unable to determine whether it is in the “real” world or merely a brain in a vat. These kinds of theories were popularized through cyberpunk fiction and movies like Ghost in the Shell and The Matrix, which in turn affected the way we think about computers and the Internet.

Though I am not really equipped to discuss the philosophical implications of a brain in a vat, I thought another interesting area of inquiry might be how some (evil?) demiurge might construct such a mechanism using what we currently know about real-time virtual reality, or, in other words, video games.

As I see it, there are two main approaches to achieving the brain in a vat scenario.

The first is summed up by The Matrix, wherein all of humanity consists of brains (plus their bodies) in vats, interacting in a kind of shared hallucination. The video game equivalent of this is probably a massively multiplayer online game, like World of Warcraft, or a more freeform online world such as Second Life, where each avatar is piloted by a human brain through the world of the game and interacts with other brains in their own vats through the various methods that the game provides.

It doesn’t take long for the technical issues of this approach to become manifest. Every brain would need its own powerful client machine to “render” and feed it stimuli; the client machine would also need to connect to a central server in order to reconcile each client’s version of “reality.” Coordinating the actions of many brains in vats (people jostling each other in a crowd, for example, or playing in a band, or multiple cooks in a kitchen) would tax a system of any known design enormously. Complex physical simulations would need to be either carried out by the central server and propagated instantaneously to all clients, or calculated completely deterministically on each client simultaneously. Such coordination is extremely difficult to achieve even with the relatively simple information that networked games must share today. (The only way I can think of that this might be possible is if the system does not actually run in real time, and each “frame” of reality we experience is actually the result of days or weeks or centuries of calculation on the part of the demiurge’s computer system. But one imagines him getting as impatient as we do with poor frame rates.)
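
To make that bottleneck concrete, here is a toy sketch of the kind of server-authoritative loop I have in mind; every name in it (VatServer, poll_input, and so on) is invented for illustration, and real networked games layer prediction, interpolation and rollback on top of something like this.

    # A minimal sketch of the server-authoritative model described above.
    # All classes and functions here are hypothetical, not from a real engine.

    class Client:
        """Stands in for one brain's client machine."""
        def __init__(self, cid):
            self.cid = cid
            self.last_snapshot = None

        def poll_input(self):
            # In reality: whatever the brain intends to do this frame.
            return {"move": (1.0, 0.0)}

        def send_snapshot(self, state):
            # Until this arrives, the client renders a slightly stale "reality".
            self.last_snapshot = dict(state)


    def simulate(state, inputs, dt):
        # Toy stand-in for the shared physics step. The real cost lives here:
        # every interaction between brains must be resolved in one place.
        for cid, cmd in inputs.items():
            x, y = state.get(cid, (0.0, 0.0))
            dx, dy = cmd["move"]
            state[cid] = (x + dx * dt, y + dy * dt)
        return state


    class VatServer:
        def __init__(self, clients):
            self.clients = clients      # one per brain in a vat
            self.world_state = {}       # the single authoritative "reality"

        def tick(self, dt):
            # 1. Gather every brain's intended actions for this frame.
            inputs = {c.cid: c.poll_input() for c in self.clients}
            # 2. Reconcile them against one shared simulation.
            self.world_state = simulate(self.world_state, inputs, dt)
            # 3. Broadcast the reconciled result back to every client.
            for c in self.clients:
                c.send_snapshot(self.world_state)


    if __name__ == "__main__":
        server = VatServer([Client(i) for i in range(3)])
        for _ in range(10):             # ten "frames" of shared reality
            server.tick(dt=1.0 / 60.0)

Even in this toy, everything funnels through one tick; add enough brains, and enough ways for them to touch each other, and the demiurge’s frame rate collapses.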

There are other problems with a Matrix-like scenario, too: how would such a system handle human births and deaths? Is our pet dog or cat a smaller brain in a smaller vat somewhere? The line between what is simulated and what is “real” blurs in many ways in this arrangement, making a consistent illusion of reality difficult to define, let alone manage.

The second major brain in a vat scenario is the solipsist one: you are the only brain in a vat, and the vast world you perceive is generated solely for your benefit. Serious philosophers usually recoil from solipsism, and for good reasons. One is that there’s no good place to take it (the world is false; okay, so now what?). Another is that it doesn’t feel right morally; we must accept the existence of other beings in order to behave well. Believing that others do not really exist leads one down the road to psychopathy.

The games that best encapsulate this form of the brain in a vat are in my opinion Bethesda Softworks’ recent role-playing games, The Elder Scrolls IV: Oblivion and Fallout 3. In these games the entire world is created solely for the entertainment of the singular player, who is the only “character” possessed of agency in their vast worlds.

As a game developer, I find the solipsist model much easier to manage. After all, one of the most important things a game does is decide what not to do: the game’s renderer decides what not to draw, the sound system decides what doesn’t need to play, the artificial intelligence decides which enemies do not need to think. The complex graphics, soundscapes and interactions we enjoy are due to these judgements; if our consoles actually tried to process in full what was going on around us, they would chug to a halt.
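
To give a rough sense of what those judgements look like in code, here is a toy per-entity update; the names and thresholds are invented for the sketch rather than drawn from Oblivion or any real engine, but the shape of the logic (spend effort only near the single observer) is the point.

    import math

    # A toy illustration of "deciding what not to do". Every name and
    # threshold here is invented for this sketch, not taken from a real engine.

    DRAW_RADIUS = 100.0   # beyond this, the renderer never sees the entity
    THINK_RADIUS = 50.0   # beyond this, AI drops to a cheap, coarse update
    HEAR_RADIUS = 30.0    # beyond this, the sound is never started at all

    class Entity:
        def __init__(self, pos):
            self.pos = pos
            self.visible = False
            self.audible = False

        def think_full(self, dt):
            pass   # real pathfinding, combat, dialogue scheduling

        def think_cheap(self, dt):
            pass   # coarse bookkeeping: "is at the tavern until 6 pm"

    def update_entity(entity, player_pos, dt):
        dist = math.dist(entity.pos, player_pos)
        entity.visible = dist < DRAW_RADIUS    # the renderer culls the rest
        entity.audible = dist < HEAR_RADIUS    # distant sounds never play
        if dist < THINK_RADIUS:
            entity.think_full(dt)              # full AI only near the player
        else:
            entity.think_cheap(dt)             # everyone else barely "thinks"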

We have all heard the koan about the tree falling in the forest and its sound, or lack thereof, in the absence of observers. In terms of a video game, the answer is obvious: if there is no observer present, there is no reason to calculate the observable property (armchair quantum mechanics enthusiasts may bring up the Heisenberg uncertainty principle here). In terms of technology, a singular brain in a vat could be much more easily convinced it is in a “real” place, especially if it has never known any other world.
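
In code, the falling tree becomes a question of lazy evaluation: record the cheap fact that the tree fell, and only synthesize the expensive, observable consequences if someone is actually there. Again, this is a hypothetical sketch rather than any engine’s real API.

    class Tree:
        """The falling tree, treated the way a game engine might treat it."""
        def __init__(self):
            self.fallen = False

        def fall(self, observer_nearby):
            self.fallen = True                   # the cheap fact is always recorded
            if observer_nearby:
                return self._synthesize_crash()  # the "sound" exists only if observed
            return None                          # unobserved: nothing is computed

        def _synthesize_crash(self):
            # stands in for expensive audio synthesis and physics debris
            return "CRASH"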

Between these two extremes there are some compromises available, such as the idea that perhaps only two or four or six brains exist in vats, though situations like that seem more the province of science-fiction plotting than a serious possibility.

None of this idle speculation is to suggest I believe the world is indeed simulated, so I’ll close with a paraphrase from Borges: the world, unfortunately, is real; I, unfortunately, am Matthew.