Why Not Just Use Unreal?


“Why not just use Unreal?”

I don’t think many people who cover the industry actually understand why not, even if they say they do. Ever since Gears of War came out in late 2006, magazines have been printing statements like, “the game runs on Unreal Engine 3, so it should look quite sharp when it’s released,” or “imagine what a studio like Bungie could do if they were using Unreal technology.” The assumptions are clear, and they are wrong. Is the general gaming press really this uneducated and gullible?

Use of the Unreal Engine doesn’t equate to good graphics, and the look of a game isn’t simply a matter of which engine drives it. First and foremost is the matter of art direction and style. Great artists will make a great-looking game no matter what technology brings it to the screen. Older technology doesn’t necessarily mean worse art: beautiful games on the Super Nintendo still look great today. Pushing a game on the merit of its graphics technology (something of which developers and PR departments are equally guilty) misses the point unless we can clearly show how those features serve the creative vision of the title: the fiction and the experience.

Each console holds a fixed amount of exploitable power and memory. In the press one always reads about how developers need to “figure out” how to actually tap this power. Presumably, once we developers finally puzzle our way through the console’s architecture, our games will make some kind of staggering leap in how they look or play. This is, of course, a misconception: given a competent engineering team, the biggest problem isn’t how to unlock the power of a console. It’s what to do with that power once unlocked, and how to allocate it so that it serves the creative vision of the game in the best way possible.

For example (and speaking very generally), an action game where the player moves down a series of corridors and eliminates two or three different kinds of enemies has the leeway to place extremely high-resolution textures on the walls and floors. But if, instead, you wanted twenty different kinds of enemies in the same situation, the memory and processing power to drive all those enemies (with their own models, textures, animation sets, AI behaviors, etc.) would have to come from somewhere. At that point, we come to a decision: is the fun of having twenty different kinds of enemies at once worth the cost of reducing the texture resolution on the walls?
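To make that trade-off concrete, here is a minimal back-of-envelope sketch in C++. Every number in it is invented for illustration (the budget, per-enemy cost, and texture-set cost are hypothetical, not figures from any real console or game); the point is only that enemy variety and texture resolution draw from the same fixed pool.

    #include <cstdio>

    // All numbers below are hypothetical, chosen only to illustrate the trade-off.
    const double kArtBudgetMB  = 256.0; // memory available for art assets
    const double kPerEnemyMB   = 9.0;   // model + textures + animations per enemy type
    const double kTextureSetMB = 4.0;   // one high-resolution wall/floor texture set

    static void report(int enemyTypes) {
        double enemyCost = enemyTypes * kPerEnemyMB;
        double leftover  = kArtBudgetMB - enemyCost;
        int textureSets  = static_cast<int>(leftover / kTextureSetMB);
        std::printf("%2d enemy types -> %5.0f MB on enemies, %5.0f MB left (~%d hi-res texture sets)\n",
                    enemyTypes, enemyCost, leftover, textureSets);
    }

    int main() {
        report(3);  // a corridor shooter: few enemy types, plenty left for environment art
        report(20); // enemy variety eats the memory the walls and floors wanted
        return 0;
    }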

These are the kinds of decisions developers make. If the player moves in a linear way, we can load more graphics and sounds and scripted events to throw at him; but if he has an open world to explore, the game might be more entertaining in the long run. If we decide the game is going to have a single, unified look and location, we can re-use textures throughout the game; if we decide to set the action in multiple, very diverse areas, we will have to create and load those new assets somehow.
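As another illustrative sketch (same caveats: the region names and asset sizes below are made up), a linear design only ever needs the current stretch of the game and the next one resident in memory, while an open design leaves every reachable area competing for memory at once, or forces assets to be streamed in on demand:

    #include <cstdio>
    #include <vector>

    // Hypothetical regions and asset sizes, for illustration only.
    struct Region { const char* name; double assetMB; };

    int main() {
        std::vector<Region> world = {
            {"docks", 40.0}, {"old town", 55.0}, {"sewers", 30.0}, {"rooftops", 45.0}
        };

        // Linear design: only the current segment and the one being preloaded
        // must be resident, so each can spend most of the budget on itself.
        double linearPeak = world[0].assetMB + world[1].assetMB;

        // Open design: the player can head anywhere, so every region's assets
        // compete for memory at once (or must be streamed in on demand).
        double openPeak = 0.0;
        for (const Region& r : world) openPeak += r.assetMB;

        std::printf("linear peak residency:     %.0f MB\n", linearPeak);
        std::printf("open-world peak residency: %.0f MB\n", openPeak);
        return 0;
    }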

Now, developers get the most challenge (and fun) out of trying to have it both ways, out of trying to transcend what everyone thought were the limits of the technology: huge open areas with incredibly high-resolution textures and models, or a hugely diverse world where everything looks and sounds great. But making perfect use of a console’s power is a challenge each developer has to overcome alone, because we each make our own games. Epic accomplished it with Gears of War, but that doesn’t mean Unreal technology is the best choice for every game.

“Why not just use Unreal?” Unless you were Mark Rein, I’d find that insulting.
