Author Archive

I’m Not Dead, I Just Post Elsewhere (PART TWO: THE POST ELSEWHERING!)

Hey, guys.

School, and life, have been amazingly hectic. My posting on Polygon has come to an end. I do stuff on NeoGAF every once in a while, but the best place to find me, without a doubt, is on my new Tumblr. My update frequency isn’t going to skyrocket–I’m stuck developing game projects at school–but it is going to slowly begin increasing as the semester moves on. For the most part, it will be freelance work I do–I’m working on my second piece right now.

In fact, I might be updating you guys on my development projects’ progress. That means pictures!

I may also be posting a more traditional wordpress blog with a friend after a while, but that’s not a guarantee. There have been rumblings of podcasting, but only rumblings… for now. If it happens, however, that’s going to start in the summer.

For now, school and work dominate my waking time.

I’m Not Dead, I Just Post Elsewhere

I post here now.

Immersion (Why Games Are Special)

(Originally posted here; has 13,123 views)

I read a forum thread somewhere recently—I want to say NeoGAF, but I can’t find it ’cause my registration’s pending so I can’t access search—that talked a bit about words and concepts we’d like to see removed from gaming. It was a pretty fascinating topic, and I was happy to see that the used-to-the-point-of-meaninglessness word “visceral” and the anti-game “cinematic” were frequently cited. It was perfect timing, then, for Kirk to post an article highlighting a video arguing against the use of the term “immersion” in video games the next day.

I disagreed rather vehemently. I still do, which is why I’ve spent several hours (as opposed to my normal twenty minutes) to prepare a response.

Before I get into this, I must warn you that I might be somewhat harsh on Mr. Abraham and those who agree with him. He’s gotten so much fluffy praise from people who consider themselves to be on the forefront of games criticism (a field which, from what I’ve read, is incredibly circlejerky and not nearly as knowledgeable on the subject as it thinks it is) that I think some harshness is in order.

Anyone who believes that “immersion” is a term that should not apply to gaming, or that ideas involving immersive design should be removed from video games, is frighteningly wrong. Not only that, but the arguments that “immersion” is a bad term, or that games should not be made with immersion in mind, are as dangerous to the medium as attempts to ban it.

Guess I should back myself up, huh?

I’ll be covering two main points, because it appears that these guys either fail to understand what immersion means or genuinely want the concept of immersion to die.

Let’s start with the English language.

Okay, so, first things first, a little English language primer (thanks to squibsforsquid‘s responses to my initial response to Abraham’s video):

The English language is incredibly nuanced. Words that seem to be identical to each other can actually have subtly different meanings that aren’t covered by others. “Immerse/Immersed/Immersion” is a great example of this. A simple dictionary lookup reveals it to be something along the lines of “engrossed” or “attention-grabbing,” but if that were the case, one would wonder why similar words and phrases would not suffice. Why do “immerse” and its various forms exist?

The answer lies in its other definition: to be submerged entirely in a body of water.

Imagine, if you will, that the English language is all the food in a grocery store. Words like “engrossed” and “immersed” are like varieties of lettuce. Sure, you might think that iceberg and romaine lettuce are both leafy green veggies, so they can be used interchangeably, but nothing could be further from the truth: indeed, romaine has a radically different texture and moisture content from iceberg (I prefer the darker, bitter taste of romaine, personally, but some people like the cool crunchiness of iceberg).

An English-language example of this would be the substitution of “good” for the word “like.” What we like is something inherently personal and subjective—it’s something that matches up to our own personal standards of enjoyment. What is good is something that compares favorably to set standards—usually ones external to us, like cultural standards. Saying something is “good” does not inherently mean that we like it; likewise, saying that we “like” something does not necessarily mean that it is a good thing.

Similar terms are not identical ones.

Immersion isn’t simply “paying a lot of attention to a thing.” There’s more nuance to it than that. Merriam-Webster’s example, “We were surprised by his complete immersion in the culture of the island,” hints at a level of integration into something. When someone says “he was immersed in the water,” they’re not talking about being engrossed with water; they’re talking about going under.

The people who first used the term “immersion” when applied to game design didn’t choose the word lightly. There’s a reason that the immersive sim genre of video games is called the immersive sim and not “engrossing games” or something else. The distinct texture of “immersion” within English makes it uniquely suited to discussing an element of video games that other mediums don’t have (you can pay attention to any medium; you can only be immersed in something interactive).

Any game can be engrossing—Tetris is engrossing, for instance—but few games can be truly immersive. Few games can make their players a part of the world within them.

This is an important point, because immersion, in this sense, is something that’s entirely unique to video games. Nothing—no movie, no play, no book—can be truly immersive the way a video game can be.

Basically, to sum things up so far, “immersion” is a term that isn’t always used correctly. When referring merely to the act of being deeply involved in a game, yes, immersion is an improper term, but we should not remove it from our gaming lexicon entirely, because it accurately describes one of the primary elements that separate video games from other entertainment mediums.

Where am I getting this from, you ask?

Right, so, let’s jump back to 1974. Gary Gygax and Dave Arneson (sorry, Dave, but while you take alphabetical precedence, Gary wins for having alliteration and an x in his name, which just makes him cooler) created this game called Dungeons & Dragons.

It was a role-playing game.

I’m not talking about stat-based adventure JRPG stuff, either. I’m talking about a true role-playing game (speaking of role-play, there’s another thing that will confuse you if you try to find a dictionary definition—understanding the use of the word, specifically regarding its origins and relationship to improvisational theatre, is key to understanding what is and isn’t a role-playing game). Basically, they created an instruction set for how to role-play.

The goal was to empower players to have adventures in worlds of their own creation, a radical departure from other games (sports, Milton Bradley-style board games, etc). At the same time, it wasn’t a performance thing, like theater. It was just “hey, let’s explore a world!”

The rules behind DnD served the purpose of making sure players didn’t get overpowered or do absurd things. You don’t actually need a turn-based system, stat points, party members, and so on and so forth to have an RPG, it just makes things a bit easier for a GM to handle.

Jumping forward a bit, we hit 1981 and two games, Ultima and Wizardry. It was effectively the birth of the video game RPG; other games had preceded them (I once read that a computer game called DnD showed up in 1975), but these two games were the watershed moment. Ultima and Wizardry used incredibly limited technology at the time to try to emulate the RPG experience.

A necessary digression: when Japanese developer Yuji Horii saw Wizardry for the first time, he got really excited by the prospect, and, apparently being unaware of the purpose of Wizardry’s mechanics, cloned a lot of the ideas and created Dragon Quest, the game from which all JRPGs since have descended. Most of the time, things don’t work out quite this well and new genres aren’t created, but in the JRPG’s case, things worked because Horii is a boss. The lesson here is that you shouldn’t go creating a game unless you understand why the mechanics behind it exist. This is also the reason why regenerating health is used in a lot of games it has no business being in.

While the JRPG gained popularity and became its own thing (and confused a bunch of people as to what the RPG actually is), Western devs were still quietly making their own RPGs, but with added computer power. Instead of making turn-based, top-down games with various battle systems, they were focusing on evolving the genre, making it distinct even from the pen and paper games which had birthed it, while at the same time, keeping the spirit of the RPG intact.

Now, I should point out that video game RPGs are still absurdly limited! Computers cannot improvise the way that GMs can. That said, there are some areas where they excel… and that’s where Looking Glass comes in.

If you understand one thing about the history of video games, it should be that no game studio on the planet will ever be more important than Looking Glass Studios was. These guys pioneered first-person games, sandbox games (what, you thought Shenmue or GTAIII was the first sandbox game?), flight simulation (when they died, the flight sim industry died), stealth games, and a bunch of other stuff. Their employees have gone off to help invent the Xbox (forever transforming the gaming landscape and eliminating Japan’s stranglehold on the console industry), work on Guitar Hero and Rock Band, revitalize The Elder Scrolls (heavy immersive elements in those games), create Deus Ex, work for Valve, and so on and so forth.

Oh, and one of the first games they ever made was Madden, so there’s that.

Perhaps their most important contribution to game design, however, was immersion.

The Looking Glass guys, in the early 90s, had a revelation: they could use simulation elements to add new life to their worlds! From this, the immersive sim was born.

Basically, you take that core idea behind role-play (I want to be someone in another world) and use computers to create a world players can interact with. That’s really all there is to it. You make the game in first-person, to reiterate the fact that the player is his or her character. You create levels that feel like real spaces, then populate them with complex AI that can do more than just fight. If you can, you try to throw in elements like physics, good graphics, a high degree of interactivity, and so on and so forth. You also cut down as many abstractions as possible (abstractions in a game context are basically just mechanics that provide a simpler way of approaching real-life ideas—such as turn-based gameplay when a computer can’t handle a real-time approach).
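To make the simulation idea a little more concrete, here’s a toy sketch (entirely my own illustration, not code from any actual Looking Glass engine) of property-driven interaction: instead of scripting “fire arrow burns wooden door” as a special case, every object carries physical properties, and one generic rule resolves every combination, which is exactly what lets unscripted, emergent moments happen.

```python
# Toy sketch of property-driven simulation -- my own illustration, not
# code from any shipped immersive sim. Objects carry physical properties;
# one generic rule handles every source/target combination, so
# interactions the designer never explicitly scripted still work.

from dataclasses import dataclass

@dataclass
class GameObject:
    name: str
    flammable: bool = False
    burning: bool = False

def apply_fire(target: GameObject) -> None:
    """Generic rule: fire ignites anything flammable."""
    if target.flammable and not target.burning:
        target.burning = True
        print(f"{target.name} catches fire!")

def simulate_tick(world: list) -> None:
    """Each tick, anything burning tries to ignite everything near it."""
    for obj in world:
        if obj.burning:
            for neighbor in world:
                if neighbor is not obj:
                    apply_fire(neighbor)

world = [
    GameObject("wooden door", flammable=True),
    GameObject("dry grass", flammable=True),
    GameObject("stone wall"),  # not flammable; the rule simply ignores it
]

apply_fire(world[0])  # the player shoots a fire arrow at the door...
simulate_tick(world)  # ...and the grass catches emergently
```

Nothing in that sketch says “grass can catch fire from a door”; it just falls out of the properties, the same way the Far Cry 2 story below falls out of its systems.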

What we’ve found is that immersive games, provided they are easy enough to get into (Deus Ex, for instance, inundates players with information in its training level and summarily throws players into the deep end with Liberty Island; this is a bad way to do things), actually have a huge draw and significant lasting appeal. Some recent examples of immersive games include STALKER (more than 4 million units sold—not bad for a Ukrainian studio with next to no marketing), Fallout 3, and Skyrim. Other games, like Assassin’s Creed and Dark Souls, use immersive elements to enhance their experience.

People love these games. They love being able to enter a new world and interact with it. They love emergent gameplay—why else do you think GTA is such a popular series? Skyrim was successful because it facilitated exploration. Crysis was unique because it allowed deeper physical interaction with the world. STALKER’s advanced AI and player needs (eating, for instance) helped its players sink completely into the role of the amnesiac Marked One.

Far Cry 2, flawed as it was, got the love it got because it let players treat the world as an actual world. Yesterday, I read about someone who stacked up cars in Far Cry 2, blew them up, set fire to a field, caused the base he was attacking to catch on fire (which burned some of his enemies alive and confused others), and then walked in and took what he needed without anyone realizing he was there.

(I realize that I could probably write an entire essay on the power of emergent gameplay and why Dwarf Fortress and STALKER are the greatest games ever made, but I’ve got enough stuff to talk about as it is).

Immersion is the future of video games.

I realize that “the future of video games” is a phrase that gets used a lot, primarily to describe whatever trend is currently popular (Facebook games, iOS games, casual games, motion control, you name it), but I’m using it in a slightly different context: I’m talking about progress.

Most people don’t really think about future advances in tech. What can Kinect really do for us? What does Goal-Oriented Action Planning AI do to enhance video games? What does procedural generation mean to video games? How does the RPG fit in with all this? What can we do with interactivity, that sacred ideal that elevates video games beyond all other mediums by eliminating passivity?
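Since I just name-dropped Goal-Oriented Action Planning, it’s worth a quick illustration. Here’s a compressed toy sketch of the core idea (my own, loosely inspired by the technique F.E.A.R. popularized; nothing here is production code): the designer declares actions and a goal, and the AI searches for any sequence of actions whose effects chain from the current world state to that goal.

```python
# Toy Goal-Oriented Action Planning (GOAP) sketch -- my own minimal
# illustration of the technique, not code from any shipped game.
# States are sets of facts; each action has preconditions and effects.
from collections import deque

ACTIONS = [
    ("draw_weapon",  frozenset(),                       frozenset({"armed"})),
    ("take_cover",   frozenset(),                       frozenset({"in_cover"})),
    ("flank_player", frozenset({"in_cover"}),           frozenset({"has_flank"})),
    ("attack",       frozenset({"armed", "has_flank"}), frozenset({"player_threatened"})),
]

def plan(start: frozenset, goal: frozenset):
    """Breadth-first search from the current state to any state
    satisfying the goal; returns the list of action names."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, steps = queue.popleft()
        if goal <= state:           # every goal fact achieved
            return steps
        for name, preconditions, effects in ACTIONS:
            if preconditions <= state:
                nxt = state | effects
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [name]))
    return None                     # no plan exists

# The designer states a goal, not a script; the plan falls out of search.
print(plan(frozenset(), frozenset({"player_threatened"})))
# -> ['draw_weapon', 'take_cover', 'flank_player', 'attack']
```

The appeal for immersion is that behavior emerges from search rather than a script: add one new action and the AI starts producing plans the designer never authored.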

The people arguing that games shouldn’t be immersive are as ignorant as the people who argue that Role-Playing Games are nothing more than stat-based adventures. These people want to hold the industry back—to keep it at some larval stage where they’re most comfortable. Maybe it’s out of fear (after all, I don’t doubt that bards objected strongly to novels, nor do I doubt that novelists objected strongly to the medium of film), or maybe they just… really enjoy stat-based adventure games or strategy titles or what have you (I know I do!); I don’t really know their motives.

What I do know is that they’re trying to fight human nature.

Don’t believe me?

Let’s go back to the beginning.

The Epic of Gilgamesh is one of humanity’s oldest surviving works of fiction. It’s a massive adventure story. Fast-forward to ancient Greece and Homer; note the vast influence of his works (basically all of Western fiction owes its existence to Homer and Plato/Aristotle/Socrates). Jump ahead even further, and take a gander at the increasing believability of fiction (Shakespeare, particularly), as well as the increasing accessibility of entertainment. Check out how the integration of music and storytelling in the 1500s led to the birth of the opera. Pay attention to the rise of global exploration during the Renaissance, as well as the scientific leaps and bounds made by a formerly-repressed society. Study the emergence of 19th century literary criticism, as well as the explosive popularity of novels. Read up on the birth of film, radio, television, comics, and their subsequent popularity.

What do these all have in common?

Well, I was hoping to have a word for you, but I don’t. Curiosity, maybe? Discovery? Newness? Escapism? None of these really quite sum up what I’m trying to get at, so I’ll put it like this: people only enjoy the mundane so much. At some point, every single one of us is going to seek out new experiences. We crave new sensations. We savor them. Experiencing the new is one of the primary motivating factors of human existence.

Humanity, as a whole, has a fascination with the new. When we look back at fiction, we can observe humanity’s fascination with the idea of exploring other worlds. CS Lewis’s Narnia adventures cover this. Lev Grossman’s The Magicians explores it too (fun fact: his brother apparently worked at Looking Glass). Fantasy and science fiction stories sell like crazy. There’s a reason that films like The Girl With The Dragon Tattoo didn’t do nearly as well as Avatar. One is mundane. The other is not.

The fact of the matter is that we, the human race, are a bunch of insatiably curious creatures who constantly desire new experiences. Discovery is humanity’s raison d’être (oh yeah, I can be just as pretentious as the self-styled game critics; ce que je dis, je le dis dans une autre langue, donc, ce que je dis est profond? [what I say, I say in another language, so what I say is profound?]).

So what’s the future going to be like?

We are creatures driven by discovery. Why do you think Skyrim did so well? Why do you think New Vegas failed? The former facilitated discovery and exploration; the latter was too focused on being a good RPG to care about the world it had created.

The future of games is going to capitalize on this. Arguing that we should eliminate the concept of immersion in games, that the immersive sim should be dead, or anything else along similar lines, is like arguing that we shouldn’t have voice acting and ought to stick with scrolling text. It is an argument that says “games should not be more than they already are!”

Modder Robert Yang may consider immersion to be a fallacy, but he’s mistaken: the future of video games really is the holodeck. All those things I mentioned earlier—Kinect, procedural technology, better AI, and so on and so forth—are the tools that are slowly pushing us towards that end.

…I haven’t even begun to talk about the real-world benefits of creating immersive games. Someone smarter than me could surely go on at length about the possibilities of immersive simulations that allow people to live through various simulated events for… a wide variety of reasons. Someone training to be an EMT could be forced to go through a triage situation, with accurate simulations of panicking people, secondary threats, sensory barrages, and so on and so forth. Researchers could study crowd dynamics (using more advanced AI than anything presently available) in the aftermath of a disaster in order to better understand how to design environments to protect against them. The military already uses immersive sims to save training costs. There are a ton of non-entertainment applications for immersion. Saying we should kill the concept is horrifying, because it’s so limiting.

…and so we come to the conclusion.

There will still be room for the [insert any unimmersive game here] of the world. I’m not saying that they should die; there’s nothing inherently wrong with them. Instead, I’m looking at this in a long-term perspective—not the next week, or the next month, or the next year, but the next century of game development. Games are… going to become something else. Traditional video games will still exist, but this new thing, this transportation to another world… that’s the future. Saying we should kill the concept of immersion and only give credence to attention is a terrible idea.

Considering the way they seem to feel about immersion, it would appear that Ben Abraham, Robert Yang, and Richard Lemarchand don’t just misunderstand the term, but want the legitimate usage to die as well. While I don’t know a lot about Abraham’s personal philosophies, Yang’s made his pretty clear in his Dark Past series of blog posts—he thinks the immersive sim should die. Lemarchand’s philosophies are made clear by the games he creates.

Do I sound upset?

These guys seem smart—really, they do—but by failing to understand the nuance of the word “immersion,” they seem primed to damage the medium.

Look, I may be just a poor college student (I can’t even afford a good school) who is trying to learn game design while his school falls down around his head (seriously, I’m not kidding about the good school thing). Unlike Lemarchand and Yang, I’ve never made a video game in my life. I’ve worked on some other forms of RPG before, and I’m trying to work on an indie game right now, but I obviously don’t have the body of work behind me that these guys do. I may never have the body of work behind me, at the rate things are going.

…but… I feel like they’ve got it all wrong. If they’re the guys who tell us where games should go—if we follow them—I know we’ll be worse off for it.

They scare me.

(Also, in case anyone is wondering, yes, this is one of the reasons I prefer Western to Japanese games. Japan tends to prefer to design more abstract, non-immersive games, which is a totally valid method of expression, but not one I personally enjoy)

On Art and Smart Games

(Posted here)

Hey guys, thought I’d write another longish #speakup post; this one’s about the list of artistic games on Brainygamer. I wrote it as an open letter to Michael Abbott, who runs the place. I cut out the introduction for Kotaku, since you guys already know who I am, hopefully.

I think something’s missing. See, the thing Clark was getting at–and the thing that precious few people seem to understand–is that he’s not talking about just “art.” He uses qualifiers, like “true art,” or uses words and phrases like “puerile” and “intellectually lazy.” Those qualifiers are very important, because he’s referring to a divide in art that’s rarely (maybe never; I’ve never actually seen anyone bring this up) mentioned: high and low art.

Citizen Kane was, arguably, the first high art film. Most of what had come before were merely adaptations of other works (Wizard of Oz, Gone with the Wind, Ben-Hur, etc), and while there had been a few stepping stones (like Metropolis, M, and The Cabinet of Dr. Caligari), Citizen Kane was the game changer. Most of the people who say that games “don’t need a Citizen Kane” don’t really understand what Orson Welles did to the cinema landscape. After Kane, everything changed.

Before I get into the high/low art thing, however, I need to back up one quick second and define art:

Art is a thing that is created or performed with the primary intent to stir up emotions within the audience.

In this way, many things are art. For every The Four Seasons, there is a Baby. For every Jane, there is a Twilight. Actually, if we go by Sturgeon’s Law, for every one good thing, there are nine bad things, but whatever. The point here is that we see a distinct schism in art. Some things are timeless and will spawn endless discussion centuries after their creators have passed on, while others are transient, their laughable, short-sighted attempts at profitability greatly robbing them of artistic merit. The former are high art; the latter, of course, are the opposite.

The question should never have been “can games be art?” It should have been “can video games be high art?”

We then run into another problem: broad generalizations. The simple fact of the matter is that we can’t ask whether any one medium is art of any kind, because there are a lot of little differences. Baraka and Casablanca are art, but an instructional video telling you how to interact with customers or an advertisement for cold cereal is not. The Scream and Starry Night are art, but the handicapped sign on a bathroom stall and a full-page newspaper advertisement with pictures of cars at low, low prices are not. The same is true of games: some are, some aren’t.

Perhaps the best description I’ve heard of games is actually Wikipedia’s: games are “structured play.” It gets right to the point and encompasses every game type, from board games, like chess, to sports, like basketball, to video games… except… well… video games are a bit broader than all that. There’s a reason no one says Pente or basketball (the performance art that is the Harlem Globetrotters aside) are art–they call them games and sports. Unlike Risk or ōllamaliztli (sorry), video games use a lot of artistic elements, and I’m not talking about the craftsmanship of board pieces or illustrations on cards or anything. Some tell stories. Some exist more to craft mood than anything else. These video games are more than just games–they’re hybrids; instead of being merely tools that structure gameplay, they combine elements of other art forms, like storytelling, with rules-based systems.

As technology has progressed, however, things have gotten really weird. Some might argue that, at some point, they stop being games and become something else. We’ve added all these simulation elements–instead of enemies adhering to specific rule sets, with rigid, turn-based battles, we’ve got things that try their hardest to simulate actual encounters. Games like STALKER aren’t really games anymore–they’re entire worlds to explore. Some games use their mechanics like a sculptor might use his tools, shaping an artistic experience out of it. Somewhere along the line, some video games moved beyond just structured play and got into something more.

In other words, some video games are art, some aren’t.

This leads me to modify the question: “Can some video games be high art?”

And, since we’re asking that question, we’re going to want proof one way or the other, so the next question that follows is: “are any games worthy of being called high art?”

The answer to the first question, I think, is yes. There is very little academic discussion centering around games-as-art, and what few attempts are made tend to be weak attempts at justifying one’s love for a particular title. Most of the “intellectuals” (oh yes, scare quotes seem very well deserved) who debate games are little more than educated fanboys, and they rarely seem to be educated about the right sort of things. I’ve encountered more enlightening discussion of games and game stories through random commenters I’ve met (we get into these cool discussions about Aristotelian philosophy and the strengths and weaknesses of the medium and how the medium doesn’t lend itself well to traditional storytelling) than I have reading about games by people who fancy themselves serious critics.

Now, you may have noticed by now that I haven’t mentioned intellectual stimulation at all. There’s a good reason for that: when Clark talks about intellectual stimulation, he’s not talking about puzzle or strategy games. He’s talking about the intellectual stimulation that comes from artistic merit–the part where we start critically discussing things.

This brings me to my primary criticism of the list: most of the submitters don’t seem to know what they’re talking about. A ton of games are on there only because “they make me think a lot,” which, again, isn’t what Clark is looking for. It provides an idiotic counterpoint to his claim. Many of the submissions are riddled with spelling errors and barely give reasoning beyond “I really like it and it moved me.” Being moved is all well and good, and listening to dubstep makes me feel something on an emotional level, but that certainly doesn’t make it high art, which is what Clark very clearly wants.

You have basically two kinds of art games: narrative and mood. Narrative games would, of course, be games with the primary intent to tell a story. In a perfect world, the game mechanics are subservient to the story (the “gameplay > story” fallacy is a really big subject I could get into, but I don’t have the time; maybe later?), functioning as the language or technique that conveys the story, but more often than not, people focus too much on the gameplay and not enough on the story, which is the equivalent of a novel with a lot of very nice words and a story not worth telling. Mood games are… things like STALKER or Shadow of the Colossus. They are the ambient music and the abstract paintings of gaming.

Many of the games on the list have no right being there. Uncharted 2 possesses the narrative depth of Transformers 2: Revenge of the Fallen. Mass Effect 2 is a poorly-written (there’s no second act where the team gels, jarring the suspension of disbelief) white supremacist game. Half-Life 2 is a mess, its narrative structure oddly centered on Eli Vance. Red Dead Redemption is a predictable, unevenly-written (the FBI man’s random speech, for instance; characters waxing eloquent at random), ludonarratively dissonant game. I’ve given up watching television shows that are better-written than Heavy Rain, like Lie to Me. Strategy and puzzle games don’t really belong there at all because of the whole sports thing (in fact, Starcraft 2 makes up the majority of the esports scene).

I’ve noticed that some people mention how a work is referential as if that makes it an intelligent work, but merely being referential isn’t what makes a work good. The other day, someone, when discussing Metal Gear Solid with me, argued that it was excellent because it featured science, science fiction, and real-world events. I’m sorry, but there’s more to art than that. Metal Gear Solid is a narrative joke, and no one with any knowledge on the subject of good storytelling could honestly call it a work of art on that front. Yes, it does a lot of interesting things and plays with the medium, but there are many films with excellent traits that fall flat on their face where script is concerned, which prevents them from being considered high art. Transformers 3, for instance, has some fantastic direction, camera work, lighting, and special effects, and is one of the best uses of both IMAX and 3D filmmaking ever, but that doesn’t save it from being low art.

A lot of people, no doubt, will feel defensive about this: that’s good. The current arguments for why some of these games should be considered the pinnacle of the medium are weak, and an intelligent defense of a great many of these games needs to be made. I can’t really do a lot to back up my claims in the interest of time and space, but I’d gladly do so at a later date. Also, I assume there will be a number of people who read this and get really upset; people tend to be more invested in games than other mediums, presumably because of some sort of involvement bias (I realize this isn’t a real term, but, as far as I know, there is no term to describe the cognitive bias where you spend time with a thing you enjoy and become reluctant to admit its faults because you feel as though you’re admitting that you wasted your time), which, I think, is part of the reason gamers seem to be significantly more prone to anger and fanboyism than fans of other mediums.

Games, even the most-loved, highest-rated games out there, deserve a lot more criticism than they get, especially when it comes to narrative, and that is what Clark is upset about. As I look at the video gaming landscape, I see a small number of games (ten, at present count) that might be considered the Gone with the Winds and Wizard of Oz’s of the world. Clark believes he’s found it in Braid and Journey, but only that strange, weirdly-insular, self-involved field (the same few critics seem to pop up over and over again, for one thing) that calls itself games criticism really seems to care. I see no Citizen Kane of gaming, or even a Watchmen, but I hope we get one soon.

So… um, yeah.

Basically, I’m disappointed in the list as it stands. I feel like it would benefit from some filtering.

Also, I think you only need one entry per title, but maybe that’s just me.

~DocSeuss

PS – As a synesthete, I’ve always found Rez to be a bit lacking. Still qualifies as a mood game, I guess.

Why Half-Life 2 is Broken, and Why Valve Can’t Make Half-Life 2: Episode 3

(it’s not performance anxiety)

Reposted from here.

Everyone seems to want Half-Life 2: Episode 3. Some people have even jumped the gun and just want a straight-up Half-Life 3 (personally, I’m with them; a new timeskip and a new enemy would be nice!). Valve’s said that performance anxiety is the problem, but I doubt that’s true–these are the guys who can release a four-hour game with only two multiplayer modes and have it score an 89 on Metacritic and then whip around and release a cheap sequel the next year and still be the most-loved game company on the planet. They could release anything and it would score well on Metacritic.

So why can’t Valve release Half-Life 2: Episode 3? It’s not possible. Valve can’t make another Half-Life game, not as they are.

The joker in me wants to say that it’s all because of Eli Vance, but in truth, he’s a symptom, not the problem.

I should probably explain that: See, Half-Life 2 doesn’t have a very good story. Generally, a good story will introduce the audience and protagonist to the world, deal with a conflict, and then resolve that conflict. Each act will feature new goals for the protagonist to pursue, all building up to the conclusion.

Half-Life 2 doesn’t actually do that. The game starts brilliantly, introducing the player to a fascinating world, but it quickly falls flat on its face with a ten-minute unskippable cutscene. Interestingly, your next objective is to go see Eli Vance, who, it is said, will explain everything, like why you’re here in the first place. My first thought was “yeah, yeah, yeah, just a minute, I’m teleporting this miniature cactus!” My second thought was “wait, if he’s going to explain why I’m here, why did you just spend ten minutes telling me all about how Breen won’t let people make babies and stuff?”

So you blast through the first act and finally make your way to Black Mesa East, where Eli Vance talks to you for five minutes or so, telling you what everyone previously told you, and then sends you outside so he can get kidnapped. Then, the game’s best levels, Ravenholm and Highway 17, happen… but they happen so you can get to Eli, which was already your goal during the first act. Of course, Eli gets kidnapped for the second time, and the game’s third act features you trying to meet him yet again.

Keep in mind, you still don’t know why you, specifically, were brought to the world. “Fight the Combine” is never stated to be the reason, and it becomes fairly clear that everyone can hold their own without you, so apparently, there’s some big secret reason that you’re here. Throughout the entire game, you never actually learn why you were brought to City 17. In fact, Half-Life 2: Episode 1 is spent helping Eli escape from City 17 (and then pursuing him), and Half-Life 2: Episode 2 is spent… getting to him. Then, just as he plans to tell you what you’ve been wanting to know over the course of two games… he dies.

Eli is the only reason you do anything in the Half-Life 2 games, and he’s dead. The Princess Peach of the Half-Life 2 series is gone. Now you’ve got nothing.

Of course, Valve could easily write their way out of things (G-Man knows why you’re there! I bet that brain-eating slug does too!), so the problem isn’t Eli himself; it’s Valve.

Valve loves to hype their organizational structure, but to be honest, I think it’s kind of broken. To tell a story, you need an author. Working by committee doesn’t really work. Portal 2 succeeded because its levels were pretty divorced from the narrative; it is a puzzle game that has a story running simultaneously. The two rarely work together. The plot is simplistic and the story only has three living characters, so it’s pretty easy to do.

Half-Life 2 is a different beastie because it’s got a plot and a bunch of characters to contend with. That plot takes place in an actual world, and the gameplay’s more than just a simple puzzle game. That means that there’s a lot more to things.

One of the most important things to understand about stories is how selfish they are. They can’t be second to anything. They simply don’t work that way. As I mentioned earlier, stories by committee rarely work (check out most comic book events, for instance). So… you kinda have to have an author or two or a director or someone–you’ve got to maintain that vision, or you have a hundred different people all working every which way being inefficient and telling a story that isn’t very good.

In other words, you get Half-Life 2.

It’s much easier to work on a multiplayer project. There’s no need to focus on creating a cohesive story and all the elements required to make that work. You just program the game, create the assets, and run with it. It’s significantly easier than trying to tell a story with a large group of people who have no real leader.

Perhaps that’s why everything Valve’s released since 2007 has been a multiplayer game, and why their upcoming titles–DOTA 2 among them–are multiplayer, too. Maybe it’s why Portal 2 is probably going to be Valve’s last single-player game. Speaking of Portal 2…

Portal 2 worked because at its core, it’s just puzzles. A story-based game requires significantly more than that. With a pure puzzle game, the storytelling is basically disconnected from the gameplay. It’s just “puzzles increase in complexity and we add new mechanics.”

Half-Life 2 is far more complex than Portal, in terms of gameplay and puzzles. It has a much richer toolbox, with human characters who must perform actions, enemies who have behaviors of their own, far more varied environments, vehicles, and so on and so forth. The range of things that can occur in Half-Life’s toolbox, and the ways they can play out, is far richer than Portal’s. In terms of toolboxes, if Portal is See Spot Run, Half-Life 2 is Animorphs; one hasn’t got a lot of tools to work with, while the other’s got many.

Right now, Valve has too many chefs in their kitchen. Portal 2 only worked because they had very few ingredients.

I think their anarchic style of development is pretty interesting, but its primary weakness really does seem to be storytelling. They can’t make Half-Life 2: Episode 3 because they are too big and too unfocused to do so. Eli’s a symptom of the problem–Valve didn’t really know what they were doing, so they kept shoehorning him in as the series’ primary objective without really realizing it, and they never really figured out why Gordon was there in the first place. The game’s story simply does not matter.

So how can Valve get out of this mess?

They can do a few things:

First, just get it over with. Get Half-Life 2: Episode 3 out the door and be done with it.

Second, limit the number of people on the project. I did just cover the whole “too many chefs in the kitchen” bit a moment ago, so that should need no explaining.

Third, and most importantly, work on Half-Life 3, but cut out all the stupid story stuff. As I’ve demonstrated, Half-Life 2’s story kinda sucks, so backing away from it and moving towards a more experience-based game would be a good thing.

Ultimately, the series doesn’t need a story. If you don’t believe me, you might want to check out a little game called Half-Life. That game has no story. It’s an experience. You, the mute protagonist, travel through a bunch of levels solving puzzles and fighting monsters. That’s it. There’s no story there, just a journey through a world.

If Valve wants to make another Half-Life, they should go back to basics. Their development style doesn’t lend itself well to storytelling, but simply creating an enjoyable world with fantastic enemies and set pieces? That should be no problem at all. Several times, I’ve surveyed people, asking them what their favorite levels in Half-Life 2 were: with just one exception, everyone mentioned either Ravenholm or Highway 17–levels where Gordon was on his own, not locked in a room having a story told at him. People love Half-Life’s loneliness. They say they like its characters, but when it comes to what they actually enjoy, they prefer playing without them.

Make another Half-Life, Valve, not another Half-Life 2. That’s how you get out of this mess.

Or, y’know, use traditional development methodology.

Why Bioshock 2 is the Art Game You’ve Been Looking For

I find it strange that Bioshock gets a great deal of love. I didn’t use to, back when it first came out, because it is a clever, unique, and interesting game, with a lot of cool ideas, but then I played two very important games: System Shock 2 and Bioshock 2.

System Shock 2 revealed Bioshock for what it was–a nice-looking, but shallower representation of the Shock ideal. Bioshock was a simpler creature, lacking the vim and verve of its spiritual predecessor. When you stripped away Rapture, there wasn’t much left. Gone were the guns that broke, the inventory management, the reasons to go back to previous levels and have a look around. Gone were the big ideas, too, and the characters that drove them. The gameplay had tightened up significantly, but even though powers were easier to use, they tended to be far less interesting. The dearth of enemy types hurt the game as well.

Still, it was unique, and it did make a rather interesting point about video games: choice is created by the developer; everything is fake. You are a slave to the game. You do not have total freedom. You are a puppet, dancing at the developer’s whims.

I can forgive Bioshock for not having the best combat ever. Half-Life doesn’t have the best combat ever, but it’s still pretty fun, after all. I can forgive it for not having an inventory system, because they did a pretty good job making the game without it. I have a harder time forgiving the lack of good characters, but Rapture and Andrew Ryan alone made for an interesting world. Putting a numerical score on it, I’d still give the game a solid 9 out of 10, because the experience transcends its many weaknesses.

But… Bioshock 2 is by far the better game.

Wait; let’s back up a bit. Wasn’t Bioshock critically acclaimed? Didn’t a lot of people talk about how great that point was? In fact, wasn’t the largest criticism about Bioshock 2 the fact that it didn’t need to be made because Bioshock was so perfect?

Okay, yes, a lot of people did talk about how great the point was, and they did go on to say that Bioshock 2 didn’t need to be made because Bioshock was complete as it was… but… saying that a sequel to a great game doesn’t need to be made? That’s a really uncommon criticism. In fact, I don’t think I’ve ever heard someone say “this great game doesn’t need a sequel!” It’s just not a thing people do. So why did Bioshock warrant this claim? Was it really so perfect? Was it the best game ever made, so perfect that it could cure disease, kiss infants, and make you smarter just by thinking about it?

I think not. Generally, a very good game is a game that most people, once they’ve played it, like. They might not have heard of it, it might not have sounded interesting at first, but if it’s truly good, then most of the people who pick it up are going to really enjoy it. Bioshock is rather interesting because the response to the game seems to be rather cool. There’s a surprising number of gamers who actually didn’t enjoy it all that much.

In fact, the most common complaint I heard about the game went: “I love Rapture, but the game isn’t very fun.”

It’s interesting to note that Ken Levine, when first revealing Bioshock Infinite, said something along the lines of: “Bioshock wasn’t about Rapture, it was about exploring new worlds.” It makes sense, then, that the biggest appeal of Bioshock would be the discovery of Rapture. Likewise, it makes sense that people might not be hyped for Bioshock 2, even if they claimed to love its predecessor. In truth, the appeal of Bioshock was, by and large, the discovery of its world–that idea of being under the sea for the first time, the newness of its wonderful early-60s aesthetic, that first appearance of those freaky men in diving suits highlighted by neon, creepy little girls trailing behind them excitedly talking about angels. It’s no surprise, then, that Bioshock 2 didn’t garner the hype that its predecessor did; Rapture had been done, and the game behind it was mediocre at best. No one really wanted to play Bioshock; they wanted to discover something new.

In this way, it makes perfect sense that the biggest argument against Bioshock 2 was that people had already played Bioshock: the appeal of Rapture had worn off, and people had actually begun to dread the idea of playing Bioshock again.

But, you see, Bioshock did need a sequel. While the gameplay itself might have been lackluster, the point it made–this idea that choice is illusory, that freedom isn’t real, that the developer need not be burdened by the medium’s strength, interactivity–was a bad one. It’s sad to me that the developers at Irrational feel this way; indeed, the worst part of Bioshock was the part where it revealed itself to the player, removed all choice, and turned itself into an empty, linear experience. The game was genuinely interesting when it offered you choices, but when the folks at Irrational decided that they’d had enough and removed choices from the game, it became far less interesting.

Any artist will tell you that the best art is that which plays off its medium’s strength. A film built entirely around reading words on a screen isn’t a film worth watching. Likewise, a sitcom that tries to use filmic storytelling isn’t going to work because film’s pacing doesn’t allow all that much to happen in half an hour. Gaming’s strength is its interactivity–as soon as you can interact, that means that the gameplay is choice-driven. A developer who chooses not to capitalize on that strength, instead going for the “choice is fake!” route, does a disservice to the medium.

People like to say that choice is an illusion, but that’s only true if there are no consequences. Over the weekend, I played Back to the Future parts 4 and 5. I had the choice to pick various dialog options, but only one of them was the “correct” option. If I tried to tell Citizen Brown that a character would live a happy life in the future, Marty would invariably say something stupid, the dialog option would be removed, and I’d have to pick whatever option was laid out for me. That’s the illusion of choice. One example I’m fond of using is an ice cream store. A store claiming to offer hundreds of flavors, but truly offering only vanilla, is offering nothing more than the illusion of choice. A store offering a limited selection of different flavors, however, is offering choice, no matter how limited that choice may be. If something changes, then you have, in fact, made a real choice, regardless of the size of the consequence. Maybe it’s simply the difference between chocolate and vanilla.
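The distinction is structural, and it’s easy to see in code. Here’s a hypothetical sketch (my own toy, not Telltale’s actual dialog system; every name in it is made up) of the difference between a dialog menu that funnels every option to the same outcome and one where an option writes state that later content actually reads:

```python
# Hypothetical sketch contrasting illusory choice with real choice.
# My own toy example -- not Telltale's actual dialog system.

game_state = {"citizen_brown_reassured": False}

def illusory_choice(option: str) -> str:
    """Every option collapses into the same next scene; the branches
    exist only in the menu."""
    if option != "scripted_line":
        print("Marty says something stupid; the option is removed.")
    return "next_scene"  # identical outcome no matter what you picked

def real_choice(option: str) -> str:
    """Even a small branch is a real choice if a consequence persists."""
    if option == "reassure_citizen_brown":
        game_state["citizen_brown_reassured"] = True
    return "next_scene"

real_choice("reassure_citizen_brown")

# Later content can branch on what the player actually did:
if game_state["citizen_brown_reassured"]:
    print("A character remembers what you said earlier.")
```

Chocolate instead of vanilla is a small consequence, but it is a consequence; the funnel offers none.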

That Bioshock would effectively argue “there are no real choices in gaming! This is all it can be!” is, then, rather sad. It’s a myopic take on the medium. It’s an inherently limiting idea. This is where Bioshock 2 came in. Where Bioshock said “hah! gotcha! choices are fake,” Bioshock 2 assessed the situations and went for something significantly different.

In Bioshock 2, you are Subject Delta, an early-model Big Daddy, bonded to a little girl, Eleanor. You were killed by Sofia Lamb, Eleanor’s mother. Resurrected after the downfall of Rapture, you wake to discover that Sofia has been turning Eleanor into some sort of superheroic savant, capable of bringing Sofia’s dream of a Marxist family to the world. You need to get to Eleanor. In a way, you are her slave. The entirety of the game is built around making your way to Eleanor to free her. It does not appear you have much choice in the matter–without her, you will die and the world will be doomed. Unlike Bioshock, the maps are actually linear. You appear, at first glance, to have even less choice!

…but…

You meet Grace, Stanley, and Gil, Sofia Lamb’s lieutenants. At each juncture, you have a choice. You can kill them or you can walk away. One of them was a pawn, another was misguided, and another was a key figure responsible for your slavery. Each one tries to kill you, and, as such, it could be argued that each one deserves to die.

I chose not to.

When I died, at the end of the game, and Eleanor absorbed my consciousness into her own, a profound thing happened: she chose to be a better person. She, with the powers of a goddess and the upbringing of a Marxist, realized the power of choice. She realized that we each needed to choose for ourselves the kind of person we would choose to be. She learned that from me. She could have forced the world, kicking and screaming, to be remade in her image, and maybe some would have thought it a better place, but Eleanor realized that it wouldn’t truly have been. I showed her the value of freedom.

Where Bioshock argued that choice in games could be nothing more than an illusion, Bioshock 2 made the counterpoint that, no matter how limited the choices may be, they can have a profound impact on the world of the game, and that is true choice. The value of the choice is not based on the audience’s willingness to believe–it’s a burden placed upon the developers. There is nothing that says choice must or must not matter.

Bioshock 2 is a game that capitalizes on interactivity, the element that separates video games from visual media like film and television. It offers a metatextual counterpoint to its predecessor, Bioshock, in addition to making a point about how our choices affect others (unlike other media, as a game, Bioshock 2 actually allows us to see how our choices have an effect), and it does so while featuring better gameplay and storytelling than its predecessor. If you want to argue that games are art, Bioshock 2 is one of the best examples you could possibly use.

Why First-Person Stealth is Best

(Originally posted here; has 16,434 views)

I’m done playing third-person stealth games.

I can’t do it anymore.

I want to, believe me, but… yeah. No. I can’t. I love stealth. I love the idea of stealth. I love sneaking through a level, either ghosting it or taking out everyone without being noticed. There’s a feeling of empowerment there that comes with solving the puzzle that is a good stealth level.

Look, have you ever played a game that you’ve broken? I’m talking about a game like Skyrim, where you mod a sword to have 9999 damage so it kills everything in one hit, completely removing the challenge from the game. I’m talking about cheating.

Like many of you taffers, I once believed that third person was the only way to do stealth. I thought that it was the only way to figure out whether you could move, because as soon as you get into an AI’s line of sight, they’ll notice and start looking for you, and that really makes or breaks a stealth game.

I love the genre—whether it’s Assassin’s Creed or Splinter Cell or Hitman or whatever—but little did I know that they were all doing it wrong. Harsh words, I know, but bear with me.

Recently, an old stealth game was rereleased after a thirteen-year absence from store shelves. It was called Thief, and it was developed by the guys who went on to make games like Deus Ex and Skyrim.

Unlike most stealth games, it was in first person.

How did they get around the line of sight stealth problems, you might ask? Well… they didn’t. See, line-of-sight is actually horrible. In real life, stealth doesn’t work that way. Line-of-sight is a method that’s used only because it’s incredibly simple to create. It is, in fact, rather lazy. A third person camera basically exists as a gameplay abstraction designed to keep the player from giving their position away whenever they want to see if they can move.

In real life, you could listen to the position of people, poke your head around corners without being noticed, and hide in the shadows without being seen. In a game where all stealth is based on line-of-sight, you can’t do that, so you have to be in third person, or it sucks.

…well…

What if you made a stealth game where you could listen to the position of other people, poke your head around corners without being noticed, and hide in the shadows without being seen? That would be a lot better, right?

Turns out it is.

It adds a whole new layer of challenge to stealth. It requires intelligence to play. Sound becomes a fantastic method of level navigation. It means you don’t need to cheat and look around corners unrealistically, because now you can hear guards snorting or sneezing or chatting or whistling or even just walking.
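To illustrate what that buys you mechanically, here’s a toy sketch of a light-and-sound detection model. This is my own guess at the general shape of such a system, not Thief’s actual code; all the numbers and names are made up:

```python
# Toy sketch of a Thief-style detection model -- my own guess at the
# general shape of such a system, NOT Looking Glass's actual code.
# Instead of a binary line-of-sight check, a guard scores how lit the
# player is and how much noise they make, attenuated by distance.

def visibility(light_level: float, distance: float) -> float:
    """0..1: deep shadow hides you even in plain line of sight."""
    return light_level / (1.0 + 0.2 * distance)

def audibility(noise: float, distance: float, surface_gain: float) -> float:
    """0..1: footsteps on marble (high gain) carry farther than carpet."""
    return min(1.0, noise * surface_gain / (1.0 + 0.5 * distance))

def guard_reaction(light, noise, distance, surface_gain=1.0) -> str:
    score = max(visibility(light, distance),
                audibility(noise, distance, surface_gain))
    if score > 0.7:
        return "alerted"      # the guard hunts you
    if score > 0.3:
        return "suspicious"   # "What was that noise?"
    return "oblivious"

# Standing still in deep shadow, three meters away: unseen and unheard.
print(guard_reaction(light=0.05, noise=0.1, distance=3))  # oblivious
# Sprinting across a lit marble floor right next to a guard: busted.
print(guard_reaction(light=0.9, noise=0.8, distance=1, surface_gain=1.5))  # alerted
```

The point is that detection becomes analog and player-readable: stand in shadow, walk on carpet, and you are genuinely hidden, even in plain line of sight.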

Do you have any idea how amazingly badass it is to hide in the shadows right in front of a guy, step out like Batman himself, and stab him in the face? It’s incredible! There’s no feeling like it in the world (besides being Batman!).

I can’t go back to third-person stealth after this. There’s no depth to it—no challenge beyond an arbitrary, unrealistic, and unforgiving line-of-sight system and the occasional “DON’T MAKE NOISE!” component.

Thief is the best stealth game I’ve ever played.