
Game Design as a Science

In my recent PhD thesis proposal I described how I am going to tackle game design as a new domain for automated discovery.  A key piece of this is figuring out how game design might be explained as the kind of knowledge-seeking effort you might expect from science or mathematics.  With successful systems already performing discovery (such as Simon Colton’s HR system, which contributed some new and interesting knowledge in pure mathematics) and new projects beginning to automate the process of exploring a space of games (such as Julian Togelius’ Automatic Game Design experiment), the idea of mashing these together into a “game design discovery system” seems quite attractive to me.

But hold on a second: game designers don’t think of themselves as scientists or mathematicians any more often than painters or musicians do.  What’s going on here?

Think about the process of exploratory design.  Suppose you know about some established game mechanics and you’ve seen them used over and over by others.  You’ve got a reasonable model of how these mechanics manifest in play and how players relate to them.  Looking for a bit of that prized “innovative gameplay” juice, you take two mechanics you know about (but have never seen together) and mash them up into some grotesque-yet-playable artifact.  You do this not because you know it is going to result in something desirable but precisely because you don’t know what the result is going to be.  It’s an experiment, and you can’t wait to see what your friends will do when they play it.
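To make that recombination move concrete, here is a minimal sketch in Python.  The mechanic names and the flat set-of-mechanics representation are entirely made up for illustration; the point is just the shape of the experiment: pick a pairing you have never seen and build something playable around it.

```python
import itertools
import random

# Toy library of mechanics the designer already understands (hypothetical names).
known_mechanics = {"gravity_flip", "resource_trading", "permadeath", "rhythm_input"}

# Pairings already seen together in existing games.
seen_pairs = {
    frozenset({"gravity_flip", "rhythm_input"}),
    frozenset({"resource_trading", "permadeath"}),
}

def novel_combinations(mechanics, seen):
    """Yield pairs of known mechanics that have never appeared together."""
    for a, b in itertools.combinations(sorted(mechanics), 2):
        if frozenset({a, b}) not in seen:
            yield (a, b)

# The "experiment": choose one untried pairing and build a candidate design around it.
candidates = list(novel_combinations(known_mechanics, seen_pairs))
mechanic_a, mechanic_b = random.choice(candidates)
candidate_design = {"mechanics": [mechanic_a, mechanic_b], "status": "untested"}
print(candidate_design)
```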

Given your experience with this new artifact and the knowledge of what went into it (as well as your previous experiences building other artifacts with those blocks), you improve your understanding of design, updating your design theory if you will.  Bold experiments set you up for big surprises.  In the context of exploratory design, you have to do some serious creative thinking.

Maybe you really had some other goal when doing this (fame, fortune, or just producing an existence proof that games that include mechanic X without including mechanic Y are too predictable), but that action you took of making the wild combination of known-but-unrelated elements could equally be explained by a thirst for knowledge — that you did it as a literal experiment in a larger discovery process.  Recombination isn’t the only action that fits this model.  So does naming and detailing a new mechanic and showing where it was unknowingly used in the past (proposing a new element in a taxonomy of mechanics), and so does coming up with a rule of thumb that helps you predict how one particular class of players might behave when such a mechanic is included in a game with suitable representation (proposing a model that links observable gameplay properties to unobservable design elements).
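Read as discovery actions, those moves all produce the same kind of thing: a symbolic claim about design space.  Here is a rough sketch of how such claims might be recorded; the schema and the example statements are hypothetical placeholders, not a commitment to any particular representation.

```python
from dataclasses import dataclass, field

@dataclass
class DesignClaim:
    """One piece of symbolic design knowledge produced by a discovery action."""
    kind: str        # "recombination", "taxonomy", or "rule_of_thumb"
    statement: str   # human-readable form of the claim
    evidence: list = field(default_factory=list)  # games / play traces supporting it

claims = [
    DesignClaim("taxonomy",
                "'soft fail states' name a mechanic family; several older games used one unknowingly"),
    DesignClaim("rule_of_thumb",
                "cautious players grind longer when collectibles are visible on the map"),
]
```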

Certainly, a true understanding of player actions is likely to be as complicated as understanding general human thought, so that’s not going to work in an automated discoverer.  What is more realistic, I claim, is to look at how particular players play particular games.  Maybe, in the context of symbolic rules and mechanics, there will be approximate explanations for the choices various players make that are much easier to describe concisely.  That is, I want my intelligent game designer to literally reason about the way an audience reacts to the artifacts it produces (painters and musicians do this too), but I want to scope this reasoning to constructs that are reasonable to represent on a computer.  By feeding human-generated (as well as machine-generated) play traces and the symbolic knowledge used to construct the games that were played into the familiar data-in-model-out machine learning tools, I hope my system will come to understand this grounded notion of gameplay.
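As a sketch of that data-in-model-out step, imagine predicting one coarse, observable player choice from symbolic properties of the game plus a label for the kind of player at the controls.  Everything here is invented for illustration (the features, the toy data, and scikit-learn standing in for whatever learner actually gets used).

```python
# Minimal sketch: learn an approximate link between symbolic design elements
# and an observable gameplay property.  Features and data are hypothetical.
from sklearn.tree import DecisionTreeClassifier

# Each row: [has_permadeath, has_resource_trading, player_is_cautious]
X = [
    [1, 0, 1],
    [1, 0, 0],
    [0, 1, 1],
    [0, 1, 0],
]
# Observable outcome taken from the play trace: did the player avoid combat?
y = [1, 0, 0, 0]

model = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(model.predict([[1, 1, 1]]))  # predicted behavior for an untested combination
```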

My system will focus on the production of symbolic knowledge, but it will produce games too.  The construction of playable games will be the primary medium by which the system is able to experiment, drawing information about human players in from its surroundings.  Unlike an interactive genetic algorithm, the system won’t blindly generate piles of artifacts and ask human players to play all of them (nor will it ask for a super-compressed fitness evaluation in the form of a 0-to-5-star fun rating).  Instead, it will mostly play its own games, using its working knowledge of how different kinds of players act, and reserve the attention of human players for games that represent edge cases or unknown regions in the system’s current theories.  The difference between the results of self-play and the observations of its human friends should guide experimentation towards the regions the system should explore to shore up its personal theory.
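A toy version of that experiment-selection loop might look like the following.  The function names, the candidate games, and the disagreement-based uncertainty measure are all placeholders, not the actual system; the sketch only shows the idea of self-playing everything and spending human attention where the current theory is least sure of itself.

```python
import random

def self_play_predictions(game, n_trials=20):
    """Stand-in for simulated play under the current player models.
    A real system would run its learned models; here we just guess."""
    return [random.choice(["avoids_combat", "seeks_combat"]) for _ in range(n_trials)]

def uncertainty(predictions):
    """Disagreement among self-play runs: 0 = theory is confident, 0.5 = coin flip."""
    p = predictions.count("avoids_combat") / len(predictions)
    return min(p, 1 - p)

candidate_games = ["game_A", "game_B", "game_C", "game_D"]

# Rank candidates by how unsure the current theory is about them...
ranked = sorted(candidate_games,
                key=lambda g: uncertainty(self_play_predictions(g)),
                reverse=True)

# ...and only the most informative edge cases get handed to human players.
send_to_humans = ranked[:2]
print(send_to_humans)
```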

Ok, astute audience members, what say you to this?
