Among those doing computational media work, the concept of "operationalization" (as Nick Montfort discusses in this video from the Media Systems gathering at UC Santa Cruz) involves formalizing theories from the humanities, arts, and social sciences and implementing them in a computational system, where they can be effective in new ways and "tested" in certain senses. This has proven a very powerful approach. For example, the entire field of 3D graphics can be seen as operationalizing knowledge about visual perspective and other knowledge from the visual arts. Or, more specifically, Facade (generally seen as the first interactive drama) explicitly operationalizes concepts from arts and humanities theories of dramatic writing.
But operationalization is not often practiced. One might speculate that this is due to a lack of respect between fields, or to the difficulty of translating between the types of knowledge found in these theories and the types that go into engineering system building. However, at the end of this video, Michael Young suggests what I believe is a more compelling explanation: the work of operationalization almost always involves novel scholarship. Narrative theorists, for example, generally stop short of the data structure level. And few system builders are prepared to do novel scholarship in the humanities, arts, or social sciences. So operationalization remains an unusual approach, despite its demonstrated power.
Of course, there are some people who are deeply qualified to combine novel theoretical contributions with innovative system building. Montfort is a prime example. In this talk he discusses two particular projects, on different ends of the complexity scale.
The first is Curveship, a large project that operationalizes parts of a particularly influential humanities understanding of literature: Genette's Narrative Discourse. Using Curveship, Montfort retells the seminal Adventure in styles ranging from typical interactive fiction narration (second person, present tense) to "memoir" (first person, past tense) and "retrograde" (events narrated in reverse order). But retelling Adventure in different styles was not the end goal of the project. As Montfort explains:
At a higher level, what I’m doing in creating a system like this is to see what of Genette’s ideas about narrative are susceptible to a computational implementation. What can I put into a system like this? His ideas about voice, mood? His ideas about frequency, speed, order? What things can I simulate and what things are harder to simulate?
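The division Montfort describes, between the underlying events of a story and the "spin" with which they are narrated, can be pictured with a toy sketch. This is not Curveship's actual design or API; the event tuples, verb table, and spin parameters below are invented for illustration:

```python
# A toy sketch (not Curveship) of separating what happens from how it is told:
# the same event sequence rendered under different narrative "spins" for
# grammatical person and tense.

PAST = {"enter": "entered", "see": "saw", "take": "took"}
PRONOUN = {"first": "I", "second": "you"}

def narrate(events, person="second", tense="present"):
    """Render (verb, object) event tuples as sentences in a chosen style."""
    sentences = []
    for verb, obj in events:
        v = PAST[verb] if tense == "past" else verb
        sentences.append(f"{PRONOUN[person].capitalize()} {v} {obj}.")
    return " ".join(sentences)

events = [("enter", "the cave"), ("see", "a lantern"), ("take", "the lantern")]

# Typical interactive fiction style: second person, present tense.
print(narrate(events))
# "Memoir" style: first person, past tense.
print(narrate(events, person="first", tense="past"))
```

Even a sketch this small shows why the question Montfort poses is interesting: person and tense vary cleanly as parameters, while aspects like frequency, speed, and order would require much richer event and discourse models.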
This not only offers insights into what is possible for computational narrative; it can also yield new insights about the theories themselves. One thing Montfort learned while building Curveship is that what Genette calls "narrative distance" can actually be built up from other narrative aspects Genette discusses. It is not an independent element, as the theory tends to suggest.
The second project Montfort demonstrates is Through the Park, a "degree zero" operationalization of ellipsis, showing how data must be shaped to match process: simpler processes often require more careful crafting of data, to account for things that a more complex system would model and operate on. Montfort shows how even the simplest possible implementation can create a meaningful experience, provide insight into how a technique works, and present limitations that can guide future investigation (e.g., Through the Park's inability to handle anaphora and reference).
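As a rough illustration of this data-process relationship, here is a minimal sketch of ellipsis as a generation technique. It does not reproduce Through the Park's actual text or code; the sentences and function below are invented. The point is that when the only process is omission, the data must be written so that any sentence still reads coherently when its neighbors are gone:

```python
import random

# Placeholder sentences (not Montfort's text): a complete, hand-crafted
# walk through a park, written so that dropping any middle sentence
# still leaves a coherent, if gappier, story.
BASE_SENTENCES = [
    "The walk began at the iron gate.",
    "Leaves gathered along the path.",
    "A dog barked somewhere out of sight.",
    "The path bent toward the fountain.",
    "The fountain was dry.",
    "The walk ended at the far gate.",
]

def elide(sentences, keep_probability=0.6, rng=None):
    """Keep the first and last sentences; drop each middle sentence with
    some probability. Omission is the entire process, so the burden of
    coherence falls on how the data was crafted."""
    rng = rng or random.Random()
    middle = [s for s in sentences[1:-1] if rng.random() < keep_probability]
    return [sentences[0], *middle, sentences[-1]]

story = elide(BASE_SENTENCES, rng=random.Random(7))
print(" ".join(story))
```

The gaps the reader fills in do expressive work, but the sketch also makes the limitation Montfort notes concrete: nothing here tracks anaphora or reference, so a pronoun whose antecedent was elided would simply dangle.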
Showing works of these very different scales is connected to Montfort’s work promoting the idea of doing initial explorations with small, simple systems of the sort that computer science doesn’t normally support. As he points out, you can’t get a PhD doing a system built in a day, but should you really start investigating an area with something built in three years? A technical report from Montfort’s lab also advocates this approach: “XS, S, M, L: Creative Text Generators of Different Scales.”
If you have thoughts, please feel free to discuss them in the comments here or on Twitter with the #MediaSystems hashtag. This talk also generated engaged discussion during the presentation, with Janet Kolodner, Ken Perlin (whose talks will be posted in future weeks), and others alternately joking with Montfort and drawing greater specificity out of him. One very useful aspect of this back-and-forth is that it revealed the difficulties of discussing computational media work: Is it a demonstration of a system? Is the system itself a demonstration of a concept or approach? Is it an artwork, to be evaluated on those terms? Is it a tool, an instrument, a plaything? Montfort's talk alone included works of different sorts, and works that spanned multiple categories, usefully illustrating this issue.
As with Alex McDowell's talk posted last week, PDF slides for this talk are available on the main Media Systems page. Watch for Ian Bogost's talk here next week, continuing the theme of operationalization!
This material is based upon a project supported by the National Science Foundation (under Grant Number 1152217), the National Endowment for the Humanities: Exploring the human endeavor, the National Endowment for the Arts, Microsoft Studios, and Microsoft Research.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation, the National Endowment for the Humanities, the National Endowment for the Arts, Microsoft Studios, or Microsoft Research.
About the author: Noah Wardrip-Fruin is a Professor of Computational Media at UC Santa Cruz and the author of Expressive Processing: Digital Fictions, Computer Games, and Software Studies.