
Ken Perlin on “Interdisciplinary Media Technology Research” (Media Systems)

I am happy to announce that we are publishing the final four videos from the Media Systems gathering — and that the final report, “Envisioning the Future of Computational Media,” is now available through print-on-demand!

We’re kicking off the last group with a talk from Ken Perlin, offering a vision of how computational media can become integrated throughout the curriculum, as something both written and read. Unlike most of our talks, this video focuses on the screen, where Perlin goes through a series of high-speed, interconnected demonstrations. He begins with a discussion of enabling believable interactive characters, arguing that one key is characters who can carry out intelligent performances of their roles based on the kinds of high-level direction an AI system or audience interaction (e.g., with a game controller) can provide. He shows two prototypes of such characters, able to give engaging, coherent, grounded performances in real time. These simple characters arose from research that deeply combines procedural computer graphics with the arts, particularly animation and puppetry (Perlin regularly collaborates with puppeteers).
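
Perlin is, of course, best known for the noise function that bears his name, and noise-driven procedural motion is one classic route to lifelike ambient movement. The talk's actual systems are far richer, but the flavor can be suggested with a minimal, hypothetical Python sketch: layered gradient noise drives a single joint angle, and a "nervousness" parameter stands in for the kind of high-level direction an AI director or a player's controller might supply. None of this code is from the talk; every name here is illustrative.

```python
import math
import random

random.seed(7)
# One random gradient per integer lattice point (illustrative setup,
# not code from Perlin's systems).
_grad = [random.uniform(-1.0, 1.0) for _ in range(256)]

def _fade(t):
    # Perlin's fade curve: 6t^5 - 15t^4 + 10t^3.
    return t * t * t * (t * (t * 6 - 15) + 10)

def noise1d(x):
    """Smooth 1-D gradient noise (small values, zero at lattice points)."""
    i = math.floor(x)
    t = x - i
    g0 = _grad[i % 256] * t
    g1 = _grad[(i + 1) % 256] * (t - 1.0)
    return g0 + _fade(t) * (g1 - g0)

def head_sway(time_s, nervousness=0.0):
    """Joint angle in degrees from two layered noise bands.
    'nervousness' is a hypothetical high-level direction knob of the
    sort an AI system or game controller might set."""
    calm = 8.0 * noise1d(time_s * 0.3)           # slow ambient sway
    jitter = 3.0 * noise1d(time_s * 4.0 + 100)   # fast tremor layer
    return calm + nervousness * jitter

# Sample the performance at a few points in time.
for frame in range(5):
    print(f"{head_sway(frame * 0.1, nervousness=0.7):+.2f} deg")
```

The design point, as a sketch, is that the animator authors the texture of motion while a single scalar of direction reshapes the whole performance, which is what lets high-level control produce coherent behavior in real time.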

He next shows a split-screen interface that pairs a readable view of a particular section of a book with a view of the entire manuscript. Using Pride and Prejudice as an example, Perlin shows how simple buttons can let students ask “distant reading” questions of the sort popularized by the digital humanities, such as examining the patterns of mention (and collocation) for key terms like the names of major characters and locations. He shows how the code behind these buttons can easily be exposed and modified to create new ones, and how doing this in a live, shared document could enable new kinds of classroom conversation. Such capabilities are the foundation of his approach.
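
To give a concrete sense of what one of those “distant reading” buttons might compute, here is a minimal Python sketch: it counts mentions of a few character names and tallies which pairs co-occur in the same paragraph, a crude collocation measure. The file name and the paragraph-sized window are assumptions for illustration, not details from the demo.

```python
import re
from collections import Counter
from itertools import combinations

# A subset of character names to track, for illustration.
NAMES = ["Elizabeth", "Darcy", "Jane", "Bingley", "Wickham", "Lydia"]

def mention_counts(text):
    """How often each name is mentioned across the whole text."""
    return {n: len(re.findall(rf"\b{n}\b", text)) for n in NAMES}

def co_mentions(text):
    """Count pairs of names appearing in the same paragraph."""
    pairs = Counter()
    for para in text.split("\n\n"):
        present = sorted(n for n in NAMES if re.search(rf"\b{n}\b", para))
        pairs.update(combinations(present, 2))
    return pairs

# 'pride_and_prejudice.txt' is a hypothetical local copy of the
# Project Gutenberg plain-text edition.
with open("pride_and_prejudice.txt", encoding="utf-8") as f:
    novel = f.read()

print(mention_counts(novel))
print(co_mentions(novel).most_common(5))
```

A classroom button would presumably wrap a query like this behind a label, with the code one click away for students who want to change the names, the window, or the question itself.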

Next Perlin demonstrates the same kinds of connections between code and media views for three-dimensional objects. The first version of an object can be created with a gesture (a mouse gesture, or an embodied gesture detected by a Kinect-style sensor) and can then be viewed both as an object and as code, with bridges back and forth. These bridges run in both directions: changes to the code update the object’s visual representation live, widgets appear on the visual representation of the object being edited, and adjustments made with those widgets produce live updates in the code. This extends not only to objects but to animation, with the ability to change shapes and blending operations while animations are running, giving the impression that the model is being updated every animation cycle. This not only invites experimentation and refinement in both code and visual modes, but builds deeper understanding of the connection between the two. It makes code a powerful path for media creation, enabling iteration, scaffolding, and incremental movement to deeper engagements with the code level.
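
The shape of such a live bridge can be suggested with a toy Python sketch, not taken from Perlin’s system: a shared parameter store that both the “code view” and a direct-manipulation widget write through, with the model function re-evaluated every frame so edits from either side appear mid-animation. All names are hypothetical, and the “rendering” here is just printed numbers standing in for real 3-D geometry.

```python
import math

# Shared parameter store. Both the textual "code view" and on-screen
# widgets read and write these values, so an edit made in either
# place is visible on the next frame. (All names are hypothetical.)
params = {"radius": 1.0, "blend": 0.0}

def model(t):
    """Evaluated fresh every frame, so changes to params (or, in a
    live-coding environment, to this function's own body) take
    effect while the animation is running."""
    pulse = 1.0 + 0.2 * math.sin(t)
    round_profile = params["radius"] * pulse   # sphere-like size
    square_profile = params["radius"]          # box-like size
    b = params["blend"]                        # 0 = round, 1 = square
    return (1 - b) * round_profile + b * square_profile

def widget_drag(name, value):
    """Stand-in for a direct-manipulation widget: because it writes
    through the same store the code reads, the code view updates too."""
    params[name] = value

# A short animation loop with a mid-animation edit at frame 3.
for frame in range(6):
    t = frame * 0.5
    if frame == 3:
        widget_drag("blend", 0.9)
    print(f"t={t:.1f}  size={model(t):.3f}  params={params}")
```

The single shared store is the whole trick in this sketch: because neither view owns the state, code edits and widget drags are just two interfaces onto the same object, which is what makes the impression of the model updating every animation cycle possible.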

Returning to Perlin’s foundations in the extended book, he shows how these connected visual/code objects could be embedded within it. The reading and writing of visual/code objects — from smart, engaging characters to revealing visualizations — can take place in the same context as, and interact with, bodies of text. In short, Perlin showed a working demonstration of a platform for collaborative reading and writing of this new sort, defining a potential future in which we stop asking how students will learn programming and the vocabularies of computational media, and instead make these an embedded part of learning every subject.

If you wish to discuss the ideas in Perlin’s talk further, please leave comments here or take to Twitter with the #MediaSystems hashtag. Also, please check out our previously posted videos and watch for Pamela Jennings’s talk, coming next!


This material is based upon a project supported by the National Science Foundation (under Grant Number 1152217), the National Endowment for the Humanities’ Office of Digital Humanities (under Grant Number HC-50011-12), the National Endowment for the Arts’ Office of Program Innovation, Microsoft Studios, and Microsoft Research.

Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of these sponsors.
