Have you ever wanted to control how much you know about a character's thoughts as you read a book?
Imisphyx V is a series of conversations that take place between an ensemble of characters in a novel. You control which characters speak, and whether or not you see their thoughts. The installation uses reacTIVision and the TUIO library in Processing to experiment with non-linear storytelling. I built this piece as a capstone project for Golan Levin's advanced studio course, Interactive Art and Computational Design, in Spring 2013.
Content adapted from my coursework blog.
Also available as a journal paper with extended discussion of technical specifications.
This is the fifth iteration of Imisphyx, an invented word that, in Greek, would mean something close to “fake pulse.” The first version is the original novel manuscript, itself still a work in progress. Imisphyx II is a choose-your-own-adventure vignette set in the same universe, written in Twine and available in the writing section of my portfolio. Imisphyx III lives there too: a tangible story based on a set of Polaroids of artifacts, with associated police evidence sheets filled out by one of the novel’s antagonists.
At its core, Imisphyx tells a story about roughly ten characters, living in a universe where a chemical extract has been developed that synthesizes faith—i.e. it can cause irrational zealotry or utter despondency in an otherwise stable person in a matter of hours. The thing about such a premise, however, is that a story only ever amounts to what somebody believes happens. So, depending on what characters you choose to believe as a reader, the narrative can change radically.
It turns out it is difficult to write a novel that morphs as it is told, and with whoever is telling it, particularly when you are limited to pen and paper. But the desire to get these characters out of my head has driven me toward alternative forms of storytelling.
Imagine a play where the audience votes on which actors appear on stage in any given scene, and which props they can use. The reader can choose to experience the story out of chronological order, or character by character. In addition, each character has multiple states: an internal (first-person PoV) state, where the reader can see thoughts; a conversational (third-person PoV) state, where one sees only what a character says out loud; and a biography state that displays basic facts.
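The three per-character states described above can be sketched as a small state type. This is my own illustrative Java sketch (the names are mine, not from the installation's code), showing one way a character object could cycle between its display modes:

```java
// Hypothetical sketch of the three display states each character object
// can occupy in the installation: biography, conversational, internal.
enum CharacterState {
    BIOGRAPHY,      // displays basic facts about the character
    CONVERSATIONAL, // third-person PoV: only spoken dialogue is shown
    INTERNAL;       // first-person PoV: inner thoughts become visible

    // Advance to the next state, e.g. when the reader manipulates the object.
    CharacterState next() {
        CharacterState[] all = values();
        return all[(ordinal() + 1) % all.length];
    }
}
```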
An initial concept design for my interactive storytelling table, outlining the basic states of character objects.
An initial concept design for my interactive storytelling table, outlining the basic states of interaction objects, as well as timeline controls.
Another example scene using the same basic story controls. This one involves two characters from Alfred Bester's novel, The Demolished Man.
An example scenario combining the different objects discussed in the designs. In this case, two characters are conversing in 3rd person mode, while a third character observes on the sidelines in 1st person mode.
A second scenario, where three characters are conversing in 3rd person mode, but an attempt to add a fourth character results in an error because that character doesn't exist in the scene at that time.
A Quick Proof of Concept
My intent all along was to build the concept using a reactable and the reacTIVision library, but it took some time to gather all the necessary hardware and understand the code well enough to bend it to my will. In the meantime, I created a quick on-screen prototype in Processing, Imisphyx IV, which I used to organize the story content, smooth the timing of the dialogue, and experiment with different animations. In this version, a maximum of two characters could be "loaded" at a time, reducing the elasticity of the story.
Above: A screenshot of my Processing based prototype, showing two characters engaging in a conversation after being "loaded" into the opposing slots. Objects for the characters to talk about exist at the bottom of the screen. Each character also has a small biography. Below: Inside the table as I was building it. A wide angle projector coupled with a carefully angled mirror delivers animations from my computer to the plexiglass surface of the table, while infrared cameras track the movement of the character objects across the surface.
Building the Table
Once I’d worked out the general timing of the story and animations, I moved on to assembling the hardware I would need to truly bring the interactions to life. I built an interactive table that uses an infrared camera to track fiducial markers on a semi-opaque plexiglass surface. Simultaneously, a projector displays an image that reacts to the presence and geometry of each marker. The reacTIVision computer vision framework tracks the fiducials and sends OSC messages that can be read and interpreted by any number of programs. Here, I used the TUIO library for Processing to receive the signals and draw the resulting projected image—in this case, lots and lots of text.
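One detail worth noting about this pipeline: TUIO reports marker positions in coordinates normalized to the range [0, 1], so the receiving sketch has to scale them to the projected surface before drawing. A minimal, self-contained version of that mapping (my own helper for illustration, not the installation's actual code) might look like:

```java
// Convert TUIO-style normalized marker positions into pixel coordinates
// on the projected table surface. TUIO messages carry x and y in [0, 1].
final class TableMapper {
    final int widthPx;
    final int heightPx;

    TableMapper(int widthPx, int heightPx) {
        this.widthPx = widthPx;
        this.heightPx = heightPx;
    }

    // Scale a normalized coordinate to the projector's pixel grid.
    int screenX(float normX) { return Math.round(normX * widthPx); }
    int screenY(float normY) { return Math.round(normY * heightPx); }
}
```

In practice the Processing TUIO library offers equivalent conveniences, but the normalization is the key idea: the tracking side never needs to know the projector's resolution.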
A set of glass disks printed with fiducial markers represented five characters from the Imisphyx universe. Readers can select up to three of the disks and place them in slots on the tabletop. For every combination of characters, the reader is presented with a dialogue, stripped of any context. The conversations can be read in any order, repeated and sequenced as the reader tries to figure out what is happening. Ultimately, the goal was to include rotational tracking for each fiducial marker, transforming every character-disk into a knob-like device. By turning the knob, a reader could begin to reveal the internal thoughts of a character during each conversation, their physical actions, position in space, and any number of other details.
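The proposed knob interaction can be illustrated with a small function that buckets a marker's rotation angle into tiers of disclosure. This is a sketch under my own naming and tier choices (TUIO reports rotation in radians, from 0 to 2π), not implemented code from the piece:

```java
// Map a fiducial's rotation angle (radians, 0..2*PI as TUIO reports it)
// onto a "depth of disclosure" tier: the further the character-disk is
// turned, the more of the character's inner life is revealed.
final class DisclosureKnob {
    static final String[] TIERS = {
        "dialogue", // spoken words only
        "actions",  // physical actions and position in space
        "thoughts"  // full internal monologue
    };

    static String tierFor(double angleRadians) {
        double twoPi = 2 * Math.PI;
        // Normalize into [0, 2*PI) in case of accumulated rotation.
        double a = ((angleRadians % twoPi) + twoPi) % twoPi;
        int idx = (int) (a / twoPi * TIERS.length);
        return TIERS[Math.min(idx, TIERS.length - 1)];
    }
}
```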
The TUIO library makes calculating the angle of each marker very simple. However, because of time constraints in the course, I was not able to a) compile and format all of the text to be used, b) write the code for drawing the text to the table, or c) properly address the design challenges that come with dynamically displaying large quantities of text (at less-than-optimal resolution, no less). Nevertheless, given an extended timeline, there is no reason the complete system described above could not be built.
Above: A demo of the TUIO library, showing how each fiducial marker can be individually identified by the computer's camera and translated into digitally drawn objects with unique labels and behavior.
An exhibition of this and other course projects took place at Carnegie Mellon University’s STUDIO for Creative Inquiry on May 2nd, 2013. Both internal and external guests were invited to interact with the projects and give feedback. Comments on the table were generally very positive. Many people enjoyed the aesthetic appeal of the glowing table with its bright colors, and the way the glass disks felt in-hand. I found that the majority of the audience felt motivated to read the text, and really liked the proposed concept of diving deeper and deeper into a character’s psyche, even though that part of the experience is not yet implemented. However, it also became clear that the projected image quality couldn’t support the small font sizes I used, and people spent too much time squinting against the bright light of the projection. There also seemed to be slightly too much text on screen, and people were often uncertain what the different blocks of text meant and how they related to one another.