QuickTime of installation video (9.5MB)

FM

Project Description

FM is an interactive, dual-location video installation. In FM, two separate spaces are joined together by a third space created by the video projections in each room. When a participant first enters one of the installation rooms, they are presented with an image that only hints at the existence of a connected second space: only traces and outlines of movement are visible. A camera in each location determines the participant's position in that space, and the cameras are set up so that there is a one-to-one mapping of locations from one camera to the other. When participants occupy the "same" location in both spaces, a visual communication channel opens up in the video projections, corresponding precisely to the intersection of the two participants' bodies. At first, the intersecting body parts of the two participants are blended together in a straightforward manner, but as the size of the intersection increases, the participants' bodies are warped in time, creating a combined spatio-temporal communication zone.
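
To make the compositing logic concrete, here is a minimal C++ sketch of the per-pixel rule (C++ because that is where the final implementation is headed; the prototype itself is a Jitter patch). All names are illustrative, and the silhouette masks are assumed to come from the segmentation step described under Future.

    #include <cstddef>
    #include <cstdint>

    // Per-pixel compositing sketch (hypothetical names throughout). `a` and
    // `b` are grayscale frames from the two spaces, already aligned by the
    // one-to-one camera mapping; maskA and maskB are 0/255 silhouette masks.
    // Returns the fraction of pixels where the two bodies intersect -- the
    // quantity that drives the temporal warp as the intersection grows.
    float composite(const std::uint8_t* a, const std::uint8_t* b,
                    const std::uint8_t* maskA, const std::uint8_t* maskB,
                    std::uint8_t* out, std::size_t n)
    {
        std::size_t overlap = 0;
        for (std::size_t i = 0; i < n; ++i) {
            if (maskA[i] && maskB[i]) {
                // Inside the shared zone: straightforward blend of the two
                // spaces -- the "visual communication channel".
                out[i] = static_cast<std::uint8_t>((a[i] + b[i]) / 2);
                ++overlap;
            } else {
                // Elsewhere: the local image with only a faint trace of the
                // remote space, hinting that a second room exists.
                out[i] = static_cast<std::uint8_t>(0.9f * a[i] + 0.1f * b[i]);
            }
        }
        return static_cast<float>(overlap) / static_cast<float>(n);
    }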


Work

For this project, I managed production, directing and integrating the various pieces of visual design and software. I created a video filter for Jitter called xray.jit.timecube that allows an arbitrary temporal mapping of a video signal, as well as a Jitter patch that renders a video image as if the scene were viewed through ground glass. I then integrated patches designed by the group for background subtraction and motion detection into the main project patch.
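
Conceptually, an arbitrary temporal mapping keeps a buffer of recent frames and lets a matrix of per-pixel offsets decide which moment in the past each output pixel is drawn from. The following C++ sketch is an illustrative reimplementation of that idea, not the Jitter external itself:

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Ring buffer of the last `depth` frames; a per-pixel offset map chooses
    // how far back in time each output pixel is sampled, so different parts
    // of the image can show different moments.
    class TimeCube {
    public:
        TimeCube(int w, int h, int depth)
            : w_(w), h_(h), depth_(depth), head_(0),
              history_(static_cast<std::size_t>(depth) * w * h, 0) {}

        // Push the newest frame (w*h grayscale pixels) into the ring buffer.
        void push(const std::uint8_t* frame) {
            head_ = (head_ + 1) % depth_;
            std::copy(frame, frame + plane(),
                      history_.begin() + static_cast<std::size_t>(head_) * plane());
        }

        // offsets[i] in [0, depth) selects how many frames back pixel i is
        // sampled from; an all-zero map reproduces the live image.
        void remap(const std::uint8_t* offsets, std::uint8_t* out) const {
            for (std::size_t i = 0; i < plane(); ++i) {
                int slice = (head_ - offsets[i] % depth_ + depth_) % depth_;
                out[i] = history_[static_cast<std::size_t>(slice) * plane() + i];
            }
        }

    private:
        std::size_t plane() const { return static_cast<std::size_t>(w_) * h_; }
        int w_, h_, depth_, head_;
        std::vector<std::uint8_t> history_;
    };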


Future

To take this project the next step from prototype to working installation, the lighting of the installation space and the speed of the video processing software will need to be improved. For the lighting, we need to experiment with different setups and segmentation algorithms to find a balance between algorithmic complexity, segmentation quality, and processing speed. Currently, we are using a simple background subtraction algorithm with thresholding, along with a rudimentary lighting system. The algorithm could be improved by adding basic morphological analysis so that pixels can be said to belong to a shape. This would reduce errors in borderline thresholding cases, where pixels inside a thresholded form are incorrectly counted as background simply because their difference in value from the background falls below the threshold. Instead, each pixel would have a sense of its neighborhood and, based on a combined metric of distance from the background and neighborhood relations, could be classified as part of the shape or not.
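
A rough C++ sketch of this refinement, with illustrative threshold values and a simple 8-neighborhood vote standing in for the combined metric:

    #include <algorithm>
    #include <cstdint>
    #include <cstdlib>
    #include <vector>

    // Background subtraction with thresholding, followed by the neighborhood
    // pass proposed above: a borderline pixel joins the shape when most of
    // its 8-neighborhood is already foreground. Thresholds are illustrative.
    void segment(const std::uint8_t* frame, const std::uint8_t* background,
                 int w, int h, std::uint8_t* mask,
                 int hiThresh = 40, int loThresh = 20, int minNeighbors = 5)
    {
        const std::size_t n = static_cast<std::size_t>(w) * h;
        std::vector<std::uint8_t> diff(n);
        for (std::size_t i = 0; i < n; ++i) {
            diff[i] = static_cast<std::uint8_t>(std::abs(frame[i] - background[i]));
            mask[i] = diff[i] > hiThresh ? 255 : 0; // confident foreground only
        }

        // Second pass: combine distance-from-background with neighborhood
        // relations for the borderline pixels.
        std::vector<std::uint8_t> refined(mask, mask + n);
        for (int y = 1; y < h - 1; ++y) {
            for (int x = 1; x < w - 1; ++x) {
                std::size_t i = static_cast<std::size_t>(y) * w + x;
                if (mask[i] || diff[i] <= loThresh) continue;
                int fg = 0;
                for (int dy = -1; dy <= 1; ++dy)
                    for (int dx = -1; dx <= 1; ++dx)
                        fg += mask[static_cast<std::size_t>(y + dy) * w + (x + dx)] ? 1 : 0;
                if (fg >= minNeighbors) refined[i] = 255;
            }
        }
        std::copy(refined.begin(), refined.end(), mask);
    }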

The prototype for this project was implemented in Jitter. Jitter is essentially a sketching language for video, optimized for maximum flexibility so that video ideas can be tried out and seen quickly. To turn this project into a high-quality installation, the sketches made in Jitter will need to be fleshed out further and then converted to C/C++, where more precise control over data flow and data structures can be exploited to achieve higher video resolution and a faster frame rate.

One idea for extending the visual language of the interaction is to render the spatio-temporal warping as a three-dimensional form rather than just a remapped two-dimensional structure. As the interaction progresses, the three-dimensional form would gain more depth and complexity, providing a larger and more dynamic interactive space for participants.