Waterworld

Created in collaboration with Thitiphong Luangaroonlerd, "Waterworld" was an interactive audio-visual experience in which a user could physically explore the depths of the ocean through an aquarium and a hand-held viewing lens. By sinking the lens into the water and freely moving it about, the installation aimed to bridge the imagery and sounds of life beneath the ocean with the physical sensation of actually being immersed in it.

Depending upon the depth and position of the viewing lens, a different video was unmasked and a varying soundscape was triggered. As the user sank the lens deeper into the aquarium, the video and audio changed accordingly, corresponding to imagery and sounds from deeper beneath the ocean surface.
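For illustration, here is a minimal sketch of how a measured lens depth might be quantized into content zones. The zone boundaries, video filenames, and soundscape names are hypothetical stand-ins, not the installation's actual values:

```cpp
#include <array>
#include <cstddef>

// Hypothetical depth zones; the real installation's boundaries are unknown.
struct Zone {
    float maxDepthMm;        // lens depths up to this value (mm) map to this zone
    const char* videoLayer;  // video to unmask
    const char* soundscape;  // soundscape to trigger
};

constexpr std::array<Zone, 3> kZones{{
    {50.0f,  "surface.mov", "waves"},
    {150.0f, "reef.mov",    "reef-ambience"},
    {300.0f, "abyss.mov",   "deep-drones"},
}};

// Map a measured lens depth to the zone whose video is unmasked
// and whose soundscape is triggered.
std::size_t zoneForDepth(float depthMm) {
    for (std::size_t i = 0; i < kZones.size(); ++i) {
        if (depthMm <= kZones[i].maxDepthMm) return i;
    }
    return kZones.size() - 1;  // deeper than the last boundary: clamp to deepest zone
}
```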

A Kinect camera placed below the aquarium acted as the sensory input device. Waterworld took advantage of the Kinect's unique depth-mapping capabilities to determine the height (i.e., depth) of the viewing lens. openFrameworks (C++) was used as the creative coding platform to read in the sensor data, perform the video computation, manage the projection mapping, and play back the sounds. A big thank you to Zach Lieberman for his OF projection mapping workshop and examples. A Pure Data (PD) patch, interfaced with openFrameworks over OSC, produced the audio soundscape. Finally, speakers and a projector, both placed below the aquarium, served as the output devices for the sound and video.
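A rough sketch of what that sensing pipeline could look like follows, using the standard ofxKinect and ofxOsc addons. The lens-tracking logic (here, naively sampling the center of the depth image), the OSC address, and the port number are illustrative assumptions, not the installation's actual code:

```cpp
// ofApp.h -- minimal sketch of the Kinect-to-OSC pipeline, assuming the
// standard ofxKinect and ofxOsc addons.
#pragma once
#include "ofMain.h"
#include "ofxKinect.h"
#include "ofxOsc.h"

class ofApp : public ofBaseApp {
public:
    void setup() override {
        kinect.init();
        kinect.open();
        osc.setup("localhost", 9000);  // hypothetical: PD patch listening on port 9000
    }

    void update() override {
        kinect.update();
        if (!kinect.isFrameNew()) return;

        // Simplistic stand-in for lens tracking: sample the 640x480 depth map
        // at its center. (The real piece presumably tracked the lens across
        // the whole frame.)
        float distMm = kinect.getDistanceAt(320, 240);
        if (distMm <= 0) return;  // 0 means no valid depth reading at that pixel

        // The Kinect looks up from below the aquarium, so a smaller
        // camera-to-lens distance means the lens has been pushed deeper.
        ofxOscMessage m;
        m.setAddress("/waterworld/depth");  // hypothetical OSC address
        m.addFloatArg(distMm);
        osc.sendMessage(m);
    }

    ofxKinect kinect;
    ofxOscSender osc;
};
```

On the PD side, a matching netreceive/oscparse chain would read the depth values and crossfade the soundscapes accordingly.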

“Waterworld” played on the notion of depth both in the user’s interaction and in the technology used to execute the experience. But by hiding all forms of computation and requiring the user to physically place their arms into the water, the piece aimed to recreate the natural, beautiful experience of freely exploring the depths of the ocean.