Module III with Jitter

[Multimedia, Max] (Duration: 10:48)

December 2023

Module III with Jitter is an interactive audiovisual composition that examines the real-time relationships between motion, brightness, and sound synthesis. The screen is divided into nine regions, each repositioned to form a visually fragmented yet cohesive interface. Four of these regions are mapped to digital sound synthesis and respond directly to live video input from the camera. Grayscale luminance data, extracted with jit.rgb2luma, serves as the core parameter driving the synthesis. Movement and objects in front of the camera, especially those in white or near-white tones, alter the luminance values and evoke specific sonic responses. I use my hands, facial expressions, and a white napkin placed in my mouth as responsive elements, varying their proximity to the camera to produce different digital sounds and timbral variations. An additional layer monitors the entire screen: near-total darkness there triggers distinct audio responses, offering a striking contrast to the more vibrant sounds of brighter scenes.
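For readers less familiar with Jitter, the sketch below (Python with OpenCV) approximates the logic described above: the camera frame is split into a 3x3 grid, four cells drive synthesis parameters, and near-total darkness across the whole frame triggers a separate response. Which four cells feed synthesis, the darkness threshold, and the frequency mapping are illustrative assumptions, not the patch's actual values.

```python
import cv2
import numpy as np

SOUND_REGIONS = [0, 2, 4, 7]   # assumed: which of the 9 cells feed synthesis
DARK_THRESHOLD = 8             # assumed: mean luma below this counts as "dark"

def region_lumas(frame):
    """Return mean luminance (0-255) for each cell of a 3x3 grid."""
    luma = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # analogous to jit.rgb2luma
    h, w = luma.shape
    cells = []
    for row in range(3):
        for col in range(3):
            cell = luma[row*h//3:(row+1)*h//3, col*w//3:(col+1)*w//3]
            cells.append(float(cell.mean()))
    return cells

def synthesis_params(cells):
    """Map the four sound regions' luminance to oscillator frequencies."""
    if np.mean(cells) < DARK_THRESHOLD:
        return {"mode": "dark"}  # distinct audio response for darkness
    # Brighter cells push the (assumed) oscillator frequencies higher.
    freqs = [110.0 + (cells[i] / 255.0) * 880.0 for i in SOUND_REGIONS]
    return {"mode": "bright", "freqs": freqs}

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    print(synthesis_params(region_lumas(frame)))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```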

I use faders to adjust the pixel size of the visuals, balance the audio layers, and control the speed of a tone generator during static passages. This nuanced control shapes a fluid, multisensory experience in which sound and motion converge into an expressive, evolving narrative. The performance highlights the tension between the physical and virtual worlds: actions in the physical environment contrast with the visual effects and sounds presented in the virtual one, and the boundary between them blurs as the pixel size increases.
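A minimal sketch of the fader-driven pixelation, again in Python with OpenCV rather than the Jitter matrix resampling the patch actually uses: a normalized fader value is mapped to a block size, and the frame is downsampled then upsampled with nearest-neighbour interpolation. The block-size range is an assumption for illustration.

```python
import cv2

def pixelate(frame, fader):
    """Pixelate a frame; fader 0.0 = full resolution, 1.0 = coarsest."""
    h, w = frame.shape[:2]
    block = 1 + int(fader * 63)  # assumed range: 1-64 pixel blocks
    small = cv2.resize(frame, (max(1, w // block), max(1, h // block)),
                       interpolation=cv2.INTER_LINEAR)
    # Nearest-neighbour upsampling preserves the hard-edged pixel look.
    return cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)
```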