Feedbørk is an audiovisual performance piece that maps video feedback recursion to algorithmically generated music, spatialized in 8-channel surround sound. It was developed and composed by my good friend Mike Rotondo and me for the Stanford Laptop Orchestra.

The performers each hold an iPad showing a real-time front-facing camera feed. When the two screens and cameras are pointed at each other, visual feedback begins to spiral its way down the screens. The output of one iPad is projected onto a large screen, and the projection itself is also used for feedback at the end of the piece.

Each screen has a white border around it, and computer vision is used to track the depth of the recursion by counting the number of nested borders visible on screen at once. This information is mapped to reverb and delay to simulate audio "depth". Overall screen brightness is also mapped to a warm synth pad. The performers can tap to play chords, pan to play a melody, and multi-finger drag to produce density, stuttering, and half-time effects on the generative drum sequence.
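The depth-to-reverb/delay mapping above can be sketched as a small function. This is an illustrative Python sketch, not the piece's actual mapping: the function name, the depth cap of 8, and the specific curves are all my assumptions here.

```python
def depth_to_fx(border_count, max_depth=8):
    """Map the number of detected screen borders (visual recursion depth)
    to reverb and delay parameters in [0, 1].

    Deeper visual recursion -> more audible "depth".
    max_depth and the curve shapes are illustrative assumptions.
    """
    depth = min(border_count, max_depth) / max_depth  # normalize to [0, 1]
    reverb_mix = 0.1 + 0.8 * depth      # keep a small dry floor even at depth 0
    delay_feedback = 0.6 * depth        # cap feedback well below 1.0 for stability
    return reverb_mix, delay_feedback
```

With a mapping like this, a camera seeing no nested borders stays nearly dry, while a deep feedback spiral pushes the reverb mix toward its maximum.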

The gesture and visual information is routed via OSC to a computer running the ChucK Audio Programming Language. Multiple audio threads run generative synthesizers that interpret the incoming data as sound parameters. In particular, I am proud of the generative drums, which sounded even nicer spatialized over 8 speakers!
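The routing step can be sketched as a small dispatch table: OSC-style messages arrive as an address plus arguments and are forwarded to handlers that update synth parameters, much as the ChucK patch interprets the incoming data. This pure-Python sketch is a simplification; the address names (`/feedbork/...`) and parameter names are hypothetical, not taken from the actual piece.

```python
# Shared parameter state that the (hypothetical) synth threads would read.
synth_params = {"reverb_mix": 0.0, "pad_gain": 0.0, "drum_density": 0.5}

def on_depth(args):
    # Border count from the vision tracker -> reverb amount (capped at depth 8).
    synth_params["reverb_mix"] = min(args[0] / 8.0, 1.0)

def on_brightness(args):
    # Overall screen brightness in [0, 1] -> warm pad level.
    synth_params["pad_gain"] = args[0]

def on_drag(args):
    # Multi-finger drag amount -> drum sequence density.
    synth_params["drum_density"] = args[0]

# OSC-address -> handler routing table (addresses are illustrative).
routes = {
    "/feedbork/depth": on_depth,
    "/feedbork/brightness": on_brightness,
    "/feedbork/drag": on_drag,
}

def dispatch(address, args):
    """Deliver one incoming message to its handler, ignoring unknown addresses."""
    handler = routes.get(address)
    if handler:
        handler(args)

dispatch("/feedbork/depth", [4])
dispatch("/feedbork/brightness", [0.7])
```

In the real setup a ChucK `OscRecv` listener plays the role of `dispatch`, with each generative synth shred reading the parameters it cares about.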

Check out the academic paper on many-person musical instruments.

Check out the code, if you dare.

The Feedbørk iPad sandwich in action. The left performer interacts with the stationary iPad screen while the right performer manipulates the recursion itself by moving the second iPad screen around.

Here my co-composer Mike Rotondo points his iPad at the projector while I tweak a few parameters behind the scenes.

Footage from the 2011 CCRMA Transitions performance of Feedbørk.