TAI Special Interest Group: Balloon Session

03 March 2011

In my regular lecture “Tangible Auditory Interfaces SIG” at Medialab Helsinki yesterday, we decided to build something. One of the students suggested making a site-specific work for the studio space. Since the room is so high (approx. 8 m), her idea was to hang balloons in the room and let them make sound (for sound excerpts of the session, see below).

So she brought balloons and we hung them in the space (for a first iteration, at a height of only approx. 2.5 m).

In a first, very brief session, we listened to the sounds captured by two contact microphones attached to two of the balloons.

After that, we sat down to discuss how we’d like the sonic add-on to sound.

We decided that the balloons should behave in a somewhat social manner: if they are touched, they should sound happy, whereas they’d sound lonely when left alone. We identified several moods that we aimed to integrate into the setup; they are documented on the paper shown above.

Since we only had approximately 45 minutes left, we quickly implemented a very basic patch that captures the balloon sounds and uses their amplitude as an envelope to shape grainy sounds rendered on the eight-channel system. Each speaker had its own frequency range, filling the room with either a very dense or a dry soundscape, depending on the chosen parameters.

This is the implementation:

Ndef(\in, {
    SoundIn.ar([0, 1])
});

Spec.add(\freqFac, [0.1, 2, \exp]);
Spec.add(\wet, [0, 1, \lin]);
Spec.add(\revTime, [0, 5, \lin]);

// create a difference between happy and sad
Ndef(\modifier, { |freqFac = 1, decay = 0.1, wet = 0, revTime = 1|
    var src, dusty, amp, in;

    in = HPF.ar(Ndef(\in).ar, 42);
    in = HPF.ar(in, 50);
    amp = Amplitude.ar(in, 0.001, 4 * decay); // follow the contact-mic amplitude
    dusty = {
        Ringz.ar(
            Decay.ar(Dust.ar(100), 0.01) * BrownNoise.ar,
            ExpRand(1000, 5000) * freqFac, // each channel gets its own frequency
            decay
        )
    } ! 8; // one grainy resonator per speaker
    src = amp * dusty;
    // crossfade between the dry grains and an eight-channel reverb
    SelectX.ar(wet, [src, AdCVerb.ar(src, revTime: revTime, nOuts: 8)])
});
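
To hear the result on the eight-channel system, the proxy can be monitored on the first eight output busses; a minimal sketch (assuming the server is configured for at least eight outputs):

s.options.numOutputBusChannels = 8;
s.reboot;
Ndef(\modifier).play(0, 8); // monitor eight channels, starting at bus 0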

We played around with this setup a bit, trying to come up with happy and sad versions of the sounds. However, this turned out to be hard, so we decided to implement a short mock-up automation that changes the parameters of the balloon sounds independently every now and then.

Ndef(\decay, { LFNoise1.ar(0.125).range(0.04, 0.4) });
Ndef(\freqFac, { LFNoise0.ar(0.125).range(0.01, 2).lag(2) });
Ndef(\wet, { LFNoise1.ar(0.125).range(0, 0.04) });
Ndef(\revTime, { LFNoise0.ar(0.125).range(0, 3).lag(2) });
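
For the automation to take effect, the control proxies still have to be mapped onto the arguments of Ndef(\modifier). A minimal sketch of one way to do this (the name \decayK is hypothetical; since map expects matching rates, a control-rate version of the noise is used here):

Ndef(\decayK, { LFNoise1.kr(0.125).range(0.04, 0.4) });
Ndef(\modifier).map(\decay, Ndef(\decayK));
// ... and likewise for \freqFac, \wet, and \revTime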

These are recording excerpts from that session:

After two hours of intensive work, we were very tired but also quite happy to see how far we could get in such a short amount of time. Of course, we did not reach the intended goal at all, and I have to admit that we also lost sight of the original idea of creating a site-specific work; however, we gained experience in rapid prototyping, gathered lots of ideas on how to proceed, and, last but not least, had a heck of a lot of fun.

Ideas for further development include:

  • use visual sensors to capture movement (in order to differentiate between happy and sad)
  • experiment with other microphone types, including small mics inside the balloons
  • more balloons!
  • hang the balloons higher, since the wind-chime type of installation did not work at the current height
  • experiment with a combination of hanging and floating balloons
  • add grains or water to the balloons to alter their sounds

If you have more ideas on how to proceed, please write them into the comments!

The photos were taken by us.