Fielding
2017 by Till Bovermann
Fielding invites (non-)human beings to shape and form an artificial soundscape. It consists of semi-autonomous Nodes that are wirelessly interconnected and share a common acoustic basis.
- Non-human performers influence Fielding by means of environmental sensors (temperature, moisture, light) built into the nodes. The sensed values determine the current states of parameters implemented in the node's sound sources and filters. Their influence on the parameters is determined by external factors (see below).
- Human performers shape Fielding's sonic qualities by forming sentences from a sonic alphabet. This structural information is sent to all nodes, whereas the sensory information coming from the plants remains local.
Fielding’s structural foundation is based on an alphabet, a finite set of sonic elements.
Each element is represented by a single character of the set [ e, a, u, o, i, t, f, l, g, n, d, 0, 1, 2, 3 ].
```
// four parallel synthesis graphs
-- 0nd1dd1 1bn a0n 0aldlldld1

// variation
-- 0nd1dgd1 1bgn a0gn 0aldlggldld1
```
In this configuration, the digits `0`, `1`, `2`, and `3` are variables into which sound can be fed and from which it can be read. `d` stands for a short, time-dynamic delay line.
Implementation
Fielding is implemented in SuperCollider and based on Systems ∿ Encounter. While its sound synthesis is built on the same principles as Systems ∿ Encounter, its interaction with the environment is different.
Two types of nodes are implemented:
- Sensor nodes react sonically to their environment by means of sensors that measure temperature, humidity, and light. The data is used to modulate the sound synthesis parameters of the node. Each sensor node is based on a Bela board and runs a SuperCollider server.
- The Control node is operated by a human performer. It sends structural information to the sensor nodes and thus shapes the overall sound of the system. The control node runs on a laptop with SuperCollider and Systems ∿ Encounter installed.
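The sensor-to-parameter modulation on a sensor node could be sketched roughly as follows. This is a minimal Python illustration, not Fielding's actual SuperCollider code; the function name `linlin` (borrowed from SuperCollider's scaling method), the sensor ranges, and the parameter names are all assumptions.

```python
# Hypothetical sketch: scaling raw sensor readings into
# synthesis-parameter ranges, as a sensor node might do.
# All names and ranges here are illustrative assumptions.

def linlin(x, in_min, in_max, out_min, out_max):
    """Linearly map x from [in_min, in_max] to [out_min, out_max], clipped."""
    x = max(in_min, min(in_max, x))
    return out_min + (x - in_min) * (out_max - out_min) / (in_max - in_min)

def sensor_to_params(temperature_c, humidity_pct, light_raw):
    """Map the three sensor readings onto illustrative synthesis parameters."""
    return {
        "filter_cutoff_hz": linlin(light_raw, 0, 1023, 200.0, 8000.0),
        "grain_density":    linlin(humidity_pct, 0, 100, 1.0, 40.0),
        "delay_feedback":   linlin(temperature_c, 5, 35, 0.1, 0.9),
    }
```

On the actual hardware, values like these would be sent to the node's SuperCollider server as control-rate parameter updates.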
Aesthetic Possibility Space
Vowels always represent sound sources, whereas consonants are always filters.
```
// example configuration
$a -> \grains    $f -> \waveloss    $d -> \delay
$e -> \sine      $l -> \allpass     $0 -> \variable
$i -> \noise     $g -> \rLowPass    $1 -> \variable
$o -> \plnk      $n -> \highPass    $2 -> \variable
$u -> \mi        $t -> \chopper     $3 -> \variable
```
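Decoding a sentence under this configuration can be sketched as follows. The role rule (vowels are sources, consonants filters, digits variables) comes from the text above; the Python dictionary simply mirrors the example configuration, and the function name is hypothetical.

```python
# Hypothetical sketch: decoding one word of a Fielding sentence
# into (character, unit, role) triples using the example
# configuration above. Not the actual Steno/SuperCollider parser.

CONFIG = {
    "a": "grains", "e": "sine", "i": "noise", "o": "plnk", "u": "mi",
    "f": "waveloss", "l": "allpass", "g": "rLowPass",
    "n": "highPass", "t": "chopper", "d": "delay",
    "0": "variable", "1": "variable", "2": "variable", "3": "variable",
}
VOWELS = set("aeiou")

def decode(word):
    """Return (char, unit, role) triples for one synthesis graph."""
    out = []
    for ch in word:
        role = ("variable" if ch.isdigit()
                else "source" if ch in VOWELS
                else "filter")
        out.append((ch, CONFIG[ch], role))
    return out
```

For example, `decode("a0n")` would describe a grain source feeding variable `0`, followed by a high-pass filter.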
The actual shaping of the sonic elements is linked to external factors such as the date of the performance, the moon phase, and other planetary configurations. Their momentary configuration determines which sound patterns are mapped to the characters.
Credits
Alongside lots of dedication, Fielding is built on the shoulders of such great projects as Steno, Bela, BeagleBoard, and SuperCollider.
- Affiliation: Composition fellow at the Institut für Elektronische Musik und Akustik - IEM in Graz, Austria, and an artist residency at iii, The Hague.