Sonification Contest

Call

ICAD15 Sonification Contest - ICAD in Space

For the ICAD15 sonification contest, we have teamed up with the Graz-based Space Research Institute (Institut für Weltraumforschung, IWF). The IWF has provided us with two sets of data, and you are invited to submit a sonification during the conference, using either of these data sets.

Terms of the Contest

  • pick one of the two scenarios / data sets

  • you are free to choose the approach and tools used to create your sonification

  • your end product must be a stereo sound file (AIFF or WAVE, 16-bit, 44.1 kHz) lasting no longer than 3:00 minutes

  • you must provide a plain text file of no less than 150 and no more than 250 words that explains how the sonification was implemented

  • you must upload sound and text file via the conference's ConfTool page no later than Friday July 10th, 17:00 CET. Please use the following link: https://www.conftool.net/icad2015

  • a jury formed by the IEM and the IWF will pick the winner of the competition and announce the winner at the ICAD dinner on Friday evening, where the best entries will also be presented to the audience.

  • each conference participant may submit only one entry. Members of the jury are excluded from the contest. The vote of the jury is final.

  • prize money of 200 EUR will be awarded

  • please notify Hanns Holger Rutz at rutz(at)iem.at about your intent to participate in the contest, so we can generate an early estimate of the number of submissions! If you have any questions about the terms or the data, please do not hesitate to contact us at this e-mail address.

Information on the Data

A dedicated area of the IWF's research is the Earth's magnetosphere. The magnetic field around the Earth changes dynamically over time and is exposed to the so-called solar wind, a plasma stream from the Sun, leading to a characteristic structure. On the side of the Earth that faces the Sun, a “bow shock” forms the outer boundary of the magnetosphere; there the solar wind is slowed from super- to subsonic velocities. Closer to Earth we find the “magnetopause,” the boundary where the pressure of the solar wind equals the pressure of the Earth's magnetic field. On the side of the Earth facing away from the Sun, there is a “magnetotail,” an elongated magnetic field structure created by the interaction of the solar wind with the Earth's magnetic field. The following figure illustrates this structure. For more details, see https://en.wikipedia.org/wiki/Magnetosphere.
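The magnetopause condition mentioned above is a pressure balance: the solar wind's dynamic pressure ρv² equals the magnetic pressure B²/2μ₀. As a rough back-of-the-envelope sketch (all numerical values here are typical textbook figures, not taken from the contest data):

```python
import math

# Typical solar wind and geomagnetic parameters (illustrative values only)
MU0 = 4 * math.pi * 1e-7     # vacuum permeability, T*m/A
M_P = 1.67e-27               # proton mass, kg
B0 = 3.1e-5                  # equatorial surface field of the Earth, T

def standoff_distance_re(n, v):
    """Distance (in Earth radii) where the solar wind dynamic pressure
    rho*v^2 balances the dipole magnetic pressure B(r)^2 / (2*mu0),
    with B(r) = B0 * (R_E / r)^3."""
    rho = n * M_P
    return (B0**2 / (2 * MU0 * rho * v**2)) ** (1 / 6)

# n = 6e6 protons/m^3, v = 400 km/s: a quiet-time solar wind
r = standoff_distance_re(6e6, 4e5)   # roughly 8-10 Earth radii
```

This is of course only the static balance; the contest data shows how dynamic the real boundary is.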

The magnetosphere is observed through different missions that send groups of satellites into space, equipped with sensors that register the magnetic field. For each satellite, we have a time series of the magnetic field: a vector with the three spatial components X, Y, Z, and the magnitude, given in nano-Tesla (nT). For the ICAD15 contest, the IWF provides us with data from two scenarios:
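Given the three components, the magnitude follows directly; a minimal sketch (the sample values are hypothetical):

```python
import math

# Hypothetical two-sample excerpt of one satellite's field vector, in nT
bx, by, bz = [12.4, -3.1], [0.8, 5.6], [-7.2, 1.9]

def magnitude(bx, by, bz):
    """|B| = sqrt(Bx^2 + By^2 + Bz^2), computed per sample."""
    return [math.sqrt(x*x + y*y + z*z) for x, y, z in zip(bx, by, bz)]

bmag = magnitude(bx, by, bz)  # in nT, same length as the input series
```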

  1. ESA's Cluster II mission, consisting of four satellites (four time series of sensor data). Their trajectory is indicated by the red ellipse in the illustration above. The selected time window is 2007-10-27, 06:00:00 UTC until 2007-10-29, 00:00:00 UTC. Here, the magnetic field is relatively calm below the magnetopause, until the satellites move towards the centre (at around 10 o'clock). The following plot shows this data: 


  2. NASA's Time History of Events and Macroscale Interactions during Substorms (THEMIS) mission, consisting of five satellites. Of these, data is provided from three satellites that fly across the front of the magnetosphere. Here, the magnetic field can be interpreted as “strings” that are “played” by the solar wind, setting the magnetopause into oscillation while displacing it outwards and inwards.

 

The data is provided in the form of WAV audio files accompanied by a short meta-data text file that outlines the resolution and time frame of the data. You can download these files here:

sonif_contest_data.zip
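Since the data comes as standard WAV files, any audio library can read it. The following stdlib sketch round-trips 16-bit samples through an in-memory WAV standing in for one of the data files; the actual filenames and the scaling back to physical units are given in the accompanying meta-data file, so none are assumed here:

```python
import io
import struct
import wave

# Write a tiny 16-bit mono WAV in memory, standing in for a data file
raw = [0, 1000, -1000, 32767, -32768]
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)          # 16-bit samples
    w.setframerate(44100)
    w.writeframes(struct.pack("<%dh" % len(raw), *raw))

# Read it back as signed integers; rescaling to nT would use the
# factors documented in the meta-data text file
buf.seek(0)
with wave.open(buf, "rb") as w:
    frames = w.readframes(w.getnframes())
    samples = list(struct.unpack("<%dh" % (len(frames) // 2), frames))
```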

Literature

  1. (Cluster Mission) R. Nakamura, A. Retinò, W. Baumjohann, M. Volwerk, N. Erkaev, B. Klecker, E. A. Lucek, I. Dandouras, M. André , and Y. Khotyaintsev, “Evolution of dipolarization in the near-Earth current sheet induced by Earthward rapid flux transport”, in: Ann. Geophys., Vol. 27, pp. 1743–1754, 2009; http://doi.org/10.5194/angeo-27-1743-2009

  2. (THEMIS Mission) F. Plaschke, V. Angelopoulos, and K.-H. Glassmeier, “Magnetopause surface waves: THEMIS observations compared to MHD theory”, in: Journal of Geophysical Research: Space Physics, Vol. 118, pp. 1483–1499, 2013; http://doi.org/10.1002/jgra.50147

Partners

Contributions

WINNER: Andrés Pérez-López

MAGNETIC SPACE is an artistic approximation to the Cluster magnetic data. The sonification is produced entirely in SuperCollider, with some post-processing in Ardour.

The predominant sound comes from the magnetic data. The absolute value of the measured magnetic field, as well as its first derivative, is mapped to a dusty, wind-like sound reminiscent of the solar wind.
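The mapping described, |B| and its first derivative driving a noise texture, can be sketched as two normalized control signals (the normalization ranges are my own assumption; the actual synthesis is done in SuperCollider):

```python
def controls_from_field(bmag):
    """Turn a field-magnitude series into two [0, 1] control signals:
    level from |B| itself, brightness from the absolute value of its
    first derivative, as in a |B| / d|B| parameter mapping."""
    diff = [b - a for a, b in zip(bmag, bmag[1:])]

    def normalize(xs):
        lo, hi = min(xs), max(xs)
        span = (hi - lo) or 1.0
        return [(x - lo) / span for x in xs]

    return normalize(bmag[1:]), normalize([abs(d) for d in diff])

# Hypothetical magnitude excerpt, in nT
level, brightness = controls_from_field([10.0, 12.5, 11.0, 30.0, 29.5])
```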

There is a second, blip-type sound. Extrapolating the concept of the observer's importance and implication in the measurement, this sound conveys the positions of the four satellites by locating each satellite's sound representation in the virtual acoustic space. The x-axis is arbitrarily chosen to point in front of the listener, and the z-axis points vertically upward. The sound is actually produced for binaural listening (CIPIC subject number 65), but stereo panning will probably also work. Spatialization is provided by 3Dj, the live sound spatialization framework I'm presenting at the conference (http://github.com/andresperezlopez/3Dj).

Download soundfile: MP3

Natasha Barrett

Sonification of ‘Cluster’ dataset. 

Charging the Magnetosphere with Crazy Sounds

I ignored the satellite motion path and focused on B xyzt for C1, C2, C3, C4. My sonification reveals features that are less clear in the graphs. I use parameter mapping in 3D, rendered binaurally. A stereo version for loudspeakers is provided. 

The data was decimated to 1/20 and sonified using the following selection of ‘Cheddar's’ features: 

Sound:

A short sound is triggered for each data point in a granular-style sonification where each grain is uniquely controlled by the data. Each satellite is allocated its own sound (a bee, a balloon hit, a rock crack, a metal scrape). 

Parameter mapping transformations:

- T: mapped to amplitude, grain size and grain window shape.

- Z: mapped to pitch shift.

- XYZ determines spatial location in the sonification. The input spatial data is scaled to an audio spatial world of 10x10 metres with distance-dependent amplitude and frequency attenuation. After tests I removed Z, as it was less clear binaurally and you won't hear it in stereo anyway. This also served to enhance the XY plane. 

The virtual listening location was placed in the centre of the motion activity that occurs halfway through the data, for the following reason: the opening has a clear burst of energy, but by placing the listening location away from this point we hear interesting turbulence and circular activity as the spatial information approaches and departs. It's also a nice musical-structural feature ☺
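The grain scheme above, decimate to 1/20 and then map each remaining data point to grain parameters, can be sketched as follows (the mapping ranges and the helper names are illustrative guesses, not the actual ‘Cheddar’ settings):

```python
def decimate(xs, factor=20):
    """Keep every 20th sample, as in the 1/20 decimation."""
    return xs[::factor]

def grain_params(t, z, t_range, z_range):
    """One grain per data point: total field T drives amplitude,
    Z drives pitch shift over a hypothetical +/- 6 semitone range."""
    amp = (t - t_range[0]) / (t_range[1] - t_range[0])            # 0..1
    semitones = 12 * (z - z_range[0]) / (z_range[1] - z_range[0]) - 6
    return {"amp": amp, "pitch_ratio": 2 ** (semitones / 12)}

series_t = decimate(list(range(100)))        # stand-in for a T series
grains = [grain_params(t, z=0.0, t_range=(0, 99), z_range=(-50, 50))
          for t in series_t]
```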

Download soundfile: MP3

Alberto Decampo / Hannes Hoelzl

The Scrubbifyer is intended for exploring multichannel time series.

Intended use is roughly like this:
1. pick a time location in the file,
2. pick a range of context,
3. zigzag-scrub the data range as audio at or near 48 kHz,
4. use it as ampmod signals on multiple time scales, e.g. slowed down by 4, 16, 64, etc.,
5. explore interactively by controlling center, range, modDepth, numMods, timeScales,
6. store interesting settings as presets.
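The zigzag scrub of step 3 is essentially a phasor folded back into a window around the chosen centre; a small Python sketch of the resulting index pattern (the SuperCollider code below does the equivalent with Phasor and .fold):

```python
def zigzag_indices(center, half_range, rate, n):
    """Generate n buffer read indices that sweep back and forth
    (triangle-wise) within [center - half_range, center + half_range]."""
    lo, hi = center - half_range, center + half_range
    span = hi - lo
    out, pos = [], 0.0
    for _ in range(n):
        folded = pos % (2 * span)
        # reflect the second half of each cycle to scrub backwards
        out.append(lo + (folded if folded <= span else 2 * span - folded))
        pos += rate
    return out

idx = zigzag_indices(center=1000, half_range=50, rate=10, n=30)
```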


The little demo file is not a piece; it just shows some parameter sweeps:
1. scrub through the file from beginning to end,
2. change the range around the location from small to large,
3. open the modulation: the lowest pitch/rhythm drops by ratios of 1:5, 1:25, 1:125,
4. raise the playrate by a factor of 5, so the same pitch as before emerges.



/// later:
possible app interface:
* show its data graph - soundfileview, with time scales
* playback button
* file select button,
    loading x y z mag file loads the other 3
multitouch interface:
x = timeLoc, y = timeRange, press = amplitude
multiple fingers could play multiple areas,
or use x y to change secondary params (timeScale, modDepth,etc)





// code

~dataSR = 22; // real sampling rate
~numChans = 2; // stereo for now

~bufxyThemis = Buffer.read(s, "data/thc_fgs_gse_B_xy.aiff".resolveRelative);

~plotter = ~bufxyThemis.plot;
~plotter.interactionView.mouseDownAction = { "yo".postln };
~plotter.interactionView.mouseMoveAction = { |uv, x, y|
    var width = uv.bounds.width - 32;
    var normx = x - 16 / width;
    Ndef(\scrub).set(\location, normx);
    uv.refresh;
};
MFdef(\draw).add(\locRange, { |uv, x, y|
    var width = uv.bounds.width - 32;
    var normx = x - 16 / width;
    [x, y].postln;
    Pen.line(x@0, x@uv.bounds.height);
});
~plotter.interactionView.drawFunc.addFunc(MFdef(\draw));

~bufxyThemis.play;

Ndef(\scrub).clear;
~numChans = 2;
Ndef(\scrub).ar(~numChans);

Spec.add(\buf, [0, 20, \lin, 1]);
Spec.add(\location, [0, 1, \lin]);
Spec.add(\range, [0, 0.1, 4]);
Spec.add(\playRate, [0.2, 5, \exp]);
Spec.add(\modDepth, [0, 4, \amp]);
Spec.add(\timeScale, [2, 16, \exp]);
Spec.add(\numMods, [0, 3, \lin]);
Spec.add(\ampComp, [1, 100, \exp]);
Spec.add(\leakCoef, [1, 0.9, 5]);

s.scope;

Ndef(\scrub).gui.skipjack.dt = 0.05;

(
Ndef(\scrub, { |buf = 0, location = 0.05, range = 0.01, playRate = 1,
        modDepth = 1, timeScale = 5, numMods = 0, ampComp = 1, leakCoef = 0.9995|

    var lag = 0.5;
    var bufFrames = BufFrames.kr(buf);
    var centerFrame = location.lag(lag) * bufFrames;
    var frameRange = range.lag(lag) * bufFrames;
    var foldRange = frameRange * [-0.5, 0.5];
    var timeScales = (timeScale ** (0..-3));
    var playRates = playRate.lag(lag) * timeScales;

    // foldedPhasor / bufFrames;
    var phasors = Phasor.ar(0, playRates,
        centerFrame - frameRange,
        centerFrame + frameRange
    ).fold(*(centerFrame + foldRange));

    // make the sounds and modulators
    var sndAndMods = BufRd.ar(~numChans, buf, phasors, 1, 4);
    var sndAndModsLeaked = LeakDC.ar(sndAndMods,
        leakCoef ** (playRates.reciprocal));
    var snd = sndAndModsLeaked[0];
    var mods = sndAndModsLeaked.drop(1) * ampComp.lag(lag);
//    mods = Normalizer.ar(mods, 1, );

    // turn the deeper modulators down
    mods = mods * (numMods - (0..3)).clip(0, 1);
    mods = (mods * modDepth + 1).product;
    mods[0].poll;
    snd * mods;
}).play;
)

fork {
    Ndef(\scrub).play(fadeTime: 2);
    Ndef(\scrub).resetNodeMap;
    2.wait;
    (50..950).do { |i| Ndef(\scrub).set(\location, i / 1000); 0.01.wait };
    1.wait;

    Ndef(\scrub).resetNodeMap;
    2.wait;
    (50..950).do { |i| Ndef(\scrub).setUni(\range, i / 1000); 0.01.wait };
    1.wait;

    Ndef(\scrub).resetNodeMap.set(\ampComp, 30);
    2.wait;
    (0..1000).do { |i| Ndef(\scrub).setUni(\numMods, i / 1000); 0.02.wait };
    5.wait;

    2.wait;
    (500..1000).do { |i| Ndef(\scrub).setUni(\playRate, i / 1000); 0.02.wait };
    2.wait;
    Ndef(\scrub).end(3);
};

s.makeWindow;

 

Download soundfile: MP3

Robert Höldrich

The absolute magnetic field values of four THEMIS data streams are sonified using Augmented Audification (i.e. frequency shifting, aka single-sideband modulation, whereby an instantaneous deviation of the modulation frequency from its nominal value is exponentially controlled by the signal to be sonified, yielding a pitch modulation).
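Frequency shifting via single-sideband modulation can be sketched with an analytic signal: zero the negative frequencies, multiply by a complex exponential, take the real part. A minimal Python version of the plain shift (the data-driven pitch modulation described above would additionally make the shift frequency time-varying):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT: zero the negative-frequency half."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    h[1:(n + 1) // 2] = 2
    if n % 2 == 0:
        h[n // 2] = 1
    return np.fft.ifft(spec * h)

def frequency_shift(x, f_shift, sr):
    """Single-sideband modulation: every spectral component of x
    moves up by f_shift Hz."""
    t = np.arange(len(x)) / sr
    return np.real(analytic_signal(x) * np.exp(2j * np.pi * f_shift * t))

sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)         # 440 Hz test tone
shifted = frequency_shift(tone, 500, sr)   # energy now centred at 940 Hz
```

Note that, unlike pitch shifting, this destroys harmonic relationships, which is part of the characteristic audification sound.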

Dropouts and low-frequency components were first removed from the raw signal values. The sonifications of one run of the four satellite signals are superimposed in a time-synchronized way, yielding a total duration of 38 seconds. Event 1 starts after approx. 16 seconds.

The nominal frequency-shift values of the earth-orbiting satellites A and D are 500 Hz and 1000 Hz; the frequency-shift values of the moon-orbiting satellites B and C are fixed at 2 kHz and 4 kHz. The amount of complementary pitch modulation is proportional to the satellites' distances from the earth.

The listener is positioned between sun and earth, looking towards the earth and perceiving the satellites' movements as frontal revolving trajectories, including variations in loudness, stereo panning and Doppler shift. Satellites A and D revolve in opposite directions (contradicting the real movement). 

The sonification is embedded in a very soft early-morning summer soundscape with birds starting to chirp, where the twitter density is modulated by the satellites' distance to the sun.

Download soundfile: MP3

PerMagnus Lindborg

This sonification of the Cluster data is made by a hack of the "Locust Wrath" sonification system. The mappings are:
t -> attack
x -> compression, ringing
y -> tessitura, partial spread, detune, presence
z -> vibrato speed, vibrato depth
each coupling with its own (very quickly tuned) transfer function.

I don't think the binaural mapping works as I would have liked, but this seems to be due to the extremely slow spatial motion of the satellites; the excerpt is only a short "window" into the data, from around sample 55000 to 58000 (quite arbitrarily chosen).

It's a thrilling exercise to take on this challenge, but to be very honest it's no more than a hack that I made this afternoon, so please do not judge my humble results too harshly!


Download soundfile: MP3

Alexander S. Treiber

The sonification uses the magnitude of the magnetic field strength of all four satellites of the Cluster II experiment. From the position data of the four satellites, an idealized „central“ orbit has been calculated. The four satellites are sonified as four sources of noise which are moved in the stereo panorama according to their distance relative to this central orbit.

The pink noise emitted by the satellites is band-pass filtered according to a low-pass filtered signal of the magnitude of the magnetic field. The amplitude of the noise signal is set according to a low-pass filtered signal derived from the change of the magnitude of the magnetic field.
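The two control paths, a low-passed |B| for the filter and a low-passed |d|B|| for the amplitude, can be sketched with a simple one-pole smoother (the coefficient is illustrative; the actual processing was done in GNU Octave and SuperCollider):

```python
def one_pole_lowpass(xs, coef=0.99):
    """One-pole smoother: y[n] = coef * y[n-1] + (1 - coef) * x[n]."""
    y, out = xs[0], []
    for x in xs:
        y = coef * y + (1 - coef) * x
        out.append(y)
    return out

def noise_controls(bmag):
    """Filter centre follows smoothed |B|; amplitude follows the
    smoothed magnitude of its first derivative."""
    centre = one_pole_lowpass(bmag)
    diff = [abs(b - a) for a, b in zip(bmag, bmag[1:])]
    amp = one_pole_lowpass(diff) if diff else []
    return centre, amp

# Hypothetical magnitude excerpt, in nT
centre, amp = noise_controls([10.0, 11.0, 50.0, 49.0, 48.5])
```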

Furthermore, there is a background drone which is pitched according to the change in altitude over the earth. At the beginning of the sonification the satellite cluster is rising; later on, when the cluster reaches the apogee, the sound drops below the audible range, becoming audible again when the satellites begin to descend.

The calculation of the central orbit and the deviations was done in GNU Octave, the main sonification in SuperCollider, and some finishing touches in Audacity, all on Ubuntu 14.04, so everything was done in an open-source environment.

Download soundfile: MP3