Does time exist?
Although it may sound like a deep metaphysical question, when you’re doing data sonification, there’s a very practical reason for asking, “Does time exist?” or, at the very least, “Are these data points a function of time?”
In the case of the Atlas data, the answer is, “not really.” The data set contains over 10,000 independent “events,” one per line, that are not necessarily arranged in time order. That’s why, in the first set of experiments, I tried to leave time out of the picture and instead create a kind of sonic histogram for each parameter. But almost immediately, I wanted to hear how the histogram changed with GeV, so it quickly became an animated sonic histogram that changed over time with GeV value. So those experiments did end up depending on time after all. It turns out that it’s nearly impossible to leave time out of the picture when you’re mapping data to sound. Sound is not a static image that you can scan with your eyes; sound is an air pressure wave that can only be perceived over time.
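The binned approach can be sketched in a few lines of Python. This is only an illustration of the idea, not the code used for these examples; the bin count, base frequency, harmonic layout, and input values are all made up:

```python
def sonic_histogram(values, n_bins, base_freq=110.0):
    """Bin a parameter's values and map each bin's count to the
    amplitude of one partial in an additive (complex) timbre."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0
    counts = [0] * n_bins
    for v in values:
        i = min(int((v - lo) / width), n_bins - 1)  # clamp the max value into the last bin
        counts[i] += 1
    peak = max(counts)
    # Partial k of the timbre: frequency is a harmonic of base_freq,
    # amplitude is that bin's event count, normalized to the fullest bin.
    return [(base_freq * (k + 1), counts[k] / peak) for k in range(n_bins)]

# Made-up parameter values, just to show the shape of the output.
partials = sonic_histogram([1.0, 1.2, 1.3, 2.9, 3.0], n_bins=4)
```

To animate the histogram over GeV, you would recompute `partials` for each successive window of events and crossfade between the resulting timbres.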
Some kinds of data sets are time-dependent, but in the case of the Atlas data, the decision of which parameter to map to time is arbitrary…and yet the parameter you select has a huge impact on the sound that you hear.
To continue in the same vein as the previous examples, let’s start by using the value of GeV as time. But in these next examples, instead of gathering the values of a single parameter into bins and listening to all of those bins as the partials of a complex timbre, let’s listen to all five parameters of one event at a time.
GeV as time
For example, since the Higgs-like particle is said to have a resonance at 125 GeV, let’s listen to the interval from 120 GeV to 130 GeV, where the time t in seconds is:
t = (gev – gevMin) * 10
Each event is assigned two oscillators, and the frequency of each oscillator corresponds to the transverse momentum of one of the two gamma particles. Each oscillator has a decaying exponential amplitude envelope, and the duration of the envelope is the inverse of its frequency (scaled by 1024 in this example). The stereo position and the frequency of each oscillator change over the course of an event; the magnitude of that change is controlled by delR. The amplitude of each oscillator is controlled by pTt.
Visually, the magic GeV of 125 is about halfway through the file, or around 50 seconds, so you can try to listen for anything that seems unique during that interval of time.
| Sound Parameter | LHC Event Parameter |
| --- | --- |
| Start time | GeV (invariant mass of gamma-gamma) |
| Frequency (one oscillator per gamma) | g1pt, g2pt (transverse momenta) |
| Amplitude | pTt |
| Magnitude of change from center of stereo field | delta R |
| Change in pitch (downward direction) | delta R |
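Putting the mapping together, a single event might be turned into two oscillator “notes” along these lines. This is a sketch, not the actual synthesis code: the field names (gev, g1pt, g2pt, pTt, delR) follow the text, but the momentum-to-Hz scaling is an invented placeholder:

```python
def event_to_notes(gev, g1pt, g2pt, pTt, delR, gev_min=120.0, dur_scale=1024.0):
    """Map one LHC event to two oscillator 'notes': start time from GeV,
    one frequency per gamma, duration = inverse frequency (scaled),
    amplitude from pTt, pan/pitch-change magnitude from delR."""
    start = (gev - gev_min) * 10.0          # t = (gev - gevMin) * 10
    notes = []
    for pt in (g1pt, g2pt):
        freq = pt / 100.0                   # placeholder momentum-to-Hz scaling
        notes.append({
            "start": start,
            "freq": freq,
            "dur": dur_scale / freq,        # inverse of frequency, scaled by 1024
            "amp": pTt,                     # amplitude from pTt
            "glide": delR,                  # pan and pitch-change magnitude
        })
    return notes

# One hypothetical event at the "magic" 125 GeV.
notes = event_to_notes(gev=125.0, g1pt=62000.0, g2pt=43000.0, pTt=50.0, delR=1.2)
```

An event at 125 GeV lands at 50 seconds, halfway through the 100-second rendering of the 120–130 GeV interval.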
Once we’ve established a mapping, there are a lot of ways to listen to the same data. For example, we could shorten the durations (scaling by 32 instead of 1024):
Or we could speculate that the Higgs events might be more likely to be those events where pTt is greater than 40. Let’s slice the sound up into the events where pTt is less than 40:
And compare the sound to a slice of events where pTt is between 40 and 222:
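The slicing itself is just a filter on pTt. A minimal sketch, assuming each event is a dict with a pTt field:

```python
def ptt_slice(events, lo, hi):
    """Keep only the events whose pTt falls in [lo, hi)."""
    return [e for e in events if lo <= e["pTt"] < hi]

# Made-up events, just to show the two slices from the text.
events = [{"pTt": 12.0}, {"pTt": 41.5}, {"pTt": 39.9}, {"pTt": 220.0}]
low  = ptt_slice(events, 0, 40)     # the "pTt less than 40" slice
high = ptt_slice(events, 40, 222)   # the "pTt between 40 and 222" slice
```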
Are there other pTt slices that might be significant or interesting to listen to?
Or we could listen to all of the events, but “underline” only those events that meet certain criteria by giving them a longer duration. For example, in the following sound, if the sum of the gammas is between 166k and 222k and the pTt value is larger than 115, we scale the event duration by 2048. Otherwise, the event duration is scaled by 8:
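The underlining rule can be written as a small function; the thresholds come straight from the text, while everything else about the events is assumed:

```python
def duration_scale(gamma_sum, pTt):
    """'Underline' events meeting the criteria with a long duration
    scale (2048); every other event gets a short one (8)."""
    if 166_000 <= gamma_sum <= 222_000 and pTt > 115:
        return 2048
    return 8
```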
What other criteria might be interesting to try with this “underlining” technique?
As you can hear from these examples so far, there are two phases to data sonification. The first one is exploratory. You try lots of different mappings and, based on what you hear, what you know, and what you’d like to know, you modify those mappings or throw them away and try something completely different to see what you can see (or hear what you can hear). That may also explain why sonification is so often a team sport. A simple mapping of data to audio parameters might reveal some structure; but knowing what is or is not significant, and deciding what might be interesting to try next works best as an interactive, collaborative process.
Once you find a mapping that reveals a significant structure, you can re-use it in a different context, one which is less exploratory and more communicative. If you find a particularly powerful sonification, you can use the sound to communicate your findings to other people, just as you might publish a particularly compelling 2D graph to show something significant about your data.
delR as time
What if we map the value of delR to time in seconds? This next example uses (delR – 0.466292) * 20 as the time in seconds, GeV as the amplitude, g1pt and g2pt as the two frequencies, and pTt as the magnitude of the pan. The range of GeV is 100 to 150:
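As a sketch, the delR-as-time mapping might look like this; the constant 0.466292 is the minimum delR given in the text, while the field names and the momentum-to-Hz scaling are assumptions:

```python
DELR_MIN = 0.466292  # smallest delR in the data set, per the text

def delr_event(delR, gev, g1pt, g2pt, pTt):
    """delR-as-time mapping: start time from delR, amplitude from GeV,
    the two gamma momenta as frequencies, pan magnitude from pTt."""
    return {
        "start": (delR - DELR_MIN) * 20.0,   # t = (delR - 0.466292) * 20
        "amp": gev,                          # GeV (range 100-150) as amplitude
        "freqs": (g1pt / 100.0, g2pt / 100.0),  # placeholder Hz scaling
        "pan": pTt,                          # pTt as pan magnitude
    }

note = delr_event(delR=1.466292, gev=125.0, g1pt=50000.0, g2pt=30000.0, pTt=60.0)
```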
As you can hear, selecting a different parameter for time has completely changed the sound! You can hear that events with small delR values are quite rare and that they tend to be high in pitch (large gamma values).
This sonification tells us something about the likelihood of certain kinds of events with respect to where the particles were detected after the collision. Over time, with increasing delR, you hear more and more events and you start to hear lower frequencies (smaller gamma values). As delR continues to increase, things start to get very dense, with a wide range of gamma values. And, towards the end, for the largest delR values, the pitches start to generally decrease as the number of events decreases.
Is this expected? Are these trivial observations or is there something interesting here?
pTt as time
Using pTt as start time reveals something else about the likelihood of certain kinds of events. In this sound, the start times are pTt * 0.5, the amplitude is GeV (in the range of 120–130), the two gamma values are the two frequencies, the duration is the inverse of the frequency, and delR controls pan magnitude and pitch change.
It seems to show that the events with the highest pTt (the events that occur latest in this example) are also the rarest. For the most part, the highest-pTt events also tend to have higher gamma values.
Are the events with higher pTt values more likely to include the events associated with the Higgs-like particle?
(g1pt + g2pt) as time
What if we take the sum of the two gamma values and use that as a start time? The minimum sum is about 70k and the maximum is around 300k, so let’s scale it down by a thousand first (otherwise we could end up with a long example), so:
t = (gSum – gSumMin) * 0.001
Now that the gammas are being interpreted as start time, we can map something else to frequency; let’s use GeV. Let’s limit the range of GeV to 120-130 and map it to pitch space as:
pitch = (gev – 60) nn
where nn means units of notenumber (and where 60 nn is middle C). Amplitude comes from pTt, and the stereo position is controlled by delR.
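The gamma-sum and pitch mappings can be written out explicitly. This sketch uses the constants from the text (gSumMin = 70335.5) and the standard equal-tempered conversion from note number to Hz (note 69 = A440):

```python
G_SUM_MIN = 70335.5  # smallest gamma sum in the data set, per the text

def gsum_to_time(g_sum):
    """t = (gSum - gSumMin) * 0.001: scale the gamma sum down to seconds."""
    return (g_sum - G_SUM_MIN) * 0.001

def time_to_gsum(t):
    """Inverse mapping: recover the gamma sum from a time in the sound."""
    return t * 1000.0 + G_SUM_MIN

def gev_to_pitch(gev):
    """pitch = (gev - 60) in note-number units (60 nn = middle C),
    then converted to Hz with the equal-tempered formula."""
    nn = gev - 60.0
    hz = 440.0 * 2.0 ** ((nn - 69.0) / 12.0)
    return nn, hz

nn, hz = gev_to_pitch(129.0)  # note number 69.0 -> 440.0 Hz
```

A gamma sum of about 100k lands at roughly 30 seconds, which matches where the pitches start rising in the example below.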
The distribution of events seems to be pretty constant for the lowest quarter of the range, and the pitches (GeV values) sound like they might be equally likely.
At around 30 seconds into the sound (gammaSum ≈ 100k), you can hear a general tendency for the pitches to start going up, which corresponds to increasing GeV values.
To reverse the mapping and compute the gamma sum for a given time, you can use:
gSum = t * 1000 + 70335.5
At about 1 minute, the density of events starts dropping off, and by 90 seconds they are even rarer, meaning that fewer of the events have gamma sums greater than 160k.
Since a GeV of 125 might be especially interesting, I added a reference tone so that you could listen for events that match the reference pitch. The high sustained sound you hear in the background toward the end of the example is the reference pitch (which I set an octave higher than the ones we are trying to match). Listen all the way to the end for the rarest events with the highest gamma sums.
In the next example, I tried to focus in on an even tighter range of GeV values: 124 to 126.
You can hear that many fewer events meet those tighter constraints, but the change in density with increasing gamma sum seems pretty similar. The last of these events happens at around 133 seconds (where the gamma sum is about 203335). And the pitch of that last event is satisfyingly close to the reference pitch you hear in the background.
Is any of this significant? Or is it just showing us how rare or how likely certain kinds of events are? Listen carefully and maybe you will hear something important.
Do you take requests?
Why yes, we do take requests. Do you have a suggestion for a particular range of conditions that we should be listening to more closely?
Let us know what you’d like to hear next.