
How we sonify ATLAS data – ‘technical’ notes


A few people have been asking how we actually go about making sounds from ATLAS data. This is a very brief explanation of how we have been doing this.

Different particles pass through the different layers of the detector, which are made of different materials. Some of the detector is made up of gas, which is ionised when a charged particle passes through it – electrons are knocked out of the atoms in the gas and collect on the charged surfaces. The charge collected at these surfaces produces an electrical current, or a signal. Other sub-detectors are made from metal and silicon, which detect particles in different ways. There is a good explanation of this here.

The detector is interwoven with electronics such that when a certain tiny bit of the vast detector is ‘lit up’, a signal is produced which records the energy deposited, and the position and time at which it was deposited.

These electrical signals are digitised, and then all of the signals produced by the detector in that tiny slice of time are considered together.

If enough of the detector ‘lights up’ in a way that we have decided (based on lots of prior experimentation) looks interesting, then the signals are stored. If not, then all of the signals are discarded. This system for ‘triggering’ on interesting events is necessary because of the sheer volume of data being produced.

Those signals that survive the trigger are processed into a format that can be handled by an analyst using computer code, generally C++.

This is the point at which all physics analysis can begin, and it is also the point where the LHCsound data is collected. The data is processed in such a way that one can say, in the appropriate language, “Give me the number of charged particles in this event” or “give me the amount of energy deposited in this event”. The result of this process is a print-out of numbers. We choose which numbers to print out and arrange them into very simple text files. Each row in the file corresponds to a different event, and each column corresponds to a different variable, or aspect of that event (such as the number of charged particles or the total energy).
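
To give a feel for what these files look like, here is a toy example and a few lines of Python that read it back in. (The file name, numbers and column order are invented for illustration – this is not our actual format.)

# toy event file, "events.txt" (an invented name): one row per event,
# columns: number of charged particles, total energy in GeV
#   12  143.7
#   45  892.1
#    7   51.3

with open("events.txt") as f:
    events = [[float(x) for x in line.split()] for line in f if line.strip()]

for n_charged, total_energy in events:
    print(int(n_charged), "charged particles,", total_energy, "GeV deposited")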

These columns of numbers are then read into the compositional software we are using, which is called CDP. We use CDP to turn the numerical data into sounds. Within CDP (or other compositional software) we can choose how to map the different observable quantities such as energy, direction and particle type (the columns) to different audible properties such as pitch, amplitude, duration and timbre. We (or you) can do this in any way we (or you) choose.
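
The mapping itself is set up inside CDP, but the basic idea is easy to sketch. Here is a minimal Python illustration of scaling one column onto a pitch range – the column, the ranges and the numbers are all invented, and this is not how CDP is actually driven:

# rescale a value from its data range onto an audible range
def linear_map(value, data_lo, data_hi, out_lo, out_hi):
    value = min(max(value, data_lo), data_hi)   # clip to the data range
    return out_lo + (out_hi - out_lo) * (value - data_lo) / (data_hi - data_lo)

# e.g. total energy per event (GeV) -> pitch (Hz); both ranges are invented
energies = [51.3, 143.7, 892.1]
pitches = [linear_map(e, 0.0, 1000.0, 100.0, 1000.0) for e in energies]
print(pitches)   # roughly [146, 229, 903] Hz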

How to make sound out of anything


For a few months now I have been rattling on about sonification. So this is an explanation of how it works, with drawings.

For this example I am going to sonify a pub table, because it was where I was sat when I apparently first explained it properly to someone.

Here is a table. A coin is rolling across it. I want to sonify this. I don’t want to listen to the sound it makes rolling along the table, because that doesn’t tell me much about it. I want to make a new sound that tells me all about the coin’s motion. A sound that contains more information than just listening to the coin or just watching it can tell me. So I measure the coin’s motion:

I’ve just chosen speed, number of turns and end position here, but I could also have used lots of other variables such as type of coin (material), weight, value and so on.

On the table there is also an ashtray, with a half-smoked fag in it. Because this is also on the table and I want to sonify the table, I do the same sort of measurements on the cigarette.

And then I notice an odd-looking insect, so that gets its data recorded too.

So now I have three sets of three numbers. I could go on. I could also record the leaf blowing over the table and the elbows resting on it and the pint of beer gently bubbling on it. But I’m going to stop for now because I want to explain how the numbers become something you can hear.

To “hear” the data we can map physical properties (The Data) to audible properties (The Sound) in pretty much any way we choose. For a physicist, an obvious way to do this might be to map speed to pitch. I think this is obvious for a physicist because both of these things are measured “per second” (pitch or frequency is measured in Hertz, which means vibrations per second). But we don’t have to do the obvious, we can map any physical property to any audible property.

In this example I’m going to map speed to the pitch of the note, length/position to the duration of the note and number of turns/legs/puffs to the loudness of the note.

Now I have to choose starting positions and ranges. When I do this I have to consider that:

I want the sound to be audible, which limits the range of pitch to something like 20 – 20,000 Hz for humans, but I’ll play safe and keep it between 100 Hz and 1000 Hz for now. Very high-pitched sounds aren’t very pleasant after all. I’m going to limit the duration range to between 0.1 and 10 seconds, because it seems reasonable that we would be able to hear 10 different notes per second. (In fact, humans can distinguish about 50 notes per second. Here is a nice article on hearing if you are interested.)

I’m going to limit the loudness range to between 10 dB and 80 dB, but I notice that the number of puffs and turns are small numbers and the number of legs is large. There are a number of ways I could deal with this. I could just say that N=3 corresponds to 10 dB and then when N increases by 10, loudness increases by 2 dB. This would give me a 60 dB insect. But this would mean that I would have just 0.2 dB difference between an insect with 253 legs and one with 254 legs. What if that extra leg is really interesting? I know my ear is not going to be able to detect a change in volume of 0.2 dB. Which brings me to the other important requirement for mapping:
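
Written out explicitly, that rule is just a straight line (a toy Python version of the arithmetic above):

def loudness_db(n):
    # N = 3 maps to 10 dB; every extra 10 turns/puffs/legs adds 2 dB
    return 10.0 + 2.0 * (n - 3) / 10.0

print(loudness_db(253))                      # 60.0 -> the 60 dB insect
print(loudness_db(254) - loudness_db(253))   # ~0.2 -> too small a change to hear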

I want to be able to easily hear small changes in the data; I want an insect running at a speed of 2 cm per second to sound significantly different from an insect running at 3 cm per second. The cigarette is burning at 2 cm/min = 0.033 cm/s and the coin is going at 3 m/s = 300 cm/s. This means I really want to be able to distinguish speeds that differ by 0.001 cm/s.

So I want to be able to distinguish sounds to within 0.001 over a range of 300. Possible? Apparently the maximum number of frequencies that the human ear can distinguish is a whopping 330,000. By looking at data over a range of 0 – 300 with a precision of 0.001, I’m asking my ear to distinguish 300,000 different frequencies – right at that theoretical limit, and far beyond what I can actually do. So I should rethink my mapping in this case, now knowing that if I am looking at data which has a large range, I am either going to have to reduce the range or sacrifice some precision.
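
Here is the back-of-envelope check, for anyone who wants to play with the numbers:

speed_range = 300.0    # the coin: 3 m/s = 300 cm/s
precision = 0.001      # the difference in cm/s I want to be able to hear
print(speed_range / precision)   # 300000.0 values, vs the ~330,000 pitches an ear can tell apart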

We’re not so good at noticing fluctuations in volume. We can hear over a range of about 100 dB before our ears start hurting, and can detect fluctuations of about 1 dB. This gives us just 100 loudness points to map to (compared with the 330,000 frequencies), which makes me think that volume should be used for a ‘rougher’ physical property, or for a physical property that doesn’t have a wide range.

Duration is a bit easier to handle, as I can extend the duration of a note indefinitely if I want to. For this example I might choose 10 mm to be mapped to 0.1 seconds and then for every extra 10 mm I add on 0.1 seconds of duration.
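
As a sketch, that duration rule is simply (the 250 mm example is an invented figure):

def duration_seconds(length_mm):
    # 10 mm maps to 0.1 s, and every extra 10 mm adds another 0.1 s
    return 0.1 * (length_mm / 10.0)

print(duration_seconds(10))    # 0.1 s
print(duration_seconds(250))   # 2.5 s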

Problems with the LHCsound website


We’re really sorry if you’ve had trouble accessing the LHCsound website or any of the sounds in the last 24hrs. Due to the large amount of traffic we’ve had directed our way from some recent press, our server has had to limit simultaneous visits to our site, causing the problems the site is experiencing. We are currently working on moving our site to a dedicated server or VPS so hopefully normal service will resume soon.

New LHCsound website


We have a new website: www.lhcsound.com thanks to the considerable talents of our artist Toya Walker and Lily’s special combination of (minimal skill + large enthusiasm) in web programming.

The new website is at least 130% better than the moonfruit effort, with a few new sounds and a convenient sounds library, from which you can download all of the sounds we have made so far in mp3 or wav format.

We’ve had several requests from composers for MIDI/numerical data files. These will be available in the next couple of weeks.

We have also started adding explanatory PDFs to the sounds library, aimed primarily at educators and those doing outreach. Feedback on the content of these is most welcome.

If you haven’t seen it already, you can see a Q&A with Lily on the New Scientist blog, and keep your ears peeled for more soon.

I’m off on holiday for a few days now, to a location which conveniently has no internet access.

Happy tinkering.

Listening to the ATLAS detector


Why can’t everyone enjoy the Large Hadron Collider as much as I do?

What do particles sound like? Can we make music out of LHC collisions? Will it teach us anything? I regularly talk to non-physicists about the LHC. The general consensus among the people I speak to seems to be that it is really exciting and interesting, but that the details are incomprehensible. One of my favourite feelings in the world is getting to the end of some really difficult calculations and realising that I have gained some meaningful knowledge about the universe. But not everyone is quite so keen on the idea of spending 7 or 8 years doing maths in order to get that feeling! How to share the love without sharing the pain?

ATLAS is a music box by Toya Walker


Sonification means taking data and turning it into sounds while retaining the information in the data. A simple example of sonification is the car parking sensor that informs you of the space behind you via a beeping sound. The distance between you and the car behind you is mapped to the period of the sound, so that small distances produce a series of beeps that are very close together in time.
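
As a toy version of that idea (all of the numbers here are invented), the mapping is just distance in, gap between beeps out:

def beep_period_seconds(distance_m, max_distance_m=2.0):
    # the closer the car behind, the shorter the gap between beeps
    fraction = min(max(distance_m / max_distance_m, 0.0), 1.0)
    return 0.05 + 0.5 * fraction

for d in (2.0, 1.0, 0.3):
    print(d, "m ->", round(beep_period_seconds(d), 3), "s between beeps")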

There are many more complex examples of using sonification for some form of analysis: helping blind people to see, predicting earthquakes and identifying micrometeoroids impacting the Voyager II spacecraft.

(If you’re interested in finding out more about sonification, I’d recommend reading this sonification report and then trying some of the links from this page.)

Can we sonify the ATLAS detector data in such a way as to make it appreciable to non-physicists? It seems that this is the ideal candidate for sonification – it ticks all the boxes. Collision data is associated with spatial position and direction, changes in time and is multi-dimensional. Because there is so much going on in the data, physicists often use artificial neural networks (computers programmed to behave a bit like very simple brains). In simple terms, if we were classifying birds we would do so based on their colour, wingspan, beak shape, diet, song, and so on. We can do this fairly easily using our eyes and ears. But what if we were to try and classify something more abstract? We turn to complex ‘black-box’ computer programmes because we have not found another way to deal with large amounts of multi-dimensional information.

Sound seems the perfect tool with which to represent the complexity of the data; our ears are superb at locating sounds relative to one another, we can hear a vast range of frequencies and we can distinguish timbres (different instruments) before they have even played a full cycle. We also have an incredible ability to notice slight changes in pitch or tempo over time and to recognise patterns in sound after hearing them just once. Perhaps using our ears could allow us to make full use of the neural networks between them.


500 points to the finder of the Higgs boson

LHCsound, the project to sonify ATLAS detector data, is taking shape.

The LHCsound team continues to grow, with sonifications being added all the time by Archer Endrich and Richard Dobson from the Composer’s Desktop Project and some great artwork by Toya Walker.

We have several sounds up on the website now, including a Higgs jet composition (the energy deposits in a fat jet are sonified in terms of their energy, their distance from the interaction point and their angular distance from the jet axis), an event monitor (the number of charged particles in events picked out by the minimum bias trigger determines the pitch, and the timing is the time-stretched real-time difference between triggers), and various whole-event sonifications.

There is a lot of work to be done. Our hope is that other physicists, composers, and real-world people will get in touch with their own ideas.

Lily has finally worked up the nerve to run on the goliath of processing power known as the grid, meaning that we will shortly be able to sonify real 7 TeV collision data. There is a growing list of physics processes that we would like to sonify, from event shape variables (Lily’s favourite) to Feynman diagrams themselves (Richard’s bold idea).

The website is still a bit rusty and amateurish, as one might expect from a physicist, but be assured we have grand designs for the future!

What does an electron sound like?


I have known a number of particle physicists, myself included, who strongly associate different particles with different colours. To me, an electron will always be blue.
Some colleagues I have asked have also had strong associations between particle types and sounds. Others have just looked blankly at me, perhaps with one eyebrow slightly raised.
The most interesting associations seem to come from thinking about hadronic activity; for this I have heard descriptions such as:
“Hadrons are like a crash in a guitar shop, and clusters are like a conductor farting whilst directing a full orchestra thus making an unholy racket.”

“Hadronization sounds like opening a difficult bag of potato chips. When they hit the detector I imagine the sound in the movie War Games when a missile hits a city (a low computer generated tone).”

“Hadronic showers sound like a man carrying 12 pints on a tray falling down a long flight of stairs – including the swearing.”

These are pretty emotional responses! Some more ‘musical’ associations seem to apply to simpler objects, for example:

“J/Psi’s are tings on a high triangle.”

“Electrons go twang, like a very high note on an acoustic guitar. Muons are a much deeper twang.”

“Fermions sound like wind chimes, the heavier the mass the lower the tone.”


This is all pretty interesting to me in my mission to sonify the ATLAS data output. I’d be interested to hear about more of these particle-sound associations.


What is LHCsound?


LHCsound: the sonification of the ATLAS detector data output.

LHCsound is funded by the STFC as a public outreach project, but also has potential as a physics analysis/detector monitoring tool and as a resource for musical composition and performance.

The project was conceived and preliminary funding acquired by myself (Lily Asquith) and Ed Chocolate, who is an engineer and musician.

LHCsound is currently in the software development stage. The lead developers are myself (Lily Asquith) and Richard Dobson. I am a high energy physicist working on the ATLAS detector at CERN, and Richard is a musician and developer of the CDP (Composer’s Desktop Project) software which is used in this project.

Our website is www.lhcsound.com.