How we sonify ATLAS data – ‘technical’ notes


A few people have been asking how we actually go about making sounds from ATLAS data. This is a very brief explanation of how we have been doing this.

Different particles pass through the different layers of the detector, which are made of different materials. Some of the detector is filled with gas, which is ionised when a charged particle passes through it – electrons are knocked out of the atoms in the gas and gather at the charged surfaces. The charge collected at these surfaces produces an electrical current, or signal. Other sub-detectors are made from metal and silicon, which detect particles in different ways. There is a good explanation of this here.

The detector is interwoven with electronics, so that when a certain tiny bit of the vast detector is ‘lit up’, a signal is produced which records the energy deposited, the position, and the time.

These electrical signals are digitised, and then all of the signals produced by the detector in that tiny slice of time are considered together.

If enough of the detector ‘lights up’ in a way that we have decided (based on lots of prior experimentation) looks interesting, then the signals are stored. If not, then all of the signals are discarded. This system for ‘triggering’ on interesting events is necessary because of the sheer volume of data being produced.
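In very rough terms (and ignoring the fact that the real ATLAS trigger is a multi-stage hardware and software system), the logic is just “keep the event if it looks interesting enough, otherwise throw it away”. Here is a toy sketch of that idea in C++ – the quantities, names and thresholds are invented for illustration, not the actual trigger criteria:

```cpp
#include <vector>

// Toy stand-in for one beam crossing's worth of digitised signals.
// The real trigger looks at far more information than this.
struct Event {
    int    nChargedTracks;    // hypothetical summary quantities
    double totalEnergyGeV;
};

// Made-up "interesting event" condition, for illustration only.
bool passesToyTrigger(const Event& e) {
    return e.nChargedTracks >= 2 && e.totalEnergyGeV > 20.0;
}

// Keep only the events that pass; everything else is discarded.
std::vector<Event> applyTrigger(const std::vector<Event>& all) {
    std::vector<Event> kept;
    for (const auto& e : all) {
        if (passesToyTrigger(e)) kept.push_back(e);
    }
    return kept;
}
```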

Those signals that survive the trigger are processed into a format that can be handled by an analyst using computer code, generally C++.

This is the point at which all physics analysis can begin, and it is also the point where the LHCsound data is collected. The data is processed in such a way that one can say, in the appropriate language, “give me the number of charged particles in this event” or “give me the amount of energy deposited in this event”. The result of this process is a printout of numbers. We choose which numbers to print out and arrange them into very simple text files. Each row in the file corresponds to a different event, and each column corresponds to a different variable, or aspect of that event (such as the number of charged particles or the total energy).
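As a sketch of what that last step amounts to (the variable names, values and file name below are made up, not our actual analysis code), writing such a file is nothing more than printing one line per event with whitespace-separated columns:

```cpp
#include <fstream>
#include <vector>

// One row per event, one column per variable.
struct EventSummary {
    int    nCharged;        // hypothetical: number of charged particles
    double totalEnergyGeV;  // hypothetical: total energy deposited
};

int main() {
    // Dummy numbers standing in for real event summaries.
    std::vector<EventSummary> events = {
        {12, 153.2}, {7, 48.9}, {21, 310.5}
    };

    std::ofstream out("events.txt");  // hypothetical file name
    for (const auto& e : events) {
        // Whitespace-separated columns, one event per line.
        out << e.nCharged << " " << e.totalEnergyGeV << "\n";
    }
    return 0;
}
```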

These columns of numbers are then read into the compositional software we are using, which is called CDP, and used to turn the numerical data into sounds. Within CDP (or other compositional software) we can choose how to map the different observable quantities such as energy, direction and particle type (the columns) to different audible properties such as pitch, amplitude, duration and timbre. We (or you) can do this in any way we (or you) choose.
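To give a feel for what a mapping might look like (this is not the CDP interface, just an illustration of the idea, with arbitrary ranges and dummy values), one simple choice is a linear rescaling of each column into a chosen output range:

```cpp
#include <algorithm>
#include <cstdio>

// Rescale x from [inLo, inHi] into [outLo, outHi], clamping at the ends.
double mapLinear(double x, double inLo, double inHi,
                 double outLo, double outHi) {
    double t = (x - inLo) / (inHi - inLo);
    t = std::clamp(t, 0.0, 1.0);
    return outLo + t * (outHi - outLo);
}

int main() {
    // One event's columns (dummy values).
    double energyGeV = 153.2;
    int    nCharged  = 12;

    // e.g. energy -> pitch between 100 Hz and 1000 Hz,
    //      number of charged particles -> amplitude between 0 and 1.
    double pitchHz   = mapLinear(energyGeV, 0.0, 500.0, 100.0, 1000.0);
    double amplitude = mapLinear(nCharged,  0.0, 50.0,  0.0,   1.0);

    std::printf("pitch = %.1f Hz, amplitude = %.2f\n", pitchHz, amplitude);
    return 0;
}
```

Any other mapping (logarithmic, inverted, mapping direction to stereo position, and so on) works just as well; that choice is exactly where the composition comes in.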
