Peter Sinclair: Inside Zeno’s Arrow: Mobile Captation and Sonification


Abstract

In the call for contributions for the Locus Sonus symposium #8, Audio Mobility, we suggested that it might be useful to consider audio mobility as existing between two poles: the high-level symbolization of maps and recorded archives – downloadable data – versus data capture, sounding and sonification. While much hybridization exists between these poles, this paper focuses clearly on the latter category. RoadMusic is an app that generates music for your drive in real time from data gathered by onboard sensors. Taking this project as an example, I will argue for art forms based on a mediation of the situation, contrasting these with other kinds of “augmented reality”, which instead offer a supplementary media layer on top of a situation. I will consider how the activation of our naturally occurring audio environment takes place through mobility: the praxeology, or sounding, of the acoustic space as developed by Jean-Paul Thibaud. I will then discuss the way this influences our listening experience on the move, in particular when the acoustic environment is replaced by an electronic one, as shown by Michael Bull. Finally, I consider mechanisms of musical perception based on kinetic-syntactic theory and the ways in which these can be used as a reference for generating musical sonifications on the move.

The principal question I was asking when I started my doctoral studies was: is there a change in the artistic paradigm when a sonification is related to its environment in real time? That is, does using data extracted from a situation to generate sound for that situation modify the sound’s status – might it become the actual audio environment rather than an interpretation of an environment?

The idea is that rather than the artist who, like a recording head, sublimates flux into a fixed and tangible form, it is the artwork itself which operates with, through and on the evolving situation, the role of a particular kind of artist being to define the way in which this takes place. In such a case, one might consider the system and the user as a cybernetic whole. This principle is applicable to most forms of “interactive” art; the idea has been around for some time and was theorized by such people as Roy Ascott as early as 1967 (Ascott 2006). However, my hypothesis was that there is an important difference between interactivity where the user generates data (deliberately or not) and interactivity that incorporates real-time data from the environment, together with the user, in a system. In the first instance, user and artwork are enclosed in a closed circuit (often dependent on a dedicated space such as the gallery); in the second, a probe allows the environment to penetrate the system and vice versa. For this principle to work, I proposed that we need some kind of structure that captures and acts upon real-time data in its own way: a way which, without being a fixed form, has the capacity to elevate the immediate data to a state that “makes sense” (or art) in some way that is appreciable for the person experiencing it.

The creative project around which this research was articulated is RoadMusic, a device that generates music for your drive from your drive (and which now runs on a humble smartphone). As my research advanced, it became more and more clear that the mobile aspect of this project was all-important.

While I discovered a number of interesting ideas and concepts through my research into sonification by fellow artists (Sinclair 2012), in most cases these were not autonomous in the way they functioned. Rather, the data took on a role similar to that of the program in program music – an added value which gives concept to otherwise abstract music or sound. An example is John Eacott’s piece Flood Tide (Eacott 2009), in which a live orchestra placed on the banks of the Thames in London interpreted a score generated by the tidal flow of the river below. While this work is both poetical in its conception and quite beautifully executed, full appreciation is, arguably, dependent on awareness of the system being applied.

A notable exception to the idea that data is necessarily exposed as the conceptual basis of the artwork is Christina Kubisch’s Electrical Walks (Kubisch 2008). Here the audience are coiffed with special headphones that audify the patterns of the invisible and normally silent electromagnetic fields produced by various electronic devices in the (urban) environment. I would argue that in this case the artwork is potentially self-explanatory and autonomous. Christina Kubisch offers her public maps that indicate spots of particular interest; however, I propose that if one were to wear the headphones without prior knowledge of their usage, one would rapidly comprehend their functionality and ultimately be in a position to appreciate the artistic intention. For this to be the case, though, the user must move around, and arguably it is this mobility that generates the actual content of the piece.

Henri Bergson puts us inside Zeno’s arrow, whence, rather than observing form, we perceive mobility through intuition (Bergson 1912). It seems evident that once the arrow is immobilised in the target there is little to be intuited – the arrow can only be in a state of passive reception of the exterior – whereas when it is in flight it activates all that is around it by its own passing-through. We might consider that, in a manner of speaking, there is an analogous difference between listening to the soundscape while immobile in an armchair and traversing a sound environment on foot. In the first case we are listening, observing; in the second we are participating in the activation.

Bergson’s hypothesis was that all things in the universe are there, “virtually” present but inactive, and that we activate them as if directing a beam at them which is bounced back to us. I would venture that this is a vision-centric theory of our relationship to the world. If we take this activation from a sounding/listening position, two important differences appear. Firstly, we literally activate the sound space around ourselves by generating sound waves through our actions, which come back to us as reverberation and echo, informing us of the environment all around us (whereas we do not literally generate a beam of light; rather we (choose to) direct our gaze). Secondly, we share the audio scene with other sound-emitting agents, which activate the environment as well (here too there is a difference with light, which is reflected off objects and onto our retinas). Sound is produced by actions in our surroundings (the atmosphere), both ours and those of other things. All reach our ears in the same vibrating air mass, to which we must then apply auditory scene analysis (Bregman 1994). It is perhaps more evident to consider our cybernetic inclusion in the environment from the starting point of sound rather than from that of vision.

The idea that sound is inclusive in its nature can be found in various philosophical and spiritual models of the world, ranging from ancient cosmology (Pythagoras’ Harmony of the Spheres) to the Sufi teaching of Hazrat Inayat Khan, as this extract illustrates:

Since all things are made by the power of sound, of vibration, so every thing is made by a portion thereof, and man can create his world by the same power. Among all aspects of knowledge the knowledge of sound is supreme, for all aspects of knowledge depend upon the knowing of the form, except that of sound, which is beyond all form (Khan 1996, 27).

Henri Lefebvre, who is perhaps better known for his writings on the production of space, proposed a philosophy based on rhythm called Rhythmanalysis (Lefebvre 2004). While Lefebvre’s rhythm is not specifically related to our audio experience, the choice of metaphor is essential to the understanding of the Rhythmanalysis concept. Musical listening, and audio perception in general, are seldom inherently passive. It is natural to respond to music by moving, singing or playing. To take an obvious example, when dancing our bodily movements and rhythms adjust to those of the music, as if we were entering into “resonance” with it. We become increasingly aware (or perhaps increasingly sensitive, without being aware) of subtle variations or changes in pattern. Thus, from a cybernetic point of view, we can include the human and the music in a single cybernetic system, with the augmentation of sensation as a feedback loop.

We might liken this metaphorically to sympathy, defined as “the state or fact of responding in a way similar or corresponding to an action elsewhere” (Oxford). The term applies more concretely to the sympathetic resonance of stringed instruments that vibrate in unison without being touched when excited by an external force (for a full technical explication, see Helmholtz 1885, 36-49). Sympathetic vibration of a string is a particularly pure form of resonance; however, taken figuratively, we might consider this exchange of energy between systems as a basis and/or a model for approaching all types of sound space from the physical reality of acoustics to sonification.
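Stated compactly (using the textbook formula for a damped driven oscillator – my addition, not Helmholtz’s own notation): a string mode of natural frequency ω₀, effective mass m and damping γ, driven at frequency ω by a force of amplitude F₀, settles into steady-state vibration of amplitude

\[
A(\omega) = \frac{F_0/m}{\sqrt{(\omega_0^2 - \omega^2)^2 + (\gamma\omega)^2}}
\]

The amplitude peaks sharply as ω approaches ω₀, which is why an untouched string responds strongly only to excitation near its own pitch – the “pure form of resonance” invoked above.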

Sounding

Lefebvre includes the individual in the environment through rhythms. These extend outwards from ourselves seamlessly, and we are incorporated in them. While for Lefebvre rhythms are not necessarily musical, in terms of acoustics we can also consider the activation of the sound environment literally, through our own actions, which return to us as a modified audio impression of that environment through echo and reverberation. According to Jean-Paul Thibaud, audio praxeology – the activation of the environment by one’s actions in space – can be considered our primordial sound-producing activity (before language or even vocalisation) (Thibaud 2010).

Imagine, for example, running up a flight of stairs and entering an empty room out of breath. Our panting would return to us with information about our body state and simultaneously, through reverberation, about the space we have just entered (we might add that it also informs us of the past instant of our climb and thus of the architecture). In our natural audio mobility there is no barrier between sounds produced by our bodies (voice), those generated at the point of contact between our body and the exterior (footsteps) and those caused by actions external to us. These all mix in the instant of varying pressure that activates our eardrums. Thus, if the door squeaks as we close it behind us, it adds its perturbation to the same mass of air already disturbed by our panting. If, as we catch our breath, the quiet sound of a ticking clock becomes audible, it too will reveal the space we have entered, albeit from a slightly different (audio) perspective.

Before developing how all this can relate to sensing and sonification, I will make a rapid detour via the research of British sociologist Michael Bull. Bull has made prolonged studies of the use of portable music devices (the Walkman and, more recently, in-car listening and iPods) (Bull 2010). He proposes that mobile audio devices construct a “post-Fordist” soundscape, which operates to filter out random urban sounds. The age of Muzak is past; we are no longer willing to accept being washed over by an anonymous blanket of sound, and the new audio practice is one of empowerment, as the iPod user re-appropriates the sound environment. The idea that we cannot close our ears no longer holds true when wearing headphones: you can close your ears to your surroundings and simultaneously immerse yourself in an audio environment of your choice, simply by inserting “earbuds”. This immersive quality of headphone listening influences the way in which people use their mobile players. As Bull puts it:

This mediated experience of listening to something through headphones gives you direct access to the world and your own emotions, so it’s a mediation that paradoxically conceives of experience in its immediacy. Music for many users has become such second nature, that it ceases to be recognised as mediation. (Bull 2011).

The iPod is used aesthetically to reconstruct the meaning of the visual scene. Bull proposes that this form of mimesis is the opposite of the flânerie described by Walter Benjamin (Benjamin 1997), where the flâneur is the alienated subject who imagines what it is like to be the other. Here, the iPod user can appropriate a person who appears in front of them and incorporate them into their reconstructed scene – an actor to go with their own soundtrack, so to speak. Bull’s conclusions concerning personal empowerment through iPod use are a little frightening, in the sense that they leave us with little hope for the future of public social space, but the empowerment the iPod offers over the urban environment is also desirable. Curiously, although iPod listening is not technically geo-located, the modification of perception it induces possibly makes it so: through the act of personal choice, our soundtrack takes over from the naturally occurring environment and thus participates in creating our location.

In a 1994 study of Walkman users, Jean-Paul Thibaud examines the way in which gestural behaviour adapts to meet that of the music being listened to. He suggests that this places the wearer of headphones in an entre-deux which brings into question not only the sound and social space but mobility itself: “Rather than the condition or the cartography of the itinerary, it is the action of walking to music, allowing it to penetrate us, lending our body to the voice of the Walkman which lends content to our movement” (Thibaud 1994).

Today the Sony Walkman is a distant memory, and many of us now carry as much processing power in our pockets as a professional recording studio possessed a few years ago. It is therefore possible to combine these ideas:

That mobility in itself enhances the listening experience and that headphone listening while on the move transforms music into the audio scene.

That real-time sensing can be used to sound out the environment and, if used on the move, even in its most basic form generates a narrative which is by definition in symbiosis with the behaviour of the user and can create new bonds between the listener and the environment, as in the example of Christina Kubisch’s Electrical Walks.

To these I will rapidly add another proposition, which I suggest allows us to create musical form from our trajectory as that trajectory is unfolding, and which involves a step sideways into musical perception. For this idea to work, it is important to accept the ideas developed by Hanslick in the mid-nineteenth century, which propose that the beauty in music, rather than being found in mimesis of human emotions (or indeed any other association or figuration), is inherent to the music itself and the delicate relationship of the notes as they unfold (Hanslick 1854). This is the beginning of a musical formalism in which we can consider music as a flux rather than as a laid-out architecture of patterns and symmetry. One of the particularities of music is that it unfolds in time. Even if we may hold in our memories the structure of a piece after listening to it – allowing a certain form of comprehension a posteriori – surely a large part of musical affect, and even profound “understanding”, is to be found within this unfolding. Going back to being inside Zeno’s arrow, Bergson often used music as a metaphor for duration (that aspect of time which can only be perceived through intuition and cannot be projected as a spatial concept) and the multiplicities that arise from it.

Might it not be said that, even if these notes succeed one another, yet we perceive them in one another, and that their totality may be compared to a living being whose parts, although distinct, permeate one another just because they are so closely connected? (Bergson 1913, 60).

In his 1956 book Emotion and Meaning in Music, and in subsequent writings, Leonard B. Meyer offers a kinetic-syntactic explanation of this question: “Music is a dynamic process. Understanding and enjoyment depend upon the perception of and response to attributes such as tension and repose, instability and stability, and ambiguity and clarity” (Meyer 1961, 257).

Because of a previous musical event, a subsequent musical event becomes more or less likely to take place (we know this, according to Meyer, because of our pre-existing knowledge of musical form); thus the significance of the next musical event depends on its degree of expectedness. An event that is totally expected is without significance – it is tautology. Taken further and viewed from the position of information theory, “it is the flux of information created by progression from event to event in a pattern of events that constitutes the reality of experience […]” (Meyer quoting Coons and Kraehenbuehl 1958). This flux does not depend only on the musical event which immediately precedes the present one but on the whole string of events, since each has an influence in succession or, as Meyer puts it: “the significance of an event is inseparable from the means employed in reaching it” (Meyer 1961, 259). Musical pleasure is therefore related to the answering of expectations, but above all to the skilful manipulation of discrepancy with obvious expectations.
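In information-theoretic terms this can be given a compact form (my gloss; Meyer does not write the formula himself): the significance of a musical event e_n is its surprisal given everything heard so far,

\[
I(e_n) = -\log_2 P(e_n \mid e_1, e_2, \ldots, e_{n-1})
\]

which is zero for a totally expected (tautological) continuation and grows as the continuation becomes less probable under the listener’s learned model of musical form.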

It is not possible here to go into more detail of Meyer’s theory; however, if we retain this principle of kinetic-syntactic perception of music and apply it to composition, we have the basis for a system that generates music in real time from incoming data and that can simultaneously play on different variations in structure, creating degrees of “expectedness” and thus musical emotion, without having an overall pre-defined plan or architecture.
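A minimal sketch of this principle (all names and mappings here are hypothetical illustrations, not the RoadMusic code) might be a melody generator in which an incoming data stream steers how far the next note may stray from its most expected continuation:

```python
import random

SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # C major scale, as MIDI note numbers

def next_note(previous, surprise):
    """Choose the next note. `surprise` lies in [0, 1]: 0 favours the most
    expected continuations (repetition and small steps); 1 makes all notes
    equally likely."""
    weights = []
    for candidate in SCALE:
        distance = abs(candidate - previous)
        expected = 1.0 / (1.0 + distance)   # nearer notes are more "expected"
        uniform = 1.0 / len(SCALE)
        weights.append((1.0 - surprise) * expected + surprise * uniform)
    return random.choices(SCALE, weights=weights)[0]

# A bump in the (hypothetical) sensor stream raises `surprise`, bending the
# melody away from its most predictable continuation.
note, melody = 60, []
for bump in [0.0, 0.0, 0.1, 0.8, 0.2, 0.0]:
    note = next_note(note, bump)
    melody.append(note)
print(melody)
```

The point is not this particular mapping but the principle: the data manipulates the probability structure of the music – its degree of expectedness – rather than the notes directly.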

Fig. 1: RoadMusic App. Photo: Fabien Hartmann.

Going back to my working experience: unlike recorded music in the car, RoadMusic is a sympathetic system. It has no recorded sounds to play back, and the different modules of synthesis (instruments) that constitute its orchestra (or soundscape) are based on audification of the incoming data. It constantly responds to the surface of the road, the movements of the car, the variations in landscape and, of course, the driver’s driving. If, however, it were only to do this by direct mapping, it might rapidly be perceived as redundant and become boring (the sound inside the car would just be an analogous reflection of another sensation). As it is, RoadMusic analyses data on different levels of complexity and different timescales, revealing the drive through its own musical logic. Thus, not only does it sonify the incoming stream of data directly, it also measures difference in order to detect events, then counts events to create statistics, and combines these fluxes in different ways to create complex voices, each of which has a life of its own that the driver gradually gets to know (and hopefully appreciate). However, these identities are always different, and so, in keeping with John Cage’s experimental music (Cage 1971), even the composer discovers the music as it unfolds.
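Schematically, this multi-timescale analysis might be sketched as follows (hypothetical names and thresholds; this is not the actual RoadMusic source): one accelerometer reading goes in, three levels of description come out.

```python
class RoadAnalyser:
    """Three timescales of analysis on one sensor stream: raw audification,
    event detection by difference, and slowly evolving statistics."""

    def __init__(self, threshold=0.5, window=256):
        self.previous = 0.0
        self.threshold = threshold
        self.window = window
        self.event_count = 0
        self.samples = 0
        self.event_rate = 0.0  # slow statistic: proportion of eventful samples

    def step(self, accel):
        """Feed one accelerometer reading; return parameters for the synths."""
        direct = accel                        # level 1: direct sonification
        difference = accel - self.previous    # level 2: change detection
        self.previous = accel
        event = abs(difference) > self.threshold
        if event:
            self.event_count += 1
        self.samples += 1
        if self.samples >= self.window:       # level 3: statistics over events
            self.event_rate = self.event_count / self.window
            self.event_count = self.samples = 0
        return {"direct": direct, "event": event, "roughness": self.event_rate}
```

Each returned value could drive a different voice: the raw stream a continuous texture, the events percussive triggers, and the slowly varying “roughness” the character of the ensemble – which is what gives each voice a timescale, and a life, of its own.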

I draw a parallel between the sounds that we experience as humans – internal (voice), caused by our contact with the world (footsteps) and external (environmental sounds) – and RoadMusic’s digital sounding of the car’s environment. It is the tread of the tyre on the road that sets the (virtual) audio space into resonance in its micro-sonic detail. Events echo, their influence slowly dying away. Each curve and bump is reflected in the musical structure as it is playing and as it will continue to play. Sonification of the visual field brings outside objects and atmospheric sound into the mix. As the car becomes a prosthesis, an extension of our body, the music played through its loudspeakers becomes the reverberation of the re-calibrated space that the car/person(s) occupies.

RoadMusic (as its name suggests) is designed exclusively for use in the car. There are various reasons for this, but perhaps the most important is the fact that, when driving, we are to a large extent cut off from the soundscape through which we travel – even with open windows, environmental sounds are masked by turbulence. Modern hybrid or electric cars, arguably, do not possess their own audio environment either, and so we might consider that the car radio has become the default sound source. Compared to “normal” car radio listening, RoadMusic represents a gain in terms of real-time perception of the situation, since it sonifies the road and the car’s movements – hypothetically, and indeed user testing would tend to confirm this, RoadMusic increases your awareness of the road rather than diminishing it, as listening to radio or music in the car probably does.

Conclusions and future research

I have described an artistic project, RoadMusic, that puts into practice ideas concerning real-time composition based on principles extracted from theories of (human) perception. Today RoadMusic exists as a downloadable app, available on Google Play, that will run on any recent Android device. One might consider this project proof of concept that it is possible to create artistic form from the immediate situation, without the higher-level structure of recorded data and without recourse to deliberate human intervention.

While the audio environment particular to the individual car (the audio bubble) allows me to consider that RoadMusic potentially offers an improved perception of the situation, the question I am now faced with concerns how to deal with the new possibilities that audio processing power can add to iPod-type listening (wearing earbuds when on the move or in social spaces). In such cases the user is in a situation where there is an existing soundscape, and the information obtained through normal listening is often useful, potentially for avoiding inconvenience or even danger. However, as Anthony Pecqueux has shown (Pecqueux 2009), individuals are willing to trade the inconvenience of what might be considered an amputation of “normal” hearing for the siren call of earbuds (Michael Bull develops this idea), and, judging from purely personal observation, it is not a practice that appears to be diminishing. Some, possibly most, mobile audio applications (if one does a quick web search) are based on the principle of the audio guide; in other words, they play back geo-localised sound files according to one’s position in the field. This can be a way of making iPod listening “useful”, in the sense that it provides extra information, and it has also become the basis for a new narrative genre, as developed by other participants in this special issue. However, it will be understood that this is not my area of predilection, and today my reflections are turned towards finding ways to incorporate music generated from and for the situation from the bottom up, so to speak, and which thus, by definition, includes information about the field.

 

References

Ascott, Roy. 2006. Engineering Nature: Art and Consciousness in the Post-Biological Era. Intellect Books.

Ascott, Roy. 2007. “The Cybernetic Stance: My Process and Purpose.” Leonardo 40 (2): 188-197.

Bull, Michael. 2010. Sound Moves. New York, NY: Routledge.

Bull, Michael. 2011. “Thematic Series: Sonic Impressions – Mobile Sound Technologies.” YouTube. UBC, 24 November 2011. Accessed December 30, 2011. http://www.youtube.com/watch?v=y66YIG7p_7c.

Benjamin, Walter. 1997. Charles Baudelaire: A Lyric Poet in the Era of High Capitalism. London: Verso.

Bergson, Henri. 1912. An Introduction to Metaphysics (Introduction à la métaphysique, 1903). Translated by T.E. Hulme. Hackett.

Bergson, Henri. 1913. Time and Free Will: An Essay on the Immediate Data of Consciousness (Essai sur les données immédiates de la conscience, 1889). Translated by F.L. Pogson. London.

Bregman, Albert S. 1994. Auditory Scene Analysis: The Perceptual Organization of Sound. Cambridge, MA: The MIT Press.

Brinkmann, Peter. 2012. Making Musical Apps. O’Reilly Media, Inc.

Cage, John. 1971. Silence: Lectures and Writings. London: Marion Boyars.

Coons, Edgar, and David Kraehenbuehl. 1958. “Information as a Measure of Structure in Music.” Journal of Music Theory 2: 145.

Eacott, John. 2009. “Flood Tide.” Accessed December 30, 2011. http://www.informal.org/.

Hanslick, Eduard. 1854. The Beautiful in Music. Translated by Gustav Cohen. Bobbs-Merrill.

Kubisch, Christina. 2008. “Electrical Walks.” In Christina Kubisch, Electrical Drawings, Works 1974-2008. Heidelberg: Kehrer Verlag.

Khan, Hazrat Inayat. 1996. The Mysticism of Sound and Music. Shambhala.

Lefebvre, Henri. 2004. Rhythmanalysis: Space, Time and Everyday Life (Éléments de rythmanalyse, 1992). Translated by Stuart Elden and Gerald Moore. London: Continuum.

Meyer, Leonard B. 1961. “On Rehearing Music.” Journal of the American Musicological Society 14 (2): 257-267.

Oxford Dictionary of English. 2010. Oxford University Press.

Pecqueux, Anthony. 2009. “Les ajustements auditifs des auditeurs-baladeurs. Instabilités sensorielles entre écoute de la musique et de l’espace sonore urbain.” Accessed December 7, 2012. http://www.ethnographiques.org/2009/Pecqueux.

Sinclair, Peter, ed. 2012. “Sonification – What Where How Why.” AI & Society 27 (2).

Thibaud, Jean-Paul. 1994. “Les mobilisations de l’auditeur-baladeur: une sociabilité publicative.” Réseaux. Communication – Technologie – Société 12 (65): 71-83.

Thibaud, Jean-Paul. 2010. “Towards a praxiology of sound environment.” Sensory Studies – Sensorial Investigations. Accessed May 30, 2012. http://www.sensorystudies.org/sensorial-investigations/towards-a-praxiology-of-sound-environment/.


Peter Sinclair (Ph.D., University of the Arts London) is a sound artist, co-director with J. Joy of the Locus Sonus sound lab, and professor at the École d’Art d’Aix-en-Provence (France). A long-time builder of autonomous musical machines and sound installations, his work today focuses on the mediation of real-time data and mobile audio media. He has exhibited and performed frequently in Europe and the US in such venues as the Exploratorium, San Francisco; MAC de Lyon (Musiques en Scène); Postmasters Gallery, New York; Festival Interférences, Belfort; Eyebeam – Beta Launch – New York; Festival de Cinéma et de Nouveaux Media, Split; ISEA Nagoya; STEIM, Amsterdam; and La Gaîté Lyrique, Paris.
