Mobile Audio Apps, Place and Life Beyond Immersive Interactivity

In this paper I investigate the emerging trend toward connecting audio content with place in mobile audio applications for smart phones. I contextualize such apps in terms of previous mobile music practices, like those associated with the mp3 player, as well as draw connections to mobile sound art. I consider how these apps diverge from ideas such as Michael Bull’s (2007) auditory bubble by focusing attention on relationships between audio content, listening, and place. Using my own experiences with the mobile app RjDj as a reference point, I argue that the moment of interactive experience with these apps tends to be the focus of attention, but that there is a need to expand considerations of engagement beyond that moment of interactivity in order to foster more involved reflection on place and relationality.


The growing availability and popularity of smart phone technology in recent years has contributed to the explosion of mobile applications that explore the possibilities of processing, combining, and networking data on the go. In terms of audio, an app like Brian Eno’s Bloom, which creates music based on user interaction mixed with autonomous generative processes, demonstrates how increased processing power is being used to produce interactive music, diverging from more traditional linear forms and formats such as mp3s of songs. An emerging trend among mobile audio apps is the exploration of how audio content can be connected in various ways to aspects of place. The forms of such connections range from GPS-based location-aware music that changes depending on the precise location of the device, to audio content that is composed of a blend of preprogrammed sounds and sounds that are picked up by the phone’s mic and processed with effects, such as echo or pitch shift. The listening practices suggested by these apps diverge from a common line of thinking around mobile music devices that sees them as means of disconnection, enabling listeners to separate themselves from the world outside their headphones and occupy their own private soundscape. My aim is to consider and contextualize the connections between listening practices, audio content, and place that are being made through these apps. Using Reality Jockey Ltd.’s app RjDj as a primary reference point, I argue that the fostering of connections between the world inside the listener’s headphones and the world outside often means that place becomes conceived of as an extension of the app’s interface, and as a novel way of interacting with the music. Drawing on my experiences engaging with RjDj on a variety of levels, I highlight the tension that exists between a promotional focus on the fleeting moment of interactive experience – alluded to in Reality Jockey Ltd.’s tagline, “We don’t do apps. 
We craft sonic experiences!” (“RjDj”) – and the possibility of working with and considering the app beyond such ephemeral interactions. Ultimately, I argue for the importance of expanding the possibilities for engagement with audio apps like RjDj beyond interactivity alone in order to create the conditions for generating more involved reflections on place and relationality.

Sound and Place

In the field of soundscape studies (or acoustic ecology), connections between sound and place have long been a primary concern. Diverging from Pierre Schaeffer’s work developing musique concrète in the late 1940s, particularly the central idea of the objet sonore – sound conceived of as material detached from context and made manipulable via audio technology such as the tape recorder – soundscape studies pioneer R. Murray Schafer (1977) stresses the idea of sound as an event that must be considered on location. Schafer considers Schaeffer’s sound objects “laboratory specimens,” noting by contrast that the term sound event “is in line with the dictionary definition of event as ‘something that occurs in a certain place during a particular interval of time’ – in other words, a context is implied” (p. 131). According to Schafer, a single sound, such as a church bell, “could be considered a sound object if recorded and analyzed in the laboratory, or as a sound event if identified and studied in the community” (p. 131). In acoustic ecology, sound is emphasized as a central component in relationships within society and between people and places. Technologically mediated sound, particularly when listened to via headphones on the go, has often been viewed as antithetical to the values of soundscape studies. Schafer describes headphone listening as a practice that removes the listener from the acoustic horizon that surrounds her, fostering an integrity with herself but breaking contact with the world outside (p. 119). Similarly, Barry Truax (2001), writing on the Walkman, argues that “The audio advertiser’s exhortation to ‘Shut out the city’ with their stereo products is now being answered by the walk-person’s logical response, ‘Shut out everybody!’” (p. 135), highlighting a disconnect both between listener and environment and between listener and other people.
More recently, there have been signs of a shift in this attitude toward headphones and mobile listening, as evidenced for example in a 2011 Vancouver soundwalk – a soundscape studies practice that involves moving through a place while listening1 – that experimented with the mobile app RjDj. Even as the soundwalk explored the potential of this audio app, its description reveals the history of tension between soundscape studies and mobile devices, noting “we will embark on a soundwalking experience that hopefully subverts the acoustic ecologist’s most dreaded practice – iPod culture” (“Soundtrip with RjDj,” 2011).

Michael Bull’s work on the iPod supports the grounds for antagonism between soundscape studies and technologically mediated mobile listening, as Bull emphasizes the ways in which listeners shut themselves off from the world outside their headphones. As Bull (2007) puts it, describing iPod listeners: “The preoccupation with the pleasures of solitary listening and the resulting cognitive control achieved through the management of their sound world is preferable to the contingency of the world existing beyond their auditory bubble” (p. 28). However, there have also been divergent views on the practice of listening to mobile devices. Shuhei Hosokawa, writing in 1984 on the Walkman, emphasizes the significance of the combination of the listener’s movements, the environment, and the audio content. Hosokawa positions mobile listening as a generative activity, where the familiar soundscape is transformed by the singular, unfamiliar acoustic experience of the combination of the music and the other elements with which the music comes into contact through the listener’s movements (pp. 175-176). Jean-Paul Thibaud (2004) has argued that “using a Walkman in public places is an urban tactic that consists of decomposing the territorial structure of the city and recomposing it through spatio-phonic behaviours” (p. 329), emphasizing both an engagement with the listener’s surroundings and a creative activity. Meanwhile, David Beer (2007) contends that the auditory separation between headphones and environment that Bull emphasizes is overstated, and that the sounds of the city continue to be heard even while listening on a mobile device. Beer suggests that there is creative potential in this persistence of city sounds, as it “may even form new and distinct experiences of the music as it intermingles with the hum of the city and the places the listener moves through” (p. 859).

Hosokawa, Thibaud, and Beer’s ideas around the use of mobile music devices find something of a correlate in the ideas and approaches of mobile sound art. Challenging the idea of the listener’s separation from their surroundings, mobile sound art has been at the vanguard of rethinking mediated listening practices and technological possibilities, often exploring various layers of sonic experience as a means of increasing participants’ engagement with their environment. For instance, since 1996, Teri Rueb has been creating GPS-based sound walks that map audio content to particular places. Her 2007 work “Core Sample” superimposes Boston Harbor’s Spectacle Island with a layer of sounds that are played back on mobile devices through headphones while listeners walk around the island. Rueb’s decision to use open-cell headphones (allowing the sounds of the environment to be heard) shows her interest in merging the layer of sound that she has created with the sounds of the island. Commenting on her work via video documentation, Rueb (2007) says, “Immersed in sounds as they traverse the island, I wanted visitors to feel almost as if they were becoming part of the island, sinking through its layers of material, cultural, and social histories,” and she emphasizes her interest in blending and blurring different kinds of sounds and different sound sources as a way of exploring the layers of place (n.p.).

Christina Kubisch’s ‘electrical walks’ invite another form of engagement with listeners’ environments through mediated sound. Since the late 1970s, Kubisch has been experimenting with electromagnetic induction in her sound installations. In 2003 she developed the ‘electrical walks’ in which participants are given specially designed magnetic headphones that respond to electrical fields in their environment, transforming them into audible sounds. ATMs, security systems, neon signs, and other such devices become the sources of strange electronic noises as the listener walks through the city (“Works with Electromagnetic Induction,” n.d.). Kubisch provides participants with maps suggesting routes and locations that she has found particularly interesting, but she also encourages people to explore and find new zones of electromagnetic resonance. Kubisch’s ‘electrical walks’ reveal an invisible and usually inaudible layer of the cityscape, making the listener aware of the electrical signals that permeate the environment. In this way, the ‘electrical walks’ work to transform the listener’s experience of the space of the city, provoking a new engagement with the listener’s surroundings through sound, effectively expanding the soundscape through the mediation of the mobile technology of the headphones.

The project Sonic City, developed by artists and researchers at Sweden’s Viktoria Institute and Interactive Institute in the early 2000s, involves a wearable system that senses bodily and environmental parameters and uses this data to transform local sounds picked up by a microphone, creating personalized and location-specific electronic music in real-time. Sonic City is premised on the idea of the city as interface and mobility as musical interaction, allowing everyday experience to become aesthetic practice: “Encounters, events, architecture, weather, gesture, (mis)behaviours – all become means of interacting with, appropriating, or ‘playing the city’” (Gaye et al., 2003, p. 109). Emphasizing the connection between the listener and the environment, the creators of Sonic City note that the emergent music is co-produced by the listener and the city (p. 110). For instance, a listener’s heart rate (body-related input) might affect one aspect of the sound processing, while pollution level (environment-related input) might affect another – both will be applied to sound content sampled from the environment via a microphone. An original goal of the project was to examine how Sonic City could become part of the daily practice of a city dweller – not unlike listening to an mp3 player, but one that would be personalized and ever-changing. Unfortunately, the Sonic City project seems to have been discontinued after 2004.

The mobile apps that integrate audio content and place may also be thought of as posing a challenge to the idea of separation between the soundscape and mobile mediated listening, since they draw both on the view that mainstream mobile devices remain connected to their contexts and on the ideas of sound, mobile technology, and place explored through mobile sound art. In these apps, the theorizations of Hosokawa, Thibaud, and Beer – ideas such as mobile listening constituting a process of re-composition via a mixing of sounds inside the headphones with conditions outside the headphones – take on new dimensions and a new degree of literality. At the same time, some of the experiences offered by mobile sound art are made available on mobile platforms that are far more widely accessible than the often custom-made apparatuses involved in projects like Sonic City and Kubisch’s electrical walks.

Generally speaking, the connections that these apps make between audio content and place can be broken down into three different types, though a single app may involve all three types: 1) audio content is connected to GPS coordinates; 2) audio content is connected to environmental factors picked up by sensors; and 3) audio content is directly influenced by sound picked up by the device’s microphone and fed into the headphones. All of these types of connection between audio content and place involve real-time content responses to incoming data. Such responses can take many forms, often tending to be either more content-determinate (play this sound at these GPS coordinates) or process-oriented (apply reverberation to sounds picked up by the mic), though many apps will mix the two. The location-aware musical compositions offered by the duo BlueBrain provide an example of audio content connected to GPS coordinates that is relatively preplanned and determinate. At the time of writing, BlueBrain has created three apps, one for Washington D.C.’s National Mall, another for New York City’s Central Park, and a third for the grounds of Austin’s South by Southwest music festival. The musical content of the apps is all predetermined and mapped onto the respective places. A listener walking through Central Park will hear different music in different parts of the park, but the music heard in any one part of the park will always be the same. The app FutureSound, by contrast, involves audio content that is more process-oriented and tied to sensors rather than GPS coordinates. The app uses the phone’s mic to determine ambient sound levels in the listener’s surroundings and adjusts the audio content that is played back based on that data. The audio content seems to be generative, emerging from a system in response to the environment rather than being composed in advance. Finally, the apps created by Reality Jockey Ltd. 
provide examples of audio content that incorporates sounds picked up by the smart phone’s mic. The apps RjDj, Inception, and Dimensions work with what they call ‘scenes,’ ‘dreams,’ and ‘dimensions,’ respectively. In RjDj, artists contribute scenes that can be downloaded and played by the app; Inception and Dimensions are effectively bundles of scenes structured to form games. The basic principle of operation is the same for all three formats: a scene (or dream, or dimension) is comparable to a song, but instead of being linear, it unfolds according to a mix of input data and preplanned operations. Sounds picked up by the microphone, for instance, may be processed in real-time with effects such as echo or reverb (among the more traditional effects) and mixed with other sounds taking place in the headphones. The other sounds may be completely predetermined or they might also vary depending on other input parameters, such as sensor data or GPS coordinates. Each scene has its own unique mix of audio content and parameters affecting that content. In this way, the apps from Reality Jockey Ltd. effectively blend the three types of connections between place and audio content and also mix content determinacy with a more process-oriented approach.


The connections between audio content and place explored through these mobile apps present a new level of interactivity in comparison to previous mobile audio technology. Here, I draw on Henry Jenkins’s (2006) understanding of interactivity as “the ways that new technologies have been designed to be more responsive to consumer feedback” (p. 133). The Walkman was interactive in that the user could adjust the volume and EQ of playback and operate the transport controls (play, pause, fast forward, rewind, etc.). As recordable and portable media, cassettes also made it possible for listeners to create their own mix tapes that could be taken with them on the go. Portable CD players offered more or less the same options for interactivity, in some cases adding the ability to program a playlist different from the order of tracks on the CD. However, portable CD players also suspended the ability to create one’s own mix of songs from a variety of sources until the advent of the CD-R. Mp3 players facilitated all the previous parameters of interactivity, often extending their reach by virtue of the amount of audio content that could be carried on a single device. For instance, many mp3 players provide the option of making on-the-go playlists drawing from any of the material stored on the player’s hard drive, whereas a custom playlist on a portable CD player could only be made from the tracks of the single CD currently in the player.

With audio apps designed to take advantage of the processing power, sensors, and location-awareness of smart phones, place comes to provide a new means of interaction. Despite their differences, previous mobile audio technologies, from the Walkman to the mp3 player, all held in common the assumption that interaction would primarily take the form of the user’s engagement with the device. Increasingly, interaction with the device is being supplemented by the idea of place itself functioning as an interface for interaction, with mobility operating as a way of navigating that interface. Thinking of a different precursor, interacting with a BlueBrain app resembles in some ways the act of moving about in order to pick up a station on a transistor radio, although the stations on a given frequency in a given area would now have to be multiplied in number and reduced in coverage. A user listening to BlueBrain’s location-aware music effectively sequences audio content by moving through a place, determining how long a particular section will play (by remaining still), when it will transition to another section (by moving), and what that section will be (depending on where the listener moves). In some ways, this also recalls the previous ability for a listener to determine a playlist on the fly. However, it differs significantly in that the listener does not determine what is heard by interacting with the device, but by moving across a given terrain, and the songs and transitions are more malleable. This malleability and the fact that there is no original sequential ordering of the music means that the way the audio unfolds is left more open than on a traditional album, pointing toward a crucial aspect of the place-based audio interactivity explored by such apps: the degree to which audio content itself can be affected.

The apps produced by Reality Jockey Ltd. arguably provide the largest variety of ways in which the audio content might change depending on context, and the app creators promote the idea that the user takes part in the process of the music’s creation. In an interview for the online publication The Next Web, Reality Jockey founder Michael Breidenbruecker says that the initial impetus for the app RjDj came from his experiences creating music on a PC in the 1990s (Bass, 2011, para. 4). His desire was to make the creative process of computer-based music-making accessible to everyone. Making this process accessible means, in part, making it easy, opening up the process so that the listener cannot help but be a part of it. Unlike apps that are designed specifically for audio production, such as NanoStudio and Beatmaker, RjDj requires little to no technical proficiency or familiarity with conventional music-making software. Rather, RjDj presents an interface with which the user need only move about, make sounds, or otherwise physically interact with the environment and device in order to take part in the process of musical creation. The user may not even be required to ‘do’ anything, as simply being somewhere often provides enough microphone and sensor inputs to affect the audio content in a way that would be different from being somewhere else.

Will Straw (2010) has noted that much recent theoretical work emphasizes the idea of media as technologies for the storage, processing, and transmission of knowledge (p. 22). Where prior forms of recorded music, from the vinyl LP to the mp3, focused on the storage and transmission of music, RjDj directs its attention to the processing aspect of media, experimenting with how the listener can become part of the process. When Reality Jockey boasts, “YOU are inside the music” (“App Store – RjDj,” 2011), they mean both that the music surrounds you and that you are part of the music, not just experiencing it passively but influencing it, interacting with it, and taking part in its composition in an intuitive and immersive way. Fostering a link between audio content and place by making music that responds to elements of place constitutes an effective way of integrating users into the process as they, of course, cannot help but be someplace. In other words, place becomes an aspect of the device’s interface, a means of interacting with the app, that the user engages with simply by being, since to be is to be in place.

The user’s involvement in place, translated to an involvement in the process of the emergence of the audio content, simultaneously suggests complicity between the app and the user. A sort of app-user fusion is created as the user, by being in place, is also in the interface of the app. This app-user fusion finds further support in the proximity of the earphones to the body and, moreover, within an entire history of discourse on sound. Jonathan Sterne (2003) has investigated an array of phenomenological accounts of sound and vision and finds in them a number of common impulses, such as the idea that hearing is an immersive sense, that it places us inside an event, and that it involves physical contact with the world, whereas seeing allegedly involves distance, objectivity, and separation between seer and seen (p. 15). These kinds of ideas find fodder in the contrast between earphones, which are worn in the ears, and the mobile screen, which is held in the user’s hand at a distance from the eyes. While augmented-reality glasses capable of superimposing graphical data with physical spaces promise to reduce the distance between digital interface and world outside, for now, AR platforms such as Layar and Junaio use the smart phone camera and screen. Apps like RjDj can be thought of as audio-based AR, creating a blend of content originating from the world outside the headphones picked up by the smart phone’s mic and from the digital processing of the device itself. The closeness to the body of the apparatus for experiencing this blend, combined with phenomenological accounts of sound stressing immersion, contributes to the idea of a reduction in distance between app and user.

The connection between RjDj and the body is rhetorically extended through descriptions that compare the app to psychoactive drugs. One of RjDj’s investors explains: “The consumer experience of RjDj is similar to the effects of drugs. Drugs affect our sensory perception, so does RjDj. RjDj is a digital drug which causes mind twisting hearing sensations” (Lutz, 2011). Similarly, commenting on Reality Jockey’s app Inception, singer/producer Pharrell Williams says, “It’s like legal drugs with no side effects” (“Inception – The App,” 2012). Comparisons to drugs implicitly support the sense of collapse in distance between app and user, since drugs need to be ingested, to physically combine with the user, in order to take effect. Not coincidentally, comparisons to drugs also highlight the notion of the app as experience – an experience that will be unique to each person who listens as he or she becomes immersed in sound and fused with the app, taking part in the process of the music’s emergence.

Beyond Interactivity

The focus of apps like RjDj on experience and process fits snugly into existing discursive frameworks around both sound and mobility. Investigating how the concept of mobility is deployed as a way of understanding the world, Tim Cresswell (2006) explores what he sees as “two principal metaphysical ways of viewing the world: a sedentarist metaphysics and a nomadic metaphysics” (p. 26). Where the sedentarist metaphysics focuses on stasis and stability, viewing mobility as a potential threat, the nomadic metaphysics embraces mobility and notions of flow and flux. RjDj resonates with the second of these positions, “the idea that by focusing on mobility, flux, flow, and dynamism we can emphasize the importance of becoming at the expense of the already achieved – the stable and static” (Cresswell, 2006, p. 47). Speaking of the inspiration behind RjDj, Breidenbruecker notes, “Creating music on a PC is a real-time sound experience and it is fun no matter if the recording is good or bad” (Bass, 2011, para. 4), clearly valuing a process of becoming over the relatively stable results of the process, the recording. Meanwhile, Frances Dyson (2009), investigating rhetorical maneuvers in discourses on both sound and new media, shows how sound and listening are commonly thought of as “describing a flow or process rather than a thing, a mode of being in a constant state of flux, and a polymorphous subjectivity,” arguing that tropes used to think about aurality are frequently mapped onto thinking about new media phenomenology (p. 5). Stressing ideas of process, experience, and immersion, descriptions of Reality Jockey’s mobile apps provide a case in point. Both Cresswell and Dyson caution, however, that rhetoric around mobility, sound and new media often serves, whether intentionally or not, to hide important operations taking place beneath the surface.
For instance, Dyson notes that the idea of ‘immersion,’ by foregrounding an all-encompassing condition, conceals the social and technological operations that are involved in making the experience of immersion possible (p. 6). My intention here is not to uncover a comprehensive list of such operations or to claim that there are startling machinations at work,2 but rather to consider what might be gained by thinking beyond the particular ideas of process and experience typically emphasized by mobile audio apps such as RjDj. What can be gleaned by edging beyond the moment of immersive interactivity?

One of the things that sets RjDj apart from the other apps I have discussed so far is the variety of possibilities for engaging with the app that are offered. Despite the strong rhetorical focus on a particular kind of engagement – interacting with a scene in the moment – RjDj provides tools that allow interested users to create their own scenes and make them available for others to experience. Users listening to scenes are also able to record the audio mixes they have made in navigating scenes and can upload these recordings to RjDj’s website, where others can listen to them. In this way, RjDj extends the possibilities for engagement to include both participation in the production of scenes and the sharing of the recordings that arise from using the app. Reality Jockey’s most recent apps, Inception and Dimensions, have done away with these two options, however, essentially bundling scenes into goal-oriented games. These apps are more structured and less open to free play and exploration. By excluding the ability to produce one’s own scenes and to record interactions within a scene, these apps again focus engagement on the moment of scene interaction, which is promoted as processual, experiential, and immersive. In the face of the seemingly low priority accorded to these broader forms of engagement, I want to argue for the importance of expanding possible engagements with new mobile audio apps beyond the moment of interactive experience, to include thinking about the production of that experience as well as the recording and sharing of experiences. This is not to pit interactive experience, as a fleeting, ephemeral thing, against the products that make it possible or the products that may arise from it, but rather to recognize the importance of all these things and the necessity of their relations to one another.

My own interest in these emerging mobile audio apps began with RjDj and was motivated in particular by the ability to create original scenes. The app seemed like an interesting tool for continuing my experimentation with the idea of the ‘music-route’ – a musical composition created entirely from field-recordings of a place meant to be listened to on a mobile device by someone moving along a particular path through that place. For Montreal’s Nuit Blanche 2012,3 I had the opportunity to create a music-route using RjDj, and that experience shed light both on the potential of the app and on the current tensions existing between the different ways of engaging with it.

The original idea was to create an RjDj scene, intended to be listened to while walking from point A to point B, with which participants could make recordings that would be shared via RjDj’s website. The music-route formed part of the exhibition Lost Rivers: La Petite St. Pierre, operating as a sort of connecting trajectory between two gallery spaces in Old Montreal. Exploring the theme of Montreal’s lost rivers – waterways that have been buried and/or diverted into sewer systems – the resultant RjDj scene uses samples of field-recordings made from the streets of Old Montreal, the sewers below, and streams of Quebec, combining them with effects applied to mic input, such as reverberation and quantization, to create a piece of music that unfolds in real-time as the listener walks between the two exhibition spaces. On the whole, the event was highly successful. People reported fascination at how the sounds of walking through the late-February slush were morphed in their headphones, shifting their perception of their place in the world. A resident of Old Montreal commented on how the combinations and transformations of sounds caused her to see her neighborhood in a new way, to think about the rivers that used to run through the city and the sewers under her feet, opening up other layers of place, both temporal and spatial. Listeners seemed genuinely fascinated by the ‘new sonic experience,’ which was in fact more than a sonic experience, as ice lanterns lit the way and Andrew Emond’s photographs of the sewer system comprised part of the scene’s interface. However, the idea of recording and sharing interactions with scenes was not fully realized. Though many listeners made recordings, actually sharing them on the RjDj website proved awkward. Only the most recently uploaded recordings are easy to find on the site and the others are not searchable. Sometimes recordings are displayed but have not been transcoded for playback, and consequently cannot be listened to.
A chart system ranking the most played recordings made with each scene provides another way of navigating the recordings, but it is limited: contributors, for instance, cannot be searched. The option of posting recordings holds considerable promise – including the ability to geotag recordings to offer a sense of where people have listened to various scenes – but that promise has yet to be fully realized. For the time being, the interactive experience of the scene in the moment seems still to be more highly prioritized than exploring what might arise from sharing recordings after the fact.

The scene production process also revealed its own set of difficulties. There is no in-depth guide for scene-making; information is piecemeal and at times seemingly outdated. Moreover, at the time of the exhibition (and the time of writing), the ability to upload a scene to RjDj’s server for wide distribution was not in operation. Although the scene could be made available to visitors on location via a local server, its accessibility was very limited in both time and place. A notice on the RjDj website says they are working to improve the scene uploading and distribution process, but it remains to be seen when this will be done. On the one hand, Reality Jockey is a very small team, and since scene production is a relatively niche activity it is somewhat understandable that providing full support for it is not a top priority. On the other hand, the current difficulties and limitations around scene production and distribution stunt the expansion of engagements exploring the app’s potential. The ability for users to create and share their own scenes opens the possibility of forging more meaningful connections between places and music, since it creates the conditions for designing scenes that reflect on particular places rather than treating place primarily as an input for changing audio content. The currently available scenes, which generally tend to use any place as an interface element – a sort of extension of the device that aids in the creation of ‘augmented music’ (where the focus is weighted toward the user’s experience of the music) – could be supplemented by scenes produced in a variety of locales that offer more developed relationships with the places where they are made. Making location-specific scenes widely available in other places would also offer an opportunity to explore interactions between and across places. How would a scene made for Montreal play out in another city?
And even if scenes are not created with particular locations in mind, opening up scene creation and distribution to users expands the contexts of their production; combined with the dynamic between where a scene is designed and where it is played, this offers the potential for new relationships to emerge.

Ultimately, moving beyond the fleeting experience of interaction with a scene in the moment is not just about focusing on a moment before (scene production) or a moment after (sharing a recording). Rather, it is about moving from a focus on interactivity to a consideration of relationality, beyond the individual experience to a network of relations. In a recent talk considering aspects of immersion in his artwork, media artist Luc Courchesne expressed the feeling that what matters is not only how we relate to art, but how we relate to each other through art (“Space and Place of the Screen,” 2012). Somewhat different from Nicolas Bourriaud’s (2002) idea of relational aesthetics, which takes social relations themselves as the work of art, Courchesne’s comment suggests, to me, an upholding of the experience of the art object while at the same time drawing attention to its existence as a network of relations. Apps such as RjDj, as they potentially mainstream sound art or “sound-art” a mainstream device, should also be thought of in these terms. The individual emergent experience of audio and place is one thing, but it is also worth considering how someone produced the conditions of possibility for that seemingly emergent experience through their relation to place and to the technology, and how we might make our own experience of place available to others for reflection as well. Overemphasizing the moment of interactive experience – the moment of the music in process, with the listener as part of the process – risks creating the false impression of a self-sufficient loop, conjuring notions of an auto-affectivity in which we simultaneously create and receive the music we hear. Such ideas draw attention away from meaningful engagement with place and back toward Bull’s notion of the auditory bubble.
The idea of thinking beyond the moment of interactivity does not mean denigrating the process that occurs in the moment, but rather not mistaking an open process for a closed, self-sufficient one.


Apps like RjDj have the potential to stimulate reflection on place and the relationships, processes, and technologies that form a core component of experiences of place. Yet the more a single kind of experience – the experience of an individual interacting with the app in the moment – is elevated, the less fully this potential is realized. As new possibilities for mobile listening emerge, it is vital to consider what kinds of experiences are being made possible and to attempt to open things up, to think about how people can not just use a platform as-is, but also contribute to its possibilities as a medium for provoking creativity, thought, and reflection. This means not just using place-based interactivity to move beyond the auditory bubble, but also considering the conditions of this interactivity and what other experiences and relations can be made possible. This is not an argument against individual experience in the moment; rather, it is a call for the proliferation of experiences, the fleeting/trippy/serendipitous ones that apps such as RjDj might like to emphasize, yes, but also the experiences tied to producing the conditions that give rise to such experiences and the experience of sharing them.


App Store – RjDj. (n.d.). iTunes Preview. Retrieved June 9, 2012, from

Bass, E. J. (2011, December 19). Meet the man behind pioneering “augmented audio” startup RjDj. The Next Web. Retrieved June 6, 2012, from

Beer, D. (2007). Tune Out: Music, Soundscapes and the Urban Mise-en-Scène. Information, Communication & Society, 10(6), 846–866.

Bourriaud, N. (2002). Relational aesthetics. (S. Pleasance & F. Woods, Trans.). France: Presses du réel.

Bull, M. (2007). Sound Moves: iPod Culture and Urban Experience. London; New York: Routledge.

Courchesne, L. (2012, May 31). Talk for Panel “The Space and Place of the Screen.” Presented at Mutek 13th Edition: Digi_Section Series, Montreal, QC.

Cresswell, T. (2006). On the Move: Mobility in the Modern Western World. New York: Routledge.

Dyson, F. (2009). Sounding New Media: Immersion and Embodiment in the Arts and Culture. Berkeley: University of California Press.

FutureSound by FutureAcoustic. (n.d.). Retrieved June 6, 2012, from

Gaye, L., Mazé, R., & Holmquist, L. E. (2003). Sonic City: the urban environment as a musical interface. Proceedings of the 2003 conference on New interfaces for musical expression, NIME ’03 (pp. 109–115). Singapore, Singapore: National University of Singapore. Retrieved from

Hosokawa, S. (1984). The Walkman Effect. Popular Music, 4, 165–180.

Inception – The App. (2012, May 7). iTunes Preview. Retrieved June 6, 2012, from

Thibaud, J.-P. (2004). The Sonic Composition of the City. In M. Bull & L. Back (Eds.), The Auditory Culture Reader (1st ed., pp. 329–341). Oxford and New York: Berg Publishers.

Jenkins, H. (2006). Convergence Culture: Where Old and New Media Collide. New York: New York University Press.

Kubisch, C. (n.d.). Works with Electromagnetic Induction. Retrieved May 16, 2012, from

Location-Aware Music. (n.d.). Retrieved June 6, 2012, from

Lutz, C. (2011, April 4). reality jockey. Retrieved June 6, 2012, from

McCartney, A. (Forthcoming). Soundwalking: creating moving environmental sound narratives. In S. Gopinath & J. Stanyek (Eds.), The Oxford Handbook of Mobile Music Studies. Oxford University Press. Draft available at:

Past Soundwalks: Soundtrip with RjDj. (2011). Retrieved June 9, 2012, from

RjDj. (n.d.). Retrieved May 16, 2012, from

Rueb, T. (2007). Core Sample. Retrieved May 16, 2012, from

Schafer, R. M. (1977). The Tuning of the World. Toronto: McClelland and Stewart.

Sterne, J. (2003). The Audible Past: Cultural Origins of Sound Reproduction. Durham: Duke University Press.

Straw, W. (2010). The Circulatory Turn. In B. Crow, M. Longford, & K. Sawchuk (Eds.), The Wireless Spectrum: The Politics, Practices, and Poetics of Mobile Media (1st ed., pp. 17–28). University of Toronto Press, Scholarly Publishing Division.

Truax, B. (2001). Acoustic Communication (2nd ed.). Westport, Conn: Ablex.
