A taxonomy for Listening and Performing ‘in-between’ migratory spaces using mobile apps

By Ximena Alarcón

Abstract

After four years developing telematic sonic performances via the Internet, listening to the ‘in-between’ space in the context of human migration (Alarcón, 2014; 2015; 2016), I argue that questions arising from technical challenges and accessibility point towards exploring mobile phones for such performances. I suggest key components for developing an app centred on the concept of ‘in-betweeness’ (Ortega, 2008), which resonates with the concept of ‘net locality’ (de Souza e Silva, 2013), emerging from people’s interaction in a mobile space. Drawing on a qualitative review of apps, I propose a taxonomy of listening and performing to facilitate and widen the exploration of ‘in-betweeness’.

1. Telematic sonic performance challenges

Through the making of telematic sonic performances [1] (Fig. 1) in the project Networked Migrations (Alarcón, 2014; 2015), I have previously explored questions regarding listening, performing and technology to approach ‘in-between’ spaces in the context of migration. The ‘in-between’ space is understood as created out of the negotiation between internal and external space (symbolic, cultural, historic) by a person in exile (Ortega, 2008), in order to make sense of the new space that s/he has to inhabit. By using Pauline Oliveros’ Deep Listening practice (Oliveros, 2005) combined with Networked Listening (Schroeder, 2013), through telematics via the Internet, I have invited people with migratory experiences (not acquainted with these listening practices) to improvise using spoken word and pre-recorded sounds. Through Deep Listening, participants have expanded their perception of sound as it travels in time and space, through sonic meditations, listening in dreams and listening to the body; through Networked Listening, a concept developed by Franziska Schroeder (2013), they have experienced the ‘essential unselfing’, becoming a fragile body performing through the distance, reaching the other, locating the self in an ‘in-between state’ (224). I suggest that while listening practice helps one to navigate the ‘in-between’ space (e.g. negotiating the meaning and feelings produced between sounds from different locations, and the making of alternative sonic spaces), performing between distant locations stimulates the sensation of being in an ‘in-between’ state, which relates to the multidimensional physical experience of the performer (Schroeder, 2013) when trying to reach another person or location in the distance during performance. These two components, listening and performing, inform sonic ‘in-betweeness’.

In these performances, the diversity of languages and the interplay that participants create also invite us to hear traces of what Janette El Haouli (2006) calls a “nomadic voice”, wandering between native and second languages, a voice without fixed territories, “a bridge for the overcoming of pre-established values and inherited questionings” (106).

Figure 1. Migratory Dreams Telematic Sonic Performance (Bogotá - London 2012)

For connectivity in the performances, I have used bi-directional high quality audio streaming software [2], developed by engineers and musicians (Cáceres & Chafe, 2010; Carôt & Werner, 2007), to overcome concerns with delay, multiple participants, interconnection with other sound software, audience, and quality of sound. In my practice, demands for large-capacity bandwidth to achieve high sonic quality bring challenges for venues outside academic institutions. Equally, in these venues much of the equipment needed by the performers, such as microphones, audio units, laptop computers, and loudspeakers, is not easily accessible. On the other hand, firewall security at academic or other large institutions can hinder the ease of connections. When connections and equipment are in place, another challenge is the development of simple screen-based interfaces to serve the performance.

To tackle these issues, I wondered what social, cultural and technical opportunities mobile technologies could offer to creatively access the sonic ‘in-betweeness’ in the context of migration, by listening and performing in the distance.

 

2. Mobile technologies and human migration

In the last decade, new media has been transforming paradigms of migration, making a radical shift from a ‘nostalgic reclamation’ of belonging, to an identity that finds definition ‘through its mobility and interactivity with others’ (Papastergiadis, 2014). For instance, mobile phones have been regarded as central to the maintenance of long-distance relationships in transnational spaces (Madianou, 2014). Furthermore, musicians and artists have regarded listening through this medium in a performative manner as an intimate and introspective practice (Tanaka, 2014), yet one often situated in a public space, inviting connection with others. In turn, Adriana de Souza e Silva (2013) introduces the term ‘net locality’ to refer to the state (not a space, not a place) where, for the mobile user, ‘remote connections are still present, but become part of the space in which the mobile user is, instead of removing users from it’ (2013: 118). This perspective of being in a ‘state’ within a space resonates with the ‘in-betweeness’ experienced through Deep and Networked listening in telematic performances that explore migratory feelings, when negotiating distant and local acoustic and emotional spaces.

Envisioning the introduction of mobile technology into migration-based and dislocated telematic sonic performances felt highly relevant for expanding and enriching the experience gathered using the Internet. I began to explore the potential of a hypothetical mobile sound app for migratory spaces. Such an app might enable the exploration of ‘in-betweeness’ through listening and performing, via immersive experiences. For instance, it might support the exploration of a multiplicity of languages and voices, which are influenced by a diversity of sound environments and locations. The listening experience might focus on headphones or small loudspeakers, creating interventions in public and private spaces, and it might be widely accessible. For this imagined app to be developed, I wanted to explore how mobile sound apps have been used in the practices of listening and performing, and to investigate their current technological options and limitations.

 

3. Comparative review

In 2014, with the support of the app developer Donal O’Brien, I engaged in a comparative review of more than forty publicly available mobile apps that use streaming sound, voice and other sources of sound for listening and performative purposes (Alarcón and O’Brien, 2014). Originally intended as a heuristic evaluation, we looked at previous heuristic analyses that could offer us elements to explore the apps systematically. The closest study found was one on ‘playability’ heuristics for mobile games (Korhonen and Koivisto, 2006), as playing could be understood as performance. However, we soon realised that the listening and performative experience of using mobile sound apps should have its own territory of analysis. For that reason, an open qualitative exploration was more suitable for this review.

To select the apps, we looked at the technical characteristics they might include to support telematic sonic performances: voice transmission (VoIP apps), recording (using voice), use of pre-recorded sound (including music production samplers), use of location (e.g. GPS, multiplayer games), multi-user connectivity (to explore the bi-directional relationship between sender and receiver, e.g. multiplayer games and jamming sessions), and sound spatialisation (e.g. the use of multilayering, binaural, 5.1 surround, and ambisonics).

To experience each app we decided to focus on qualitative parameters such as: 1. listening experience (sound spaces perceived and types of sounds used that relate to place); 2. expression and performativity (how the app invites the listener to engage in actions that expand the perception of sound in space); 3. embodiment and gesture (how the body is involved in the interaction with the app); and 4. social engagement (if the app promotes collaboration or interconnection with others).
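As an illustration of how these two sets of criteria could be combined into a single review record, the following is a minimal sketch in Python. The field names and the example entry are illustrative assumptions made here for clarity, not the structure of the actual review table (Alarcón and O’Brien, 2014).

```python
from dataclasses import dataclass

@dataclass
class AppReview:
    """One hypothetical row of a review table combining the technical
    characteristics and qualitative parameters described above."""
    name: str
    # Technical characteristics (section 3)
    voice_transmission: bool = False   # VoIP-style live voice
    recording: bool = False            # records with the built-in microphone
    prerecorded_sound: bool = False    # uses samples or archived sound
    uses_location: bool = False        # GPS or other locative features
    multi_user: bool = False           # bi-directional, multiplayer, jamming
    spatialisation: bool = False       # multilayering, binaural, surround, ambisonics
    # Qualitative parameters (free-text notes per reviewer)
    listening_experience: str = ""
    expression_performativity: str = ""
    embodiment_gesture: str = ""
    social_engagement: str = ""

# Illustrative entry only; values are not taken from the published review.
example = AppReview(
    name="Shoudio",
    recording=True,
    uses_location=True,
    listening_experience="sense of exploration of the close locality",
)
print(example.name, example.uses_location)
```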

Each of us experienced the apps in our own time and shared a table where we compiled our experiences and comments. O’Brien also focused on the technological aspects (e.g. how the app was made).

 

4. Listening to apps

A selection of the reviewed apps is presented here, comprising the most representative examples found for approaching sonic ‘in-betweeness’. I have included our general perceptions of space, and the thoughts and feelings that arose during our interactions with the apps. Special attention is given to the apps that work with distant and local acoustic environments.

Acoustic environments from all over the world are used by apps that employ GPS to stream sound to or from the Internet. For instance, the LocusCast app (Figs. 2 and 3), part of the Locustream project [3], streams sound in one direction in real time to a sound map on the Internet. When used as a resource for an installation such as the REVEIL/SoundCamp [4] project led by Grant Smith, Locustream can present the user with “the sound of daybreak streamed from microphone positions all over the world as a continually changing soundscape over a 24hr period” (Papadomanolaki, 2014). The perception of streaming in real time strengthens the feeling of connectedness with others in different locations in the world. In an interview by Maria Papadomanolaki, Smith describes the experience:

“… remote listening does seem to give a quite distinctive sense of location. Listeners commonly report that, as they are listening to what is going on under the ice, they often become much more closely aware of local sounds as well. The juxtaposition of two live audio fields seems to be brought into relief, curiously, by the more or less conscious effects of latency, which creates a disjuncture of a few seconds if you listen to the same sound locally and via the network [18]…something like watching and hearing a woodcutter in the distance. Except that here both channels are audio. So the disjuncture works something like a conceptual stereophonic effect.”(2014: 11)

Figure 2. LocusCast app

Figure 3. LocusMap


Using streamed pre-recorded sounds from the Freesound project [5], the 43D App [6] offers a virtual tour of planet Earth via soundscapes. The user can explore the soundscapes of particular areas by selecting locations on a map, and mixing them in two modes: random and simple. The listening experience is similar to a radio station that permanently broadcasts sound environments, and it uses visual mapping techniques. Although the mix is interesting, this approach seems to ‘fix’ the sound in time and space, simplifying the representation of sound, a transient medium, on a map, and thus possibly overlooking relationships (beyond geographical location) that might be established between the sounds.

Mixing pre-recorded and live sound can be experienced in the Sound Hailuoto app [7] (Fig. 4), informed by a participatory project with children. The screen-based interface shows a graphic piece of land representing the Finnish island Hailuoto, which acts as a fascinating concrete element that contrasts with the previously mentioned sound mapping interface. It mixes natural sounds from the island with any environment where the user is located. The application uses a network feature via a server and offers the possibility of recording the mix and sending it to the project. The mixture of these two spaces is an interesting contrast between nature and built/urban environments. Also, the listener is invited to locate herself in a different space created by the mix and to adjust to a different perception of time (as might occur in a migratory experience).

Figure 4. Sound Hailuoto app

Other approaches to listening to space, and specifically to location, are evident in apps that allow the user to leave a sonic trace at a GPS locative point, such as Shoudio and Woices. These apps invite the user to record with the built-in microphone, leave a message, and then search for other sounds in the user’s proximity. In Shoudio [8], the sound is clearly enhanced by its association with location. Being able to view where the sound comes from on a map, and in relation to the user’s current location, creates a sense of exploration of the close locality. Furthermore, the app uses information relating to the date and location of the recordings which, during listening, creates a dislocation of time and space, particularly for recordings made a long time ago. The app enhances an urge to create something worth listening to; by making a recording the user is putting herself on the map, available to be heard by anyone anywhere. An ‘explore’ mode allows the user to browse sounds by proximity, popularity, and recency. There is also an option to leave the app running in the background and allow it to play sounds that were recorded nearby as the user moves around.
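To give a concrete sense of the GPS-based proximity lookup that locative apps such as Shoudio rely on, the following is a minimal Python sketch that selects the nearest recorded trace to the listener’s position. The data, distance threshold and function names are assumptions for illustration, not the apps’ actual implementations.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two GPS points."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_recording(listener, recordings, max_km=1.0):
    """Return the closest recording within max_km of the listener, or None."""
    best = min(recordings,
               key=lambda rec: haversine_km(listener[0], listener[1], rec["lat"], rec["lon"]))
    if haversine_km(listener[0], listener[1], best["lat"], best["lon"]) <= max_km:
        return best
    return None

# Illustrative data: sonic traces left at GPS points
traces = [
    {"title": "market voices", "lat": 51.5033, "lon": -0.1195},
    {"title": "riverside walk", "lat": 51.5081, "lon": -0.0759},
]
print(nearest_recording((51.5030, -0.1190), traces))
```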

The Woices app [9] is intended as an audio guide to local areas, using voice as its primary resource. Recordings are played back according to the listener’s location, triggering the closest guides. Using GPS, it invites one to overhear neighbours’ sound traces, perhaps evoking a sense of surveying the community. Although the ethos of the app is local community engagement, and it is globally available, it suffers from a lack of popularity.

In contrast, the app Arrivals (Figs. 5 and 6), created by the vocal artist Viv Corringham [10], offers performative and documentary elements about her exploration of location with residents of the city of Kingston, NY. The user can walk anywhere in the world and the app will track a path according to the route that the artist took with the residents. An interesting sense of dislocation is generated by the tracing of a foreign city within another city. Furthermore, Viv Corringham’s embodied experience, expressed with her voice, offers the listener an inspiring way to approach a path in any city. The interviews deal with location, home and histories; these invite us to link the experiences of a distant place with the local one, while lacking a reference to time. The artist is the guide along the path, and this makes the multilayered experience both beautiful and interesting in its documentary character.

Figure 5. Arrivals app

Figure 6. Arrivals app


If an envisioned telematic performance app involves connecting live, bi-directional audio streams, Voice over Internet Protocol (VoIP) apps offer the basic features needed [11]. Skype, Google Hangouts and Viber are examples of apps with audiovisual communication functionalities. For performance purposes, artists have used Skype, and it has indeed been useful for working with participants in the Networked Migrations performances (Alarcón, 2014). When performing, it is noticeable that Skype uses a sound compression that works for relatively normal conversation, but when the sound goes above the dynamic level understood as normal (e.g. shouting, or singing loudly), the compressor or limiter reacts by muting one of the two sources of the conversation. It can be argued that Skype and other commercial applications bring another aesthetic, and that performances can take place with them. However, in the envisioned app, sound quality is key to offering a listening experience that brings out subtleties within the mediated ‘in-between’ space, with some degree of control over the sound parameters in the network.
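The muting behaviour described above can be pictured with a crude threshold model: once a channel’s level exceeds what the codec treats as normal speech, it is attenuated. The Python sketch below illustrates that observed behaviour only; it is not Skype’s actual compressor or any VoIP codec, and the threshold value is an arbitrary assumption.

```python
import numpy as np

def mute_above_threshold(frame, threshold=0.5, attenuation=0.0):
    """Crude dynamics control: if the frame's peak exceeds the threshold
    assumed to represent 'normal conversation', attenuate (here: mute) it.
    An illustration of the behaviour observed in performance, not the
    actual processing of any VoIP application."""
    peak = np.max(np.abs(frame))
    if peak > threshold:
        return frame * attenuation
    return frame

# Two simulated channels: quiet speech and loud singing
speech = 0.3 * np.sin(np.linspace(0, 200 * np.pi, 4800))
singing = 0.9 * np.sin(np.linspace(0, 200 * np.pi, 4800))
print(np.max(np.abs(mute_above_threshold(speech))))   # passes through
print(np.max(np.abs(mute_above_threshold(singing))))  # muted
```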

 

5. Performing with apps

As noted previously, the use of voice and language in telematic sonic performances, as well as the experience of listening to de-territorialised ‘nomadic voices’, has been important. Also, performative aspects of ‘in-betweeness’ bring the body into play. Thus, I have included a selection of apps that use voice and integrate the body as part of the performance.

Whether for voicing, speaking, or singing, the reviewed apps enable performance with strategies known in the musical world, such as looping and layering. The Voice Jam app (Fig. 7) invites one to listen in anticipation of the sound that has been recorded and visualised on the interface. Performing takes place while looking at the interface. It is an engaging app that invites the user to improvise with up to six different loops. The interface suggests the possibility of creating visual, animated scores [12]. In a similar way, but with a simpler interface, the LoopyBeatbot app [13] (Fig. 8) applies looping to a skeleton animation, creating playful and interesting links between voice and body.

Figure 7. Voice Jam app

Figure 8. LoopyBeatbot app

The Overdub app [14] allows the user to determine a loop of arbitrary length. The user can then overdub an unlimited number of times, each time specifying a new track for the recording. This becomes very expressive, since the user can build up a ‘sound mesh’ utilising multiple layers of her/his own voice. In these apps, traces of nomadic voices could be explored and recorded in a self-immersive manner.
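As a simple way to picture the loop-and-overdub strategy these apps share, here is a minimal Python sketch of a fixed-length loop buffer into which successive voice takes are summed. It is an illustrative model under assumed parameters (sample rate, layer count), not the implementation of any of the apps named above.

```python
import numpy as np

class LoopOverdub:
    """Minimal loop/overdub model: a fixed-length buffer into which
    successive takes are summed, building up layers of the same voice."""
    def __init__(self, loop_len_samples, sr=44100):
        self.sr = sr
        self.buffer = np.zeros(loop_len_samples, dtype=np.float32)
        self.layers = 0

    def overdub(self, take):
        """Sum a new take (up to the loop length) into the buffer."""
        n = min(len(take), len(self.buffer))
        self.buffer[:n] += take[:n]
        self.layers += 1

    def render(self):
        """Normalise so stacked layers do not clip on playback."""
        peak = np.max(np.abs(self.buffer)) or 1.0
        return self.buffer / peak

# Two illustrative one-second 'voice' layers
loop = LoopOverdub(44100)
loop.overdub(0.4 * np.sin(np.linspace(0, 2 * np.pi * 220, 44100)))
loop.overdub(0.4 * np.sin(np.linspace(0, 2 * np.pi * 330, 44100)))
print(loop.layers, loop.render().shape)
```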

Taking a more focused approach, addressed to children’s interaction with their own voices, the iPad Voice Bubbles app (Fig. 9), by Yvon Bonenfant, uses sophisticated transformation parameters (echo, pitch variation, granulation and filtering), inviting children to transform their voices and create imaginary characters. The recording becomes active with touch, allowing visual exploration. The sequencing of the voice effects with colourful bubbles acts as visual feedback, eventually playing individual and collective compositions made by the children. The iPad-only display invites many children to interact with it at once, stimulating shared listening and play.

Figure 9. Voice Bubbles app
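To give a concrete sense of one of the voice transformations mentioned above, the following Python sketch applies a simple feedback echo to a recorded voice buffer. The parameter values are assumptions for illustration; this is not Voice Bubbles’ implementation.

```python
import numpy as np

def echo(voice, sr=44100, delay_s=0.25, feedback=0.5, repeats=3):
    """Simple feedback echo: add progressively quieter, delayed copies of
    the recorded voice to itself (one of the transformations a voice-play
    app might expose, alongside pitch shift, granulation and filtering)."""
    delay = int(delay_s * sr)
    out = np.zeros(len(voice) + delay * repeats, dtype=np.float32)
    out[:len(voice)] += voice
    gain = 1.0
    for i in range(1, repeats + 1):
        gain *= feedback
        start = delay * i
        out[start:start + len(voice)] += gain * voice
    return out

# A half-second synthetic 'voice' stands in for a microphone recording
voice = 0.5 * np.sin(np.linspace(0, 2 * np.pi * 300, 22050)).astype(np.float32)
print(echo(voice).shape)
```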

In what Harmony Bench (2014) calls ‘gestural choreographies’, mobile app developers have explored forms of interaction that extend the expression of the user, using features such as multi-touch (tapping and dragging), screen capture through video tracking, the built-in microphone, and the on-board accelerometer and gyroscope, which allow detection of movement, position and bearing. [15] [16]

The Ocarina app [17] (Figs. 10 and 11) uses the built-in microphone and physical modelling of sounds to transform the mobile phone into an instrument that invites the user to perform by blowing. The reverb helps to create pauses for listening to and playing the mix of sounds (only four sounds), and it is possible to choose timbre and scale, which makes the experience enchanting. The immediate response to touch is rewarding.

Figure 10. Ocarina app

Figure 11. Ocarina app listening mode


An engaging use of video tracking has been developed in the AirVox app [18], which invites the user to wave their hands in the air. Taking inspiration from the Theremin, the app uses the front-facing camera of the newer iPhone models to detect hand movement in space. The user can engage with either one or two hands, mapping one to pitch control and the other to various parameters, including volume, vibrato and filtering. With the body otherwise in stillness, the gesture of the hand changing the sound brings awareness of each movement the body makes.

The iPad AUMI app (Fig. 12), by the Deep Listening Institute, was designed to provide full engagement with the body using camera tracking and motion. The wide variety of sounds and instruments allows for an expansion of the listening experience. The software finds the ‘intentional motion’ of the user, provided that lighting settings and conditions are right. The sounds are high quality and beautiful. The app was designed for people with “little to no voluntary mobility to participate in improvising music” (Oliveros et al., 2011: 180) and was based on a previous desktop version. In the practice of musical improvisation, this interface has contributed to “an increase in control of physical voluntary movements”, and to “positive developments in psychosocial aspects” of students with physical impairments (179). If the facilitator of a session with the app wants to record her/his activity, s/he can log in to AUMI. This allows collaborative learning between the creators of the app and its users, who are mainly in educational institutions working with children with impaired movement. The app has options to work via a local network, and has been created with improvisation in mind. Of the apps reviewed, it constitutes the closest approach to inclusive features that invite people to play with each other, using body and sound, with listening and moving as paramount for interactivity.

Figure 12. AUMI App

The Music Ball app [19] uses a combination of on-board sensors and a game engine that mimics gravity. Tilting the screen influences the direction in which the balls fall and bounce, producing sound. In the Fourier Touch app [20], in addition to its multi-touch interface, the embedded accelerometer lets the user control pitch and volume by tilting the device on the x and y axes. Using screen touch (and dragging), the Sonic Zoom app [21] creates precise sonic changes, and an immersion in many layers of generative sound through its zoom feature, which is attractive and engaging. It can take the user into an exploration of areas of pure electronic sound, with an engaging interface that is far from typical music production knobs [22]. However, in these apps the interaction is led by decisions based on visuals. This is also the case with other visually engaging sound apps, such as Patatap [23], Bloom [24], Dropophone [25], and Soundrop [26].
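As a sketch of the tilt-to-sound mapping described for these apps, the following Python function maps accelerometer readings on the x and y axes to pitch and volume. The value ranges and the mapping itself are illustrative assumptions rather than any app’s actual implementation.

```python
def tilt_to_parameters(accel_x, accel_y, base_freq=220.0, octaves=2.0):
    """Map device tilt (accelerometer x/y, assumed here to lie in -1..1 as a
    fraction of gravity) to pitch and volume, in the spirit of the
    tilt-controlled apps described above; ranges are illustrative."""
    x = max(-1.0, min(1.0, accel_x))
    y = max(-1.0, min(1.0, accel_y))
    freq = base_freq * 2 ** (((x + 1.0) / 2.0) * octaves)  # tilt right = higher pitch
    volume = (y + 1.0) / 2.0                               # tilt forward = louder
    return freq, volume

print(tilt_to_parameters(0.0, 0.0))   # device flat: mid pitch, half volume
print(tilt_to_parameters(1.0, 1.0))   # fully tilted: top of both ranges
```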

 

6. A taxonomy for Listening and Performing with sound apps

After experiencing all the reviewed apps and selecting the most interesting approaches, I propose two taxonomies to locate the apps in terms of listening and performing, acknowledging that the two practices are not separate, and that it is useful to have a categorisation of the elements that play a role in reaching the experience of ‘in-betweeness’. These taxonomies might serve as guides for understanding possible parameters of ‘net locality’ that are specific to the mobile medium, and suggest apps that creatively explore sonic ‘in-betweeness’ in contexts of human migration.

The first taxonomy (Fig. 13) is ‘Listening to in-betweeness’. The horizontal axis represents the domain of ‘net locality’. At the extremes of this axis, I have placed local and distant locations as references to indicate where sound is coming from. I suggest that if the listening experience seems to sit in the middle of the axis, we are approaching the complexity of ‘in-betweeness’ regarding location, as experienced in human migration. This net locality axis is crossed by a vertical axis, which represents the domain of the perceived ‘transmission time’ in the listening experience. At the upper extreme I have located ‘real time’, and at the lower extreme ‘past time’, the latter indicating mainly pre-recorded material. I suggest that if the listening experience seems to sit in the middle of this axis, we might approach a perception of timelessness. For instance, when you feel yourself to be simultaneously present in two different locations (as in the Sound Hailuoto app or in the Arrivals app), the sense of time might be challenged in terms of perception: no past and no present, but a sum of experiences of time, as might occur in the migratory experience [27]. Thus, when an experience with a mobile sound app situates the listener in the middle of the two axes, the app is offering rich and complex approaches to time and space, enabling the experience of ‘in-betweeness’.

Figure 13. Taxonomy ‘Listening to in-betweeness’

For instance, in the Sound Hailuoto app, subtle differences in the perception of real time are important, as the app highlights the timeless feeling created by the combination of a pre-recorded distant context (e.g. rural) and a real-time local context (e.g. urban/enclosed). In the Arrivals app, walking, with its powerful forms of listening to territory, when embodied by a voice, creates a perception of location in which, although the listener knows the sounds are from Kingston, NY, s/he can experience the same route in London, UK. It could be argued, though specific tests would be required to determine this, that the feeling of timelessness might be created by the juxtaposition of the acoustic external space and the space experienced through the headphones. Variations might include actions such as the listener voicing memories of the local place (outside Kingston) where the app is being experienced.
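One way to picture this first taxonomy is as a simple coordinate space, with net locality on the horizontal axis and perceived transmission time on the vertical axis; apps near the crossing point approach ‘in-betweeness’. The Python sketch below places some of the reviewed apps in that space; the coordinates are illustrative readings of the descriptions above, not measurements.

```python
import math

# Listening taxonomy as a coordinate space:
#   x: net locality   (-1 = local sound source, +1 = distant sound source)
#   y: transmission   (-1 = past / pre-recorded, +1 = real time)
# Placements are illustrative readings of the apps discussed, not measurements.
apps = {
    "LocusCast":      ( 0.9,  0.9),   # distant locations, streamed in real time
    "43D":            ( 0.8, -0.8),   # distant soundscapes, pre-recorded
    "Sound Hailuoto": ( 0.2,  0.1),   # pre-recorded island mixed with live local sound
    "Arrivals":       ( 0.1, -0.2),   # distant walk traced onto the local city
    "Shoudio":        (-0.6, -0.5),   # nearby traces, recorded in the past
}

def in_betweeness(x, y):
    """Closer to the crossing of the two axes = closer to 'in-betweeness'."""
    return 1.0 - min(1.0, math.hypot(x, y))

for name, (x, y) in sorted(apps.items(), key=lambda kv: -in_betweeness(*kv[1])):
    print(f"{name:15s} in-betweeness score: {in_betweeness(x, y):.2f}")
```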

The second taxonomy (Fig. 14) is ‘Performing in-betweeness’. This table presents the categories ‘Performing Alone’ and ‘Performing with Others’. The axes include arrows, indicating the movement that performers might establish between the two options during an improvisatory performance. Ideally, movement between the two categories might allow either ‘unselfing’, or a return to solo mode after performing with others. The table is further divided into three rows, indicating the main sources of interaction used in performance: ‘voice’, ‘body and devices’, and ‘visuals’.

Figure 14. Taxonomy ‘Performing in-betweeness’

Performing alone is the more developed feature in the reviewed mobile sound apps. These apps raise questions about voice and identity within a migratory context. The looping feature in the Voice Jam, LoopyBeatbot and Overdub apps suggests that archiving a voice and its immediate interaction with a new voice, with some delay, opens possibilities for experimenting with traces of ‘nomadic voices’. When exploring identity and place, it would be interesting to explore different forms in which the voice travels in real time, for example bounced back to the listener, informed by the environment where the listener is, or by other people’s voices.

The body keeps a memory of place, as has been evident during the Deep Listening work developed with migrants (Alarcón, 2015). Tracking the movement of the body in space, as demonstrated by the AUMI app, is a feature worth exploring together with the perception of physical space in local and distant locations. The precision that sensors bring to bodily motion could be explored when performing; slow movements in particular could expand awareness of sound in space/time. The mobile phone becomes an instrument during the performance; the iPad specifically offers more space for tracking both bodily motion and collective interaction, as is the case with AUMI and Voice Bubbles.

If the user moves between sources of interaction, these might be combined: for instance, the use of voice and body, employing a subtle visual interface. On the other hand, visual interfaces could leave sound to its own role, establishing an interesting dialogue with it rather than a merely functional relationship, suggesting, for instance, animated scores. Engagement with touch seems very relevant if the screen is understood as a ‘limit’ between the two locations, which can be richly explored aesthetically and technically. The possibility of playing with screen space is an interesting metaphor for migration and ‘in-betweeness’ that could stimulate sonically rewarding experiences in an improvisational context, as in a multiplayer space.

 

7. A reflection on Listening and Performing

Creating taxonomies based on a qualitative and technical review for the exploration of the ‘in-between’ sonic space has been helpful for understanding the mobile phone as a medium that differs from Internet-based experiences, and that situates listening and performing within the territory of ‘net locality’.

By combining the proposed taxonomies of listening and performing, it is possible to envision an app in which voice and body are the sources of interaction, and in which the performer moves between solo and collective performance. Using sound from the environment could be based on creating archives of space and of a voice in a particular location, also allowing movement between the perception of time and the perception of space (local or distant).

Envisioning hybrid spaces involving others is still undeveloped in apps that use voice and the sounds of place. Perhaps the medium itself is not yet inviting us to listen and perform as an integrated activity using such sonic material. However, existing technical options could offer different possibilities for understanding voice, body and interfaces in mobile app based performance.

Using listening and performing taxonomies can help us to imagine apps for local and distant interactions that follow the concept of ‘in-betweeness’ in migration, expanding our senses of belonging and place.

 

Acknowledgements

This research has been supported by the research centre CRiSAP (Creative Research into Sound Arts Practice). Collaboration from the app developer Donal O’Brien in the comparative review has been sponsored by the Staff Development Research Fund 2014, at the London College of Communication, University of the Arts London.

 

 


References

Alarcón, Ximena. “Sonic Migrations: Listening In-between, Sensing Place.” In Environmental Sound Artists: In Their Own Words. Edited by Frederick Bianchi and V.J. Manzo. New York: Oxford University Press, 2016.

______. "Telematic Embodiments: Improvising via Internet in the Context of Migration." (including two sound files from 'Migratory Dreams' telematic performance). In Vs. Interpretation: An Anthology on Improvisation Vol.1. Edited by David Rothenberg. Prague: Agosto Foundation, 2015.

_____. "Networked Migrations: Listening To And Performing The In-Between Space”. Liminalities: A Journal Of Performance Studies Vol. 10, No. 1, 2014.

Alarcón, Ximena and O'Brien, Donal. Comparative Review Apps towards an ‘in-between’ performance app. Technical Report. CRiSAP. (Unpublished). 2014. Accessed July 2, 2016. http://ualresearchonline.arts.ac.uk/8415/

Bench, Harmony. "Gestural Choreographies: Embodied Disciplines and Digital Media." In The Oxford Handbook of Mobile Music Studies Vol. 2. Edited by Sumanth Gopinath and Jason Stanyek. New York: Oxford University Press, 2014.

Cáceres, Juan-Pablo and Chafe, Chris. “JackTrip: Under the hood of an engine for network audio.” Journal of New Music Research, 39(3), 2010.

Carôt, Alexander & Werner, Christian. “Network Music Performance – Problems, Approaches and Perspectives”. Paper presented at the “Music in the Global Village” - Conference September 6-8, 2007 Budapest, Hungary. Accessed July 1, 2016. http://www.carot.de/Docs/MITGVACCW.pdf

De Souza e Silva, Adriana. “Location-aware Mobile Technologies: Historical, Social and Spatial Approaches.” Mobile Media & Communication Vol. 1, No. 1, January 2013. Accessed July 1, 2016. http://mmc.sagepub.com

Grinberg, León & Grinberg, Rebeca. Migración y Exilio. Estudio Psicoanalítico. Madrid: Biblioteca Nueva. 1996.

Korhonen Hannu & Koivisto Elina M.I. “Playability Heuristics for Mobile Games.” In MobileHCI’06, September 12–15, 2006, Helsinki, Finland. Accessed July 13, 2016. http://doi.acm.org/10.1145/1152215.1152218.

Madianou, Mirca. “Polymedia communication and mediatized migration: an ethnographic approach”. In Mediatization of Communication. Edited by K. Lundby. pp. 323-348. Berlin: De Gruyter, 2014.

Oliveros Pauline, Miller Leaf, Heyen Jaclyn, Siddall Gillian, and Hazard Sergio. “A Musical Improvisation Interface for People With Severe Physical Disabilities.” In Music and Medicine 3(3) 172-181. 2011.

Oliveros, Pauline. Deep Listening: A Composer’s Sound Practice. Lincoln, NE: iUniverse Books, 2005.

Ortega, Mariana. “Multiplicity, Inbetweeness, and the Question of Assimilation.” In The Southern Journal of Philosophy 46: 65–80, 2008.

Papadomanolaki, Maria. "The Field Reporter." In Soundcamp tabloid publication. Pages 10-11, 2014.

Papastergiadis, Nikos. “The Role of Art in Imagining Multicultural Communities.” In Breaching Borders: Art Migrants and the Metaphor of Waste. Edited by Juliet Steyn, Nadja Stamselberg. London: I.B.Tauris & Co Ltd, 2014.

Schroeder, Franziska. "Network[ed] Listening—Towards a De-Centering of Beings". Contemporary Music Review 32, no. 2-3 (2013): 215-229. Accessed July 13, 2016. doi:10.1080/07494467.2013.775807.

Tanaka, Atau. "Creative Applications of Interactive Mobile Music." In The Oxford Handbook of Mobile Music Studies Vol. 2. Edited by Sumanth Gopinath and Jason Stanyek. New York: Oxford University Press, 2014.

About the Author

Ximena Alarcón, sound artist and Research Fellow at Creative Research into Sound Arts Practice – CRiSAP, LCC, University of the Arts London.

X.Alarcon@lcc.arts.ac.uk

Footnotes

  1. Telematic sonic performances created to date, focusing on the migratory experience, are ‘Letters and Bridges’ (Mexico – Leicester, 2012), ‘Migratory Dreams’ (Bogotá – London, 2012), ‘Tasting Sound Listening to Taste’ (London – Troy, 2013), ‘Bangalore: Aural Transitions’ (Srishti Campuses, Bangalore, 2015), and ‘Suelo Fértil’ [Fertile Soil] (London – Mexico – Austria, 2016). ^
  2. SoundJack, Tube Plug and JackTrip software. TubePlug is a VST plugin, unfortunately no longer distributed and supported, created by Jörg Stelkness http://www.t-u-b-e.de/iplug.htm Accessed July 30, 2013 ^
  3. This project has been developed since 2005 by the research group Locus Sonus in France. http://locusonus.org ^
  4. REVEIL is ‘the first 24 hour radio broadcast of the sounds of daybreak around the world’. It was transmitted during Soundcamp, a listening event over the first weekend in May (2014). REVEIL used the LocusCast app in conjunction with the LiveShout app, a mobile streaming app that allows single or simultaneous multiple-user broadcast and works with Icecast streaming technology. By the time of publication of this paper, the LiveShout app had been updated to include broadcasting and simultaneous listening of up to three streams, as a form of bi-directionality. Supported by the AHRC (Arts and Humanities Research Council), it has been led by Franziska Schroeder and Pedro Rebelo from SARC (Sonic Arts Research Centre) in Belfast, in collaboration with Peter Sinclair from Locus Sonus. ^
  5. http://freesound.org ^
  6. Developed by 43D ^
  7. Developed by Juan Carlos Duarte Regino, from Hai Art. http://www.haiart.net/ ^
  8. For iPhone made by RP Landegent ^
  9. by Woices ^
  10. With the technical production of Paul Cantrell https://itunes.apple.com/gb/app/arrivals-kingston/id534582158?mt=8 (most recent update 26/06/12) ^
  11. Several popular VoIP apps exist today for both the iOS and Android platforms ^
  12. Examples of animated scores are the ones created by Ryan Ross Smith, http://www.youtube.com/user/ryanrosssmith/videos Accessed 21/06/14 Other scores have been created as apps themselves, such as Decibel ScorePlayer developed in Australia by Lindsay Vickery. These used networked possibilities too. https://itunes.apple.com/gb/app/decibel-scoreplayer/id622591851?mt=8 ^
  13. by RD Wong ^
  14. by Kirill Edelman ^
  15. The sensors available in Android phones are motion sensors (including accelerometers, gravity sensors, gyroscopes, and rotational vector sensors), environmental sensors (including barometers, photometers, and thermometers), and position sensors (including orientation sensors and magnetometers). http://developer.android.com/guide/topics/sensors/sensorsoverview.html (Accessed on 17/09/14) ^
  16. The sensors available in iOS devices are the proximity sensor (iPhone), motion sensor/accelerometer (iPhone, iPad), ambient light sensor (iPhone, iPod, iPad), moisture sensor and gyroscope. http://ipod.about.com/od/ipodiphonehardwareterms/qt/iphone-sensors.htm (Accessed on 17/09/14) ^
  17. By Ge Wang ^
  18. By Yonac Inc ^
  19. by Acoustic World ^
  20. by KonakaLab ^
  21. iOS iPad app. PhD project at Queen Mary University of London. By Robert Tubb. Created on 05/08/2013 ^
  22. Currently there are knobs developed for iPad to control Touch OSC, http://www.wired.co.uk/news/archive/2014-07/09/tuna-knobs (Accessed on 14/08/14) ^
  23. by Jono Brandel ^
  24. by Brian Eno and Peter Chilvers ^
  25. by Yosuke Hayashi ^
  26. by Develoe, LLC ^
  27. The migratory process brings moments of confusion for the migrant. León and Rebeca Grinberg (1996), in their psychoanalytic study “Migration and Exile”, state that the migrant experiences an overlapping of cultures, places, languages, and memories when trying to transform the unknown into the familiar. S/he transfers streets and people from the past to the new place, feeling s/he is having a re-encounter with known people in the faces of unknown passers-by. I suggest that in those moments not only spaces but also the perception of time overlaps, bringing situations and places from the past as if they were happening in the present, creating a feeling of timelessness for the migrant. ^
