Gaëtan Parseihian, Sølvi Ystad, Mitsuko Aramaki, Richard Kronland-Martinet: The process of sonification design for guidance tasks

CNRS-LMA, UPR 7051, Aix-Marseille Univ, Centrale Marseille

Fig. 1

Abstract

This article deals with the process of sonification design for guidance tasks. It presents several studies that aim at overcoming two major problems of sonification: the aesthetics of the sound design and the lack of a general method. On the basis of these studies, it proposes guidelines for generalizing the sonification process. First, it introduces the need to dissociate the data and display dimensions; then, it proposes a method to classify and evaluate sound strategies; finally, it introduces a method for customizing the sound design. The whole process is based on the identification and manipulation of particular sound morphologies.

Introduction

During the past few decades, developments in new technologies have led to an expansion of systems aiming to enhance mobility or to guide specific users. Such systems can be found in several areas and are designed for different user categories. In the majority of these cases, the visual modality is either already occupied or absent (e.g. blind people, fire fighters in smoke-filled environments, etc.). It is thus necessary to transmit guidance information through another sensory modality such as haptic or auditory modalities.

Many approaches to using sound to transmit guidance information have been designed, depending on the context. For the vast majority, the aim is to transform visual, cartographic or distal information into a comprehensible form using a sound display. Auditory display of spatial information can be designed in several ways that will affect the effectiveness and efficiency of the system as well as user satisfaction.

This article attempts to conceptualize and to generalize the process of sound design in the case of guidance tasks. It focuses on the design process for guidance information and does not consider technical aspects such as the tracking systems and sensor technologies used to determine the guidance information. Section 2 describes existing methods for the sound display of spatial information in mobility aids and guidance systems. Then, a summary is given of several of our projects and papers published during the last few years that highlight various current issues in sound design for guidance applications (Section 3). Finally, a generic methodology is proposed and guidelines are provided for mapping data to sound dimensions in systems that involve guidance tasks (Section 4).

Different approaches to auditory guidance

Auditory display for the enhancement of guidance tasks can be found in several types of applications. The most evident, assistive technologies for users with visual impairments, aim at addressing common problems encountered by visually impaired people such as avoiding obstacles (Borenstein and Ulrich 1997; Shoval et al. 1998; Jacquet et al. 2006), finding a route (Helal et al. 2001; Holland et al. 2002; Katz et al. 2012; Loomis et al. 1998; Wilson et al. 2007), or finding an object (Bujacz, Skulimowski, and Strumillo 2012). In surgery, assistive devices are commonly used to guide the surgeon’s hand (Wegner 1998; Cho et al. 2014); in medicine, to guide patients with disabilities in their rehabilitation (Basta et al. 2008; Dozza et al. 2004; Scholz et al. 2014); in elite sports, to help guide athletes away from inefficient movements toward correct, efficient ones (Godbout and Boyd 2010; Schaffert et al. 2009), etc.

Depending on the application, one or several types of information should be displayed to the user. In a city travel aid, for example, in addition to providing trajectory information such as orientation and distance, it is also of interest to provide information such as landmarks, points of interest or obstacles. Guidance information can also be displayed in one or several dimensions (in Cartesian or spherical coordinates), which will strongly affect the audio feedback (a distance will not be displayed in the same manner as a 3D position). While the majority of applications only require guidance toward static targets (fixed points), it is also of interest, in some cases, to guide the user toward dynamic targets (e.g. for pursuit-tracking tasks).

All these considerations strongly influence the choice of the mapping between the data dimension (guidance information) and the display dimension (the sound that will guide the user). The following paragraphs attempt to summarize different strategies encountered in the literature used for transforming guidance information into sound.

The representation of data using sound is called sonification (Kramer 1994). Many studies have investigated methods for converting data into sound depending on the type of information. Different sonification techniques have been defined, from auditory icons and earcons, which are brief sounds used to monitor events in user interfaces, to parameter mapping sonification (PMS). The Sonification Handbook (Hermann et al. 2011) provides a good introduction to the various methods to choose between depending on the application. There are many different ways to undertake the design and realization of a sonification.

Recently, in a survey of sonification methods for converting images to sounds in mobility applications (Sanz et al. 2014), the authors proposed classifying the sonification paradigms used for assistive technologies into two main categories, based on the way the correspondence between data and sound is designed. In the first category, “psychoacoustic sonification”, the mapping between data and sound is based on the human perceptual and cognitive capacities of spatial hearing. It uses effects naturally perceived by the hearing system to help locate a source position by virtually rendering that position through stereophony or a Virtual Auditory Display (VAD) (Begault 1994). This involves creating binaural 3D sound, rendered via headphones, that is used as an acoustic beacon to point the user towards a specific direction in space or to guide the user’s movements towards a specific target. Most electronic travel aids for the visually impaired are based on this technique (Bujacz et al. 2012; Katz et al. 2012; Loomis et al. 1998; Wilson et al. 2007). The second category, “artificial sonification”, uses perceptual characteristics of sound, such as pitch, loudness, tempo or brightness, to transmit guidance information to the user. In this category, the auditory display is not related to the physical characteristics or parameters of objects, but is artificially linked to them in order to transmit information. A large number of guidance systems are based on this paradigm. In a number of obstacle detection systems for the visually impaired, for example, the distance to the obstacle is mapped to sound frequency or to musical notes (Jacquet et al. 2006; Shoval et al. 1998).
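
To make the “artificial sonification” idea concrete, the following sketch (in Python, assuming NumPy is available) maps a hypothetical obstacle distance to a note on a musical scale, in the spirit of the obstacle detection systems cited above. The detection range, the scale and the polarity (closer obstacles giving higher notes) are illustrative assumptions, not the mappings used in those systems.

    import numpy as np

    # Illustrative assumptions: range, scale and polarity are not taken from the cited systems.
    MAX_RANGE_M = 5.0                                         # obstacles beyond this range are ignored
    MIDI_SCALE = [60, 62, 64, 67, 69, 72, 74, 76, 79, 81]     # C major pentatonic over two octaves

    def midi_to_hz(midi_note):
        return 440.0 * 2.0 ** ((midi_note - 69) / 12.0)

    def obstacle_note(distance_m):
        """Return the frequency (Hz) of the note sounded for an obstacle at distance_m."""
        d = min(max(distance_m, 0.0), MAX_RANGE_M) / MAX_RANGE_M   # normalize to [0, 1]
        index = int(round((1.0 - d) * (len(MIDI_SCALE) - 1)))      # closer -> higher note
        return midi_to_hz(MIDI_SCALE[index])

    def tone(freq_hz, dur=0.15, sr=44100):
        """Synthesize a short decaying sine burst at freq_hz."""
        t = np.arange(int(sr * dur)) / sr
        return np.exp(-t / (dur / 4)) * np.sin(2 * np.pi * freq_hz * t)

    # Example: an obstacle approaching from 4 m to 0.5 m produces a rising sequence of notes.
    samples = np.concatenate([tone(obstacle_note(d)) for d in (4.0, 2.0, 1.0, 0.5)])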

Both of these mapping categories have advantages and drawbacks. While “psychoacoustic sonification” is considered more intuitive and natural, it is less adapted to situations where high accuracy is needed, owing to poor sound localization abilities in terms of elevation and distance (Blauert 1997). On the other hand, “artificial sonification” has proved its efficiency in many systems, such as car parking aids (a decreasing time interval between impulses provides an accurate perception of distance), but it requires a longer learning process.

In practice, many systems mix solutions from these two categories in order to overcome the problems inherent in each option presented above (Bujacz et al. 2012; Parseihian et al. 2012).

Another interesting aspect is the level of interactivity provided by the assistive device. Using sounds to guide the user in a specific task involves interacting with sounds in control loops. The device should support real-time control of sound generation, and guidance information must be updated rapidly in response to user actions in order to be continuously transformed into sonic feedback. This process is called interactive sonification (Hunt and Hermann 2011). It has been shown to be very effective, for example, in enhancing the performance of the human perceptual system in the field of motor control and motor learning (Effenberg 2005) or in enhancing 3D navigation in virtual environments (Lokki and Grohn 2005). Concretely, when designing sonification for highly specialised domains with high performance requirements, where the users will be selected and trained, the use of continuous sound streams allows full interactivity without disturbing the user. In contrast, the sonification design of devices for casual everyday use (e.g. by blind users) should consider the use of short sounds to avoid disturbing the user during mobility and to decrease the annoyance factor. However, the use of short sounds diminishes the level of interactivity and thus the efficiency of the task, as it breaks the continuous flow of information. Generally speaking, guidance applications based on continuous sounds involve parameter mapping sonification methods (Grond and Berger 2011), while applications based on short sounds involve beacon sounds such as auditory icons or earcons. Although the distinction between sonification methods according to the level of available interactivity has not been considered in the literature, a brief analysis of the methods proposed in each application category reflects its importance. Indeed, the use of continuous sounds is more common in applications such as surgery aids (Hansen et al. 2013), aircraft pilot aids (Brungart and Simpson 2008) or rehabilitation devices (Dozza et al. 2004), while applications such as mobility aids for the visually impaired (where the sonification should not mask real sounds) are mostly designed with short sounds (Loomis et al. 1998; Walker and Lindsay 2006).
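
As a rough sketch of the interactive sonification loop described above (sense, map, sound, repeated at a refresh rate fast enough to feel continuous), the following Python fragment polls a stand-in tracking function and maps the current normalized distance to a pulse rate. The stub, the mapping and the 50 Hz update rate are assumptions made for illustration; they are not part of any of the cited systems.

    import random
    import time

    def read_guidance_distance():
        """Stand-in for the tracking layer: returns the current normalized
        distance to the target in [0, 1] (here simply random, for illustration)."""
        return random.random()

    def update_sonification(distance):
        """Stand-in for the sound engine: maps distance to a pulse rate
        (the closer the target, the faster the pulses; illustrative mapping)."""
        pulse_rate_hz = 1.0 + 9.0 * (1.0 - distance)
        print(f"distance = {distance:.2f} -> pulse rate = {pulse_rate_hz:.1f} Hz")

    if __name__ == "__main__":
        # The control loop: guidance information is re-read and re-sonified
        # continuously so that the auditory feedback follows the user's actions.
        for _ in range(100):
            update_sonification(read_guidance_distance())
            time.sleep(0.02)            # ~50 Hz refresh rate (assumed)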

Case studies

The overview of sound design approaches for guidance aids shows that there is a large number of sonification methods. Despite the promising results obtained with these approaches in laboratory studies, auditory guidance struggles to find its place in commercial applications.

A first explanation could be the discomfort caused by the use of sound. In fact, many users reject these types of aid because of the poor aesthetics of the auditory design, the fatigue caused by the type of sounds generally used, or simply because the sounds do not correspond to their taste. While the effectiveness (task performance) and efficiency (learnability) of auditory guidance have been well investigated, the notion of user satisfaction has been absent from most research on guidance aids.

Another potential explanation could be the lack of a general formalization of the sonification design process that could serve as a tool for the designer who must choose the most appropriate sound strategy for the given context of a device. Indeed, the multiplicity of approaches and sonification techniques, together with the lack of comparative studies, does not facilitate the designer’s choice and leads to case-by-case design of sonification strategies. For each application, all the sonification parameters must be re-evaluated and rethought, thus reducing the portability of these display methods to other situations.

This section details several sonification projects for guidance aids that aim to improve the aesthetics and efficiency of sonification.

Customizable sound strategies

This study was conducted in the context of the ANR project NAVIG (Katz et al. 2012), which aimed at augmenting the autonomy of visually impaired people in their most problematic basic daily actions: far-field navigation (arriving at a destination while avoiding obstacles) and near-field guidance (finding and grasping an object in peripersonal space). The project worked towards providing the user, on the one hand, with spatial information concerning a trajectory, their position, and important landmarks, and, on the other hand, with the identification and localization of an object. The goal was to provide users with the information necessary to construct accurate cognitive maps of the environment, making them more confident during their displacements. Guidance information was provided using spatialized audio rendering with both text-to-speech (indicating street names or landmarks) and non-speech sound (pointing to spatial objects such as path or point-of-interest locations).[1] After discussing considerations related to user needs, the sound design process for guidance information is described here for far-field navigation and for near-field guidance.

In order to design an accessible, pleasant, and ergonomic sonification, user needs were evaluated in terms of user interface outputs, using several questionnaires as well as a creativity session held with visually impaired persons. One of the most important results of these investigations was the number of different ways the users imagined the system’s sounds. Indeed, while some users asked for electronic sounds (such as video game sounds) in order to easily differentiate them from natural ambient sounds, others preferred decontextualized natural sounds (animal, sea, or forest sounds). Given these results, it was not possible to find a consensus on the choice of sounds for the design of a navigation aid. In response to these findings, it was decided to design the sonification using customizable sound strategies.

Far-field navigation

Concerning far-field navigation, experimentation with panels of visually impaired individuals and orientation and mobility instructors permitted the formulation of a list of the information necessary to guide a user along an unknown path. Five important categories of objects were identified (with some categories containing several subcategories, for a total of 15 objects): Itinerary Point (IP), Difficult Point (DP), Landmark (LM), Point Of Interest (POI), and Personal Favourite Point (FP)[2]. During navigation, users are guided toward IP locations while being informed about the other information categories with a VAD (psychoacoustic sonification paradigm). The concept of morphocons was developed in this context to allow the user to rapidly identify and differentiate between each class of objects and to be informed about the subcategories within each class.

Morphocons (morphological earcons) allow the construction of a hierarchical sound grammar based on the temporal variation of several acoustical parameters. With this method, it is possible to apply these morphological variations to any type of sound (natural or artificial) and therefore to construct an unlimited number of sound palettes, while maintaining a certain level of coherence among the objects or messages to be displayed. For the NAVIG project, a semantic sound grammar was established so that each sound could be easily localized (i.e. broad spectrum, sharp attack) and the possibility of confusion between classes was minimized. The semantic sound grammar is described as follows (a minimal synthesis sketch is given after the list):

  • IP: a brief sound.
  • POI: a sound whose frequency increases steadily, followed by a brief sound.
  • FP: a sound whose frequency decreases steadily, followed by a brief sound.
  • DP: a sequence of two brief sounds.
  • LM: a rhythmic pattern of three brief sounds.
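
The sketch below (Python with NumPy) illustrates how such a grammar can be realized: each class is defined purely by a morphology (glide, repetition, rhythm) applied to a short burst, so that the burst can be replaced by any natural, instrumental or electronic sound to build a new palette. The synthetic tone burst and all durations are illustrative assumptions; they are not the sounds used in the NAVIG palettes.

    import numpy as np

    SR = 44100  # sample rate in Hz

    def burst(dur=0.08, f0=880.0, glide_octaves=0.0):
        """A brief sound with a sharp attack; glide_octaves sets the total
        frequency glide over the burst (illustrative source sound)."""
        t = np.arange(int(SR * dur)) / SR
        freq = f0 * 2.0 ** (glide_octaves * t / dur)                # exponential glide
        phase = 2 * np.pi * np.cumsum(freq) / SR
        env = np.minimum(t / 0.005, 1.0) * np.exp(-t / (dur / 3))   # sharp attack, fast decay
        return env * np.sin(phase)

    def silence(dur):
        return np.zeros(int(SR * dur))

    def morphocon(category):
        """Assemble a sound according to the semantic grammar listed above."""
        if category == "IP":   # a single brief sound
            return burst()
        if category == "POI":  # steadily rising glide, then a brief sound
            return np.concatenate([burst(0.25, glide_octaves=+1.0), silence(0.05), burst()])
        if category == "FP":   # steadily falling glide, then a brief sound
            return np.concatenate([burst(0.25, glide_octaves=-1.0), silence(0.05), burst()])
        if category == "DP":   # a sequence of two brief sounds
            return np.concatenate([burst(), silence(0.08), burst()])
        if category == "LM":   # a rhythmic pattern of three brief sounds
            return np.concatenate([burst(), silence(0.05), burst(), silence(0.12), burst()])
        raise ValueError(f"unknown category: {category}")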

This grammar allows for the realization of a variety of sound palettes satisfying individual user preferences in terms of sound aesthetics while maintaining a common semantic language. As such, switching between palettes should not imply any significant change in cognitive load or learning period. Three different sound palettes (natural, instrumental, and electronic) were constructed (see Fig. 1) and perceptually evaluated by 60 subjects with an online classification test. Results showed a good recognition rate for discrimination between the categories (78 ± 22 %) with no difference between sighted and blind subjects.

Near-field guidance

Concerning near-field guidance, the device combined a bio-inspired vision system able to quickly recognize and locate objects with a 3D sound rendering system that mapped a spatialized sound to the location of the targeted object (Parseihian et al. 2012). The use of such systems raised fundamental issues about human abilities to locate and grasp near-field sound sources (< 1.5 m), and about the VAD’s capacity to reproduce the cues necessary for good localization in this area. A preliminary study compared localization and pointing accuracy toward real and virtual sound and visual targets (Parseihian, Jouffrais, and Katz 2014). The results showed a compression of distance perception for real sound sources and no significant distance perception for the VAD. In the context of an auditory guidance system in peripersonal space, and considering the observed limitations, additional cues are necessary to help the user estimate the distance to the auditory target object. The present study therefore explored the influence of adding new acoustic cues for distance perception, implemented through artificial sonification techniques.

To address user constraints, distance sonification was designed as a digital audio effect applicable to any sound. With this concept, the distance is mapped to one or several parameters of the audio effect, and the resulting sound pattern is thus distance dependent. This method allows for the design of several distance metaphors while leaving users the possibility of customizing the actual sounds of the interface. Furthermore, it has the advantage that once the metaphors are understood and learned, users are able to change the sounds without relearning the sonification mapping.

On the basis of this idea, two symbolic distance metaphors were developed. The first was based on the inter-onset interval (rhythm metaphor) and consisted of repeating the stimulus three times and varying the time interval between each repetition as a function of distance (the closer the target, the faster the repetition). The second effect was based on pitch perception (pitch metaphor). It was created using a band-pass filter with a time-sliding central frequency. The initial central frequency of the filter (at the beginning of the sound) was fixed regardless of the distance, and the final central frequency of the filter (at the end of the sound) increased proportionally with distance. With this effect, a noise burst sounds like a noisy chirp whose final frequency depends on the distance (the smaller the distance, the lower the final frequency).
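
A minimal sketch of the rhythm metaphor is given below (Python with NumPy): the stimulus is repeated three times with a silent gap between repetitions that grows with the normalized distance. The gap range is an illustrative assumption, and the pitch metaphor, which requires a band-pass filter with a time-sliding central frequency, is not sketched here.

    import numpy as np

    SR = 44100  # sample rate in Hz

    def rhythm_metaphor(stimulus, distance, min_gap=0.05, max_gap=0.5):
        """Distance-dependent effect applied to an arbitrary stimulus: three
        repetitions separated by a silent gap proportional to the normalized
        distance in [0, 1] (closer target -> faster repetition).
        min_gap and max_gap, in seconds, are illustrative values."""
        gap = np.zeros(int(SR * (min_gap + distance * (max_gap - min_gap))))
        return np.concatenate([stimulus, gap, stimulus, gap, stimulus])

    # Example: a 100 ms noise burst sonified at three distances.
    noise_burst = np.random.uniform(-0.5, 0.5, int(SR * 0.1))
    far, middle, near = (rhythm_metaphor(noise_burst, d) for d in (1.0, 0.5, 0.1))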

The distance metaphors were then evaluated with a sound localization experiment, which underlined their contribution to distance perception compared to a control condition consisting solely of virtual rendering (psychoacoustic sonification). The results (presented in Fig. 2) highlight a significant improvement in distance perception with the artificial sonification. The ability of these two effect metaphors to improve near-field distance perception shows the interest of mixing psychoacoustic and artificial sonification and constitutes an interesting answer to the user acceptance problem.

Fig. 2

Taxonomy of sound strategies for 1D guidance

This study took place within the framework of the ANR project MetaSon. Its purpose was to explore the effect of the “artificial sonification” paradigm on a one-dimensional guidance task. It proposed a classification of several sound strategies based on their ability to convey information and guide the user. For this purpose, a taxonomy of sonification strategies was defined, and several sound strategies were designed and evaluated through a perceptual experiment (Parseihian et al. 2013).

The proposed taxonomy was based on the definition of three morphological categories:

  • Basic strategies: These strategies are based on the variation of basic perceptual sound attributes such as pitch, loudness, tempo or brightness. The sound attribute is directly a function of the distance to the target. Two polarities may be chosen (the attribute is maximal on the target or minimal on the target). These strategies are constrained by human perceptual limits, and the maximum reachable precision is bounded by the just-noticeable difference of the attribute.
  • Strategies with reference: The idea here is to include a sound reference corresponding to the target. Using this reference, the user should be able (by comparing the varying parameter to the reference) to evaluate the distance to the target without exploring the whole space. In the case where the parameter is pitch, adding a reference tone will produce modulations (if the frequencies are close) near the target and a continuous sound on the target. It is also possible to use an implicit perceptual reference such as the inharmonicity (the sound is harmonic only on the target) or the roughness (there is no roughness on the target).
  • Strategies with reference and “zoom effect”: In order to increase the precision around the target and to reduce the time needed to identify the target, it is possible to enhance the “strategies with reference” concept by adding a “zoom effect”. This zoom consists in duplicating the strategy in different frequency bands. For example, in the case of the pitch with reference, rather than constructing the strategy with a pure tone, the use of a harmonic sound with several frequencies will create different modulations for different frequencies.

Fig. 3

Fig. 3 shows the spectrograms of three sound examples illustrating the categories of strategies. For each strategy, the figure highlights the spectral evolution of the sound as a function of the normalized distance (a distance equal to one corresponds to the maximum distance and a distance equal to zero corresponds to the target position).
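
As an illustration of the “strategies with reference” category, the following sketch (Python with NumPy) mixes a fixed reference tone, heard alone on the target, with a moving tone whose frequency depends on the normalized distance: far from the target the two tones are clearly separated, near the target they produce slow beats, and on the target a single steady tone remains. The reference frequency and the maximum detuning are illustrative assumptions, not the values used in the experiment described below.

    import numpy as np

    SR = 44100          # sample rate in Hz
    F_REF = 440.0       # reference frequency heard on the target (assumed)
    MAX_DETUNE = 400.0  # detuning in Hz at the maximum distance (assumed)

    def pitch_with_reference(distance, dur=0.5):
        """One frame of a 'pitch with reference' strategy for a normalized
        distance in [0, 1]: fixed reference tone plus distance-dependent tone."""
        t = np.arange(int(SR * dur)) / SR
        reference = np.sin(2 * np.pi * F_REF * t)
        moving = np.sin(2 * np.pi * (F_REF + distance * MAX_DETUNE) * t)
        return 0.5 * (reference + moving)

    # A "zoom effect" (third category) would duplicate this construction in
    # several frequency bands, e.g. on the harmonics of the reference tone.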

An experiment was designed to evaluate the strategies’ efficiency in guiding a user toward a sound target in a one-dimensional space with one polarity. Its aim was to quantitatively assess potential behavioural differences induced by the different strategy categories. For that purpose, nine sound strategies were created. The results showed the influence of the sound strategies on guidance behaviour. While some sound strategies enable the user to reach the targets quickly, other strategies result in better precision or more direct guidance to the target. For example, subjects were more precise with strategies with reference and “zoom effect” than with basic strategies, and they were slower with strategies with reference.

Furthermore, this study highlighted the importance of a rigorous comparison of sound strategies by precisely defining the guidance task (find the target: precisely, quickly, or without overshooting it). It provides relevant information for predicting user behaviour for a chosen sound strategy and constitutes a first step toward general guidelines for mapping data to auditory display dimensions.

Toward guidelines for sonification design in the case of guidance tasks

This section provides some general guidelines for sonification design in the case of guidance tasks. A generalization of the sonification process requires, on the one hand, that the process be sufficiently abstract to be applied to different applications. On the other hand, it requires a good knowledge of the perceptual influence of all the available auditory parameters in order to predict the efficiency of the task and thereby help the designer choose the most appropriate sound strategy for the application in question. Finally, it seems important (as highlighted in Section 3) to design sound strategies based on the evolution of sound morphology rather than on the evolution of a specific sound, in order to be able to apply these morphological evolutions to different sounds. This section begins by providing guidelines for dissociating the data dimensions from the auditory dimensions, then proposes an overview of future work on the evaluation of performance, and finally details the process of sonification customization based on the use of sound morphologies.

Dissociating the data dimension from the auditory dimension

To achieve an abstraction of the data-to-display mapping, we first introduce the concept of “target”, so that the data is no longer an application-dependent dimension directly mapped to the audio parameter, but corresponds to an abstract “distance” between a current and a target data value. This concept involves the definition of specific data values considered as targets, which may change over time. For example, in applications that use sound as a status and progress indicator of the evolution of a dynamic system, the target(s) may correspond to one (or several) requested system state(s) between which the user (or any process) is evolving, and the information to display corresponds to the distance to these targets. As an example, in a driving aid application, the targets may represent the optimal speed and the optimal steering wheel angle for the road section on which the user is located. The system will then give information on the velocity difference to be applied to reach the optimal speed and on the angle difference to be applied by the driver to reach the optimal steering wheel position. The data to display will then be the distance between the current and target speed on the one hand, and the distance between the current and target steering wheel angle on the other hand.

One can wonder whether it is possible to define, from a laboratory experiment in which the user moves within a few centimeters on a pen tablet, a scale-independent strategy that remains valid at other scales, for instance for the displacement of a walking person or the subtle movement of a car steering wheel within a few degrees. In these examples the variables do not belong to the same scale and lead to different scaling functions for the sound mapping. In order to overcome this problem, we propose a normalization of the data (by the maximum data value), enabling applications at different scales to be addressed with the same sound strategies and thus simplifying the mapping choice. The sonified data therefore no longer relate to a physical dimension and are always dimensionless. The normalized distance varies between 1 (maximum distance to the target) and 0 (the user is on the target).
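
A minimal sketch of this abstraction, using the driving aid example above, is given below: each sonified variable is reduced to a dimensionless distance in [0, 1] between its current and target values. The numerical values used for the targets and for the normalization are illustrative assumptions.

    def normalized_distance(current, target, max_distance):
        """Dimensionless distance to the target, clipped to [0, 1]:
        1 = maximum distance to the target, 0 = on the target."""
        return min(abs(current - target) / max_distance, 1.0)

    # Driving aid example (illustrative values): the same function abstracts
    # two variables with very different physical scales.
    speed_dist = normalized_distance(current=110.0, target=90.0, max_distance=50.0)  # km/h
    angle_dist = normalized_distance(current=2.0, target=5.0, max_distance=10.0)     # degrees
    # Both distances are dimensionless and can be fed to the same sound strategy.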

The abstraction process is thus based on the definition of one (or several) target(s) and on the calculation of the normalized distance between the current and the target value of the data to be sonified. With such a process, it seems possible to analyse and compare sound strategies independently of the application.

Effectiveness of the sound guidance

As mentioned in the previous sections, several sound parameters may be used to convey guidance information. By using the notion of “target”, any of these parameters can be used without concern about the “right” association between auditory and display dimensions. Hence, any sound parameter can represent any display dimension, but not all parameters will provide the same level of information, which may affect both the precision and the duration of the guidance task. In order to characterize the ability of the sound parameters to convey information and guide the user, it is important to understand how their morphology impacts the guidance task. Indeed, the aims of the sonification when guiding a user toward a target can be multiple:

  • to guide as precisely as possible;
  • to guide as quickly as possible;
  • to guide without passing the target.

These guidance behaviours are closely linked to design choices, as some acoustic properties may mainly affect speed and others precision. To take these guidance behaviours into account, it seems important first to propose taxonomies of sound strategies and then to evaluate them experimentally with respect to these three criteria. Such results may lead to a classification of sound strategies in terms of precision, rapidity and overshooting, and thus to the creation of a guidance strategy space (reflecting the relation between sound morphologies and perceptual evaluations) assisting the designer in the choice of a sound strategy.

Considering the different purposes of guidance applications, several taxonomies must be created that take into account the sonification type (psychoacoustic or artificial) and the number of dimensions involved in the guidance task. Indeed, some sound strategies may induce precise guidance in a one-dimensional task but be disruptive when combined with another sound strategy for guidance in a two-dimensional task. The taxonomy introduced in Section 3 constitutes a good start for one-dimensional tasks, but it must be precisely evaluated according to the three previously mentioned guidance behaviours. Furthermore, this taxonomy must be extended to two- and three-dimensional guidance tasks by taking into account the perceptual effect of combining several audio streams.

Sound strategies based on morphological attributes

A common reproach made by end-users is the lack of user-friendliness and aesthetic appeal of sonification. Most studies use simple sounds (noises or tones) that can be irritating in daily use. In order to design accessible, pleasant, and ergonomic sound strategies, we suggest giving the user the possibility to customize the sound strategies. As reflected in Section 3, there are different sonification methods that allow the user to customize the sound display. For all these methods, design strategies based on morphological attributes of the sound are proposed. With such a concept, the information is conveyed by the evolution of the sound rather than by the sound itself. The main advantage of this process is that once the sound strategy is understood and learned, the sounds can be changed without relearning the mapping between data and sound. Furthermore, any sound parameter that carries a particular morphology can be applied either as an effect to real sounds or as a control parameter in sound synthesis, thus creating an unlimited number of available sounds for each strategy.
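
As a final sketch of this idea, the fragment below (Python with NumPy) applies one possible distance-driven morphology, a tremolo whose rate increases as the target gets closer, to an arbitrary carrier sound; the carrier can be swapped for any recorded or synthesized signal without changing the mapping. The choice of a tremolo, its rate range and its depth are illustrative assumptions.

    import numpy as np

    SR = 44100  # sample rate in Hz

    def distance_tremolo(carrier, distance, min_rate=2.0, max_rate=12.0, depth=0.8):
        """Apply a distance-driven morphology (amplitude modulation whose rate
        grows as the normalized distance decreases) to any mono carrier signal.
        The rate range and depth are illustrative values."""
        rate = min_rate + (1.0 - distance) * (max_rate - min_rate)
        t = np.arange(len(carrier)) / SR
        lfo = 1.0 - depth * 0.5 * (1.0 + np.sin(2 * np.pi * rate * t))
        return carrier * lfo

    # The same strategy applied to two different carriers: a tone and a noise band.
    t = np.arange(SR) / SR
    tone_carrier = np.sin(2 * np.pi * 330.0 * t)
    noise_carrier = np.random.uniform(-0.5, 0.5, SR)
    near_tone = distance_tremolo(tone_carrier, distance=0.1)
    near_noise = distance_tremolo(noise_carrier, distance=0.1)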

Conclusion

This article discusses several ways of using sound displays to transmit guidance information. It highlights the high degree of suitability of sonification for guidance, but also some difficulties related to generalizing laboratory tests to industrial devices. We propose some guidelines for the generalization of sonification design in the case of guidance tasks. First, we introduced the need to dissociate the data dimension from the display dimension in order to analyse the effectiveness of sound strategies independently of the application. Then, we proposed an evaluation and comparison of the sound strategies for generic tasks with different numbers of dimensions (1D, 2D, and 3D). This evaluation goes hand in hand with the definition of sound taxonomies for each situation. With such results it will be possible to create an efficient design space that will allow the sound designer to choose the best sound strategy for a given application. The chosen sound strategy can then be applied as a sound morphology to any type of sound in order to favour customizable devices.

References

Basta, D., F. Singbartl, I. Todt, A. Clarke, and A. Ernst. 2008. “Vestibular Rehabilitation by Auditory Feedback in Otolith Disorders.” Gait & Posture 28 (3): 397–404. doi:10.1016/j.gaitpost.2008.01.006.

Begault, D.R. 1994. 3-D Sound for Virtual Reality and Multimedia. Cambridge: Academic Press.

Blauert, J. 1997. Spatial Hearing: The Psychophysics of Human Sound Localization. Cambridge: MIT Press.

Borenstein, J., and I. Ulrich. 1997. “The GuideCane – A Computerized Travel Aid for the Active Guidance of Blind Pedestrians.” In IEEE International Conference on Robotics and Automation.

Brungart, D. S., and B. Simpson. 2008. “Design, Validation, and In-Flight Evaluation of an Auditory Attitude Indicator Based on Pilot-Selected Music.” In Proceedings of the International Conference on Auditory Display (ICAD2008). Paris, France.

Bujacz, M., P. Skulimowski, and P. Strumillo. 2012. “Naviton—A Prototype Mobility Aid for Auditory Presentation of Three-Dimensional Scenes to the Visually Impaired.” Journal of the Audio Engineering Society 60 (9): 696–708.

Cho, B., N. Matsumoto, S. Komune, and M. Hashizume. 2014. “Surgical Navigation System for Guiding Exact Cochleostomy Using Auditory Feedback: A Clinical Feasibility Study.” BioMed Research International, 2014. http://dx.doi.org/10.1155/2014/769659

Dozza, M., L. Chiari, and F. Horak. 2004. “A Portable Audio-Biofeedback System to Improve Postural Control.” In Engineering in Medicine and Biology Society, 2004. IEMBS’04. 26th Annual International Conference of the IEEE, 2:4799–4802. IEEE.

Effenberg, A.O. 2005. “Movement Sonification: Effects on Perception and Action.” IEEE MultiMedia 12 (2).

Godbout, A., and J. E. Boyd. 2010. “Corrective Sonic Feedback for Speed Skating: A Case Study.” In Proceedings of the 16th International Conference on Auditory Display.

Grond, F., and J. Berger. 2011. “Parameter Mapping Sonification.” In The Sonification Handbook, edited by T. Hermann, A. Hunt, and J. G. Neuhoff. Logos Publishing House.

Hansen, C., D. Black, C. Lange, F. Rieber, W. Lamadé, M. Donati, K.J. Oldhafer, and H.K. Hahn. 2013. “Auditory Support for Resection Guidance in Navigated Liver Surgery.” The International Journal of Medical Robotics+ Computer Assisted Surgery: MRCAS 9 (1): 36.

Helal, A., S.E. Moore, and B. Ramachandran. 2001. “Drishti: An Integrated Navigation System for Visually Impaired and Disabled.” In Proceedings of the 5th IEEE International Symposium on Wearable Computers, 149. ISWC ’01. Washington, DC, USA: IEEE Computer Society. doi:http://doi.ieeecomputersociety.org/10.1109/ISWC.2001.962119.

Hermann, T., A. Hunt, and J. Neuhoff. 2011. The Sonification Handbook. Berlin: Logos Verlag.

Holland, S., D.R. Morse, and H. Gedenryd. 2002. “Audio GPS: Spatial Audio Navigation with a Minimal Attention Interface.” Personal and Ubiquitous Comput. 6 (4): 253–59.

Hunt, A., and T. Hermann. 2011. “Interactive Sonification.” In The Sonification Handbook, edited by T. Hermann, A. Hunt, and J. G. Neuhoff. Logos Publishing House.

Jacquet, C., Y. Bellik, and Y. Bourda. 2006. “Electronic Locomotion Aids for the Blind: Towards More Assistive Systems.” In Intelligent Paradigms for Assistive and Preventive Healthcare, Studies in Computational Intelligence.

Katz, B.F.G., S. Kammoun, G. Parseihian, O. Gutierrez, A. Brilhault, M. Auvray, P. Truillet, M. Denis, S. Thorpe, and C. Jouffrais. 2012. “NAVIG: Augmented Reality Guidance System for the Visually Impaired. Combining Object Localization, GNSS, and Spatial Audio.” Virtual Reality 16 (3).

Kramer, G. 1994. Auditory Display: Sonification, Audification and Auditory Interfaces. Perseus Publishing.

Lokki, T., and M. Grohn. 2005. “Navigation with Auditory Cues in a Virtual Environment.” IEEE MultiMedia 12 (2).

Loomis, J.M., R.G. Golledge, and R.L. Klatzky. 1998. “Navigation System for the Blind: Auditory Display Modes and Guidance.” Presence: Teleoper. Virtual Environ. 7: 193–203. doi:10.1.1.19.8438.

Parseihian, G., S. Conan, and B.F.G. Katz. 2012. “Sound Effect Metaphors for near Field Distance Sonification.” In Proceedings of the 18th International Conference on Auditory Display (ICAD 2012).

Parseihian, G., C. Gondre, M. Aramaki, R. Kronland-Martinet, and S. Ystad. 2013. “Exploring the Usability of Sound Strategies for Guiding Task: Toward a Generalization of Sonification Design.” In Proceedings of the 10th International Symposium on Computer Music Multidisciplinary Research, Marseille, France, 15-18 Oct. 2013.

Parseihian, G., C. Jouffrais, and B.F.G. Katz. 2014. “Reaching Nearby Sources: Comparison between Real and Virtual Sound and Visual Targets.” Front. Neurosci. 8 (269).

Parseihian, G., and B.F.G. Katz. 2012. “Morphocons: A New Sonification Concept Based on Morphological Earcons.” Journal of the Audio Engineering Society 60 (6): 409–18.

Sanz, P., B. Mezcua, J. Pena, and B. Walker. 2014. “Scenes and Images into Sounds: A Taxonomy of Image Sonification Methods for Mobility Applications.” Journal of the Audio Engineering Society 62 (3): 161–71.

Schaffert, N., K. Mattes, S. Barrass, and A. O. Effenberg. 2009. “Exploring Function and Aesthetics in Sonifications for Elite Sports.” In Proceedings of the 2nd International Conference on Music Communication Science (ICoMCS2), 83–86.

Scholz, D., L. Wu, J. Pirzer, J. Schneider, J. Rollnik, M. Grossback, and E. Altenmuller. 2014. “Sonification as a Possible Stroke Rehabilitation Strategy.” Front. Neurosci. 8 (332).

Shoval, S., J. Borenstein, and Y. Koren. 1998. “The NavBelt – A Computerized Travel Aid for the Blind Based on Mobile Robotics Technology.” IEEE Transactions on Biomedical Engineering 45 (11): 1376–86.

Walker, B.N., and J. Lindsay. 2006. “Navigation Performance With a Virtual Auditory Display: Effects of Beacon Sound, Capture Radius, and Practice.” Human Factors: The Journal of the Human Factors and Ergonomics Society Summer 48 (2): 265–78.

Wegner, Kristen. 1998. “Surgical Navigation System and Method Using Audio Feedback.” Proceedings of ICAD’98 6.

Wilson, J., B.N. Walker, J. Lindsay, C. Cambias, and F. Dellaert. 2007. “SWAN: System for Wearable Audio Navigation.” In Proceedings of the 2007 11th IEEE International Symposium on Wearable Computers, 1–8. Washington, DC.

Footnotes

[1] A detailed overview of the NAVIG project can be found in (Katz et al. 2012).
[2] A detailed description of these categories can be found in (Parseihian and Katz 2012).

Gaëtan Parseihian received a Ph.D. degree from the Université Pierre et Marie Curie (Paris VI), France, in acoustics, signal processing and computer science applied to music. He completed his Ph.D. on binaural sonification for navigation aid under the supervision of Brian FG Katz at CNRS-LIMSI, in Orsay, France. He currently holds a post-doc position in the COSMOS team at CNRS-LMA, and works with Mitsuko Aramaki, Richard Kronland-Martinet and Sølvi Ystad on the MetaSon project. He is particularly interested in the design of generic sonification methods as well as the means to improve sonification aesthetics. His main research interests include sonification, auditory guidance, 3D sound, spatial perception, human computer interaction and augmented reality.
