The UCD process should be an iterative cycle of research, design, evaluation and monitoring after release (International Standards Organisation, 2009). This process is applied to many different types of products within the mobile space, including mobile phones, network providers’ websites, and the packaging of mobile devices. Having the right method available to gain the right insights at different phases is vital to successful product development. In practice, using the right method can prove problematic due to the complexity of the mobile research space, restrictions imposed by the client, the client’s lack of knowledge of methods, and issues around usability and accessibility.
The purpose of this workshop, on mobile user experience, was to bring together people from industry and academia to exchange methods and experiences related to observing mobile device UX. Therefore, in this paper we briefly present specific applied examples of observing the mobile user experience in practice. Over the past two years we have been involved in many activities that provide insight into the relative merits of various methods and highlight the barriers to design. We discuss barriers particular to practitioners in the design of quality mobile experiences. We also present brief examples of applied research during the four phases of UCD (formative research, conceptual design, evaluation and post purchase). Due to non-disclosure agreements we are unable to discuss particulars of the specific clients, but we do discuss the benefits and constraints of our methods. These insights are based on our activities as practitioners in a User Experience agency over the past few years.
Barriers to Mobile Research
Complexity of Mobile Space
One of the key factors to researching and designing quality mobile user experiences is understanding the complexity of the space and the multiple factors that need considering. To design quality products we need to consider:
- the variety of users (e.g. able bodied and disabled)
- the hardware (e.g. screen size and type, button placement)
- the software (e.g. proprietary, open source)
- the content (e.g. websites, applications)
- the network provider (e.g. coverage, costs)
- the network type and speed (e.g. GSM/CDMA, 2G/3G, Wi-Fi)
- contextual issues (e.g. lighting, glare, noise)
- functionality (e.g. storage capacity)
As well as the complexity of the space itself, one significant barrier to designing mobile user experiences is that as practitioners we rarely get to explore the entire space within one project. However, the variety of projects we do get involved with provides an over-arching view of which methods work best for research, design and evaluation under these conditions.
A significant barrier to designing quality mobile user experiences is the relationship between clients and practitioners. A client’s location, short time frames, tight budgets, need for secrecy and lack of UCD knowledge can all have negative influences. If clients are based in Asia with a large part of their market in Europe, then methods need refining to accommodate their location. Multicultural considerations, such as the use of rating scales and communication within the design team, also need attention. Decision-makers are often not the team members we see, making it difficult to influence design decisions. Bound by non-disclosure agreements, we are also severely limited in our use of case studies, which in turn restricts knowledge sharing. Clients also do not always understand or want the most appropriate method. For example, clients may ask for focus groups so they can see sixteen people in one day, when in-situ observation of three people in one day would provide much better data. Improving the client relationship by delivering quality results is often the only way to ensure the best methods are used.
In our research, one thing is apparent in almost all projects: basic usability is often overlooked in the design of mobile devices, the content for these devices, and the supporting websites and collateral that accompany them. While products such as the iPhone cash in on intuitive interaction because the actions are familiar, content is often poorly designed and guidelines are ignored. Design focus is also often on functionality rather than usability, and consumers seem willing to overlook usability issues because of the functionality. Yet the basics are neglected: noise interference, lighting and glare, poor use of screen real estate, handset ergonomics, web content built with absolute values, connection speed, and the failure to design mobile-specific sites.
As well as usability, access for all seems to be almost completely ignored. Mobile devices are difficult to use in a variety of different contexts and these factors are often overlooked. As mobile devices are used in ever more varied locations, manufacturers and content developers need to consider access: for example, glare on screens, operating handsets in noisy environments or in cold climates, where users resort to cocktail sausages as styluses for touch interfaces rather than taking their hands out of gloves. Many handset manufacturers seem to still be missing the point, designing separate handsets for different demographics; designing Fisher Price style phones for older adults is neither respectful nor tasteful. These oversights offer a huge space for improvement and gaining market share if clients are willing to spend the time and money.
User-centered design follows an iterative pattern of research, design, evaluation and release as illustrated in Figure 1. Clients require us to become involved in research at various phases of the design life cycle for different projects. While ideally we would be involved throughout the life cycle, as agency practitioners we are often brought in for one phase or another rather than end to end. Here we present a variety of research methods that we have used and experiences we have had when conducting user research.
Formative research is necessary to gain insight into the needs and desires of the target market and to gain greater understanding of the context in which products will be used (International Standards Organisation, 2009). While formative research is highly valued in UCD for determining user requirements and setting release criteria, in practice it is rarer than we would hope, as clients often mistakenly believe they already have sufficient insight to design their products. However, over the past two years when we have been involved in formative research we have used a variety of methods. These are mainly ‘in the wild’ methods with real consumers to gain insights into their behavior and context of use. For formative research we would strongly encourage research in naturalistic settings or environments; however, time and client needs often mean that this is not possible.
In one study for a mobile phone manufacturer exploring music consumption behavior on the move, observation was a key method used. The practitioner and one of the client design team observed participants in various settings including record stores, commuting on public transport, hanging out at home and university. The observation involved shadowing and a follow-up interview after the session. The observation was augmented with participants completing cultural probe (Gaver, Dunne, and Pacenti, 1999) type activities such as photographing significant moments influenced by music. The primary focus of this research was on contextual and behavioral aspects rather than the fine detail of the interaction with mobile devices making these ethnographic methods ideal.
During another study, conducted to better understand blind mobile phone users’ needs, we used several other ethnographic techniques. We used an electronic diary study, which blind consumers found difficult to complete, largely due to the time commitment involved, a common complaint with diary studies. Participants mentioned that they would have preferred to use a Dictaphone to record their thoughts and activities.
Another tool used was a form of experience sampling method (ESM) (Larson, & Csikszentmihalyi, 1983). At various points over a two-week period participants were sent a text message asking them to perform a simple task using the internet on their mobile phone, for example to find a book on Amazon, and then to return a text message with details of how they got on. This technique was very successful with participants finding it much easier to respond immediately via text message rather than having to remember to note activities in a diary later.
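The mechanics of this kind of SMS-based experience sampling are straightforward to sketch. The following is a minimal illustration, assuming a two-week study with one randomized daytime prompt per day; the task list, function names and dates are ours for illustration, not part of the actual study tooling.

```python
import random
from datetime import datetime, time, timedelta

# Illustrative task prompts, in the spirit of "find a book on Amazon".
TASKS = [
    "Find a book on Amazon using your phone's browser",
    "Look up tomorrow's weather for your city",
    "Find the departure time of your next train",
]

def daily_prompt_times(start, days, earliest=time(9), latest=time(20)):
    """Return one random prompt datetime per study day, within a daytime window."""
    window_minutes = (latest.hour - earliest.hour) * 60
    times = []
    for d in range(days):
        offset = random.randrange(window_minutes)
        times.append(datetime.combine(start + timedelta(days=d), earliest)
                     + timedelta(minutes=offset))
    return times

# Hypothetical two-week schedule: each entry pairs a send time with a task
# that would be texted to the participant.
schedule = daily_prompt_times(datetime(2010, 5, 3).date(), days=14)
prompts = [(t, random.choice(TASKS)) for t in schedule]
```

The randomization matters: fixed prompt times would let participants anticipate the task, losing the ‘in the moment’ quality that makes ESM responses more reliable than retrospective diary entries.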
Over the years we have also conducted numerous one-to-one interviews about mobile use, either in participants’ homes, neutral locations such as cafés, or in the System Concepts’ labs. One-to-one interviews on location are particularly useful as they illustrate contextual issues around mobile use. For example, we would not see the difficulties experienced by a blind mobile phone user with their Talks software (Nuance Communications, 2010) when it is used in a noisy environment, such as a café at a train station, if the interview was in the lab. In this environment Talks users hold the handset up to their ear to listen, then move the phone down to press keys. In a quiet lab setting, users would not need to put the phone to their ear, as the phone would be audible.
In addition, the banter and level of connection with someone in their own home is more relaxed than in a lab setting, making it easier to discover more personal facts. In the session shown in Figure 2, we spent several hours learning about this man, his mobile and the environment he uses it in, but we also learned about his hobbies and life.
Despite the major benefits of research done on location, interviews in the lab do make it much easier to set up recording equipment, which we discuss further in Section 3.3. Another problem with ‘in the wild’ research is that travelling to observe people takes longer and involves more transport costs. It is also necessary to have one person facilitate the discussions and another to record the sessions, increasing the resources required and the possible intrusion. Relying on self-reporting techniques, such as diary studies, can be problematic in general, but even more so for mobile research, as the behaviors of interest are often undertaken on the move when pen and paper are not handy. It is therefore necessary to find ways of making the recording of events simpler for participants. What is clear from the formative research we have conducted over the years is that the location and method used depend greatly on the objectives of the research. If clients want the data recorded in great detail for later viewing, then ‘in the wild’ research is more problematic. However, if high-level qualitative findings about context and more general behaviors are more important, then ‘in the wild’ observation provides a richer picture.
Once consumer insight and user requirements have been gained from the formative research, conceptual design follows. While designing the product is the focus of this phase, testing conceptual designs with potential consumers and comparing them to design guidelines can help ensure the success of a product. Clients can involve us in this phase as independent researchers to assess other people’s designs or as consultants helping designers using insights gained in previous research, best practice knowledge and applying guidelines.
During one successful design consultancy project, we worked alongside a large online content producer who was adapting their online offering to mobile-specific sites for a variety of handsets. They required consultation regarding the usability and accessibility of the sites on various handsets. We conducted expert reviews of preliminary designs using guidelines (Chandler, Dixon, Pereira, Kokkinaki, & Roe, 2005; Rabin, & McCathieNevile, 2008) and heuristics (Nielsen, 1994). Once designs were coded they were evaluated and changes made. The designers were willing to learn as much as possible, and we facilitated this through awareness training and by allowing the designers to shadow us during the expert reviews. During this research we used a variety of handsets to test the different designs, but it was not possible to consider all the variables listed in Section 2.1.
In many situations it is clear that mobile web content producers and practitioners are unaware of the guidelines that are available and the restrictions of mobile design. During a recent event held by the UK Chapter of the Usability Professionals’ Association (UK UPA) on mobile design, it emerged that few practitioners used the W3C guidelines (Rabin, & McCathieNevile, 2008) and none had used the RNIB guidelines (Chandler, Dixon, Pereira, Kokkinaki, & Roe, 2005) when consulting. This is in part because guidelines are too specific and do not consider the interaction of the different factors listed in Section 2.1. In addition, few clients encourage the use of guidelines, preferring to look for innovation rather than solid design patterns. However, if clients can be convinced to involve practitioners who are aware of the guidelines and who can advise on appropriate methods, then better designs can result.
In another recent instance, a mobile manufacturer designing a new mobile phone content browser wanted to explore how to present photo and video content. Following the Rapid Iterative Testing and Evaluation (RITE) technique (Medlock, Wixon, McGee, & Welsh, 2005) for early design concept testing, we used low fidelity paper prototypes in the lab. We alternated between a day of testing and a day of workshops with the client to iterate the designs. In the final round of research we used non-interactive prototypes of the visual design to assess branding and emotion. However, it was clear that participants were happier to criticize roughly sketched designs than what appeared to be higher fidelity prototypes; with the latter they were often distracted by the detail or the specific content.
Encouraging clients to use RITE is a massive victory for practitioners and one that includes consumers early in the design process rather than just for a final evaluation. It is also clear that paper prototypes are much easier to change than higher fidelity prototypes and participants are more willing to criticize them.
Once the conceptual design has been firmly established and higher fidelity prototypes are available evaluation against release criteria is often required by clients. This type of evaluation is usually to confirm that there are no major problems prior to release. Unfortunately clients often only bring practitioners in at this point to say they have considered usability rather than actually considering the user throughout the design life cycle. This often means that poor design decisions cannot be undone.
During a comparative study of a new proposition operating system against the Android and Apple operating systems, we used basic usability metrics to evaluate the products. In this comparative evaluation brand loyalty was a control variable, with the focus on the usability of the new proposition operating system. The research was done in the lab because there were a variety of tasks to cover with multiple handsets, and it would not have been feasible to conduct this research ‘in the wild’. Participants did not use their own phones, the tasks were contrived, and not all functionality was available due to the prototype. However, client viewing of the evaluation was vital, and the large number of participants tested was better accommodated in the lab.
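To make concrete what we mean by basic usability metrics, the sketch below aggregates task completion rate and mean time on task per operating system. The record structure, field names and figures are illustrative, assuming one row per participant-task session; they are not the client’s data.

```python
from statistics import mean

# Illustrative session records: one row per participant per task.
sessions = [
    {"os": "NewOS",   "task": "send_photo", "success": True,  "seconds": 42},
    {"os": "NewOS",   "task": "send_photo", "success": False, "seconds": 90},
    {"os": "Android", "task": "send_photo", "success": True,  "seconds": 35},
    {"os": "iOS",     "task": "send_photo", "success": True,  "seconds": 30},
]

def metrics_by_os(records):
    """Task completion rate and mean time on task, grouped by operating system."""
    out = {}
    for os_name in {r["os"] for r in records}:
        subset = [r for r in records if r["os"] == os_name]
        out[os_name] = {
            "completion_rate": mean(1.0 if r["success"] else 0.0 for r in subset),
            "mean_seconds": mean(r["seconds"] for r in subset),
        }
    return out

results = metrics_by_os(sessions)
```

Even simple aggregates like these give clients a directly comparable picture across operating systems, which is what a release-criteria evaluation needs.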
One evaluation method that we have found extremely useful is automatic data logging of behavior. During the evaluation of a media player application, software installed on participants’ phones recorded their activity with the phone and with the media player. In comparison to other diary studies we have conducted, it is clear that automated data recording works better, as it does not rely on users remembering to provide information.
Figure 3: Recording camera attached to device and the output of the remote high-zoom camera
The recording equipment used for evaluations (and research in general) is another issue to consider. ‘In the wild’ it is important to capture behavior as naturally as possible; in the lab it is often important for the client to have control to focus on the aspects they see as important. We have three different camera set-ups that we use in different situations, detailed in Table 1. We are lucky to have a ‘bespoke’ camera that attaches to the phone, which records the interaction with the device far more naturally than solutions that require the phone to be fixed in place. However, it does not record facial reactions, comments or contextual issues. Figure 3 shows the attached camera and the output from a remote zoom camera.
|Camera Type||Freedom of participant movement||Lack of intrusion for participant||Client viewing experience|
|Attached to device|| || || |
|Suspended on tripod|| || || |
Table 1: Ranking of different cameras for viewing and recording mobile device interaction.
Recently a group of UCD practitioners gathered for a UK UPA event. They ranged from freelance consultants specializing in mobile through to in-house practitioners at mobile phone manufacturers. Many of those present were aware of the W3C guidelines (Rabin, & McCathieNevile, 2008) and had used them for evaluation. Several practitioners stated that they had augmented the guidelines to include alternative wording, additional points to consider, and so on. None of these amendments seems to have been fed back to the W3C or otherwise shared, which reduces the usefulness of the guidelines. However, of the approximately 50 people present only a handful were aware of the RNIB guidelines (Chandler, Dixon, Pereira, Kokkinaki, & Roe, 2005) and none of them had had an opportunity to use them.
UCD consideration should not stop once a product has been released. Most research that we are asked to do about mobile devices post-release is about the purchasing process, point of sale research and the out-of-box experience. It is rare that we are asked to do longer term studies into the learning and adaptation that is likely to take place over time.
We usually evaluate the out-of-the-box experience using expert reviews, assessing the packaging, wires and user guide, and by setting up user journeys. We also do expert reviews of handsets and mobile websites. Observing the experience in high street stores is difficult due to recording issues. Assessment might have an environmental impact focus (reducing packaging and documentation) or a purely usability focus (using heuristics). We have also used focus groups to explore issues that consumers had with phones they had been using for some time.
For blind and visually impaired consumers the purchase and post purchase situation is dire, with little information available about the relative merits of different handsets at high street stores. While we have not done any specific research on accessibility needs post purchase, the RNIB recently organized an event to help members to choose a handset, which we attended in an effort to gain further insight to share with members of the UK UPA. For the participants it was vital to have real hands-on experience with the devices, something that is lacking in high street stores.
Discussion and Conclusion
What is clear is that quality research in each of the phases of UCD helps gain a greater understanding of some aspects of the problem space. In addition, if the research is of a high quality then the practitioner earns the respect of the client and the relationship is improved. Practitioners can then be more assertive about which methods are preferred and can educate the client about becoming involved in earlier research and including more user involvement. Because of the complexity of the mobile device and the contexts in which the devices are used, different research methods are better in certain situations. While ‘in the wild’ research has many benefits, lab-based research can also offer useful insights and improve the overall client relationship by allowing them to participate more actively.
Many practitioners have never used guidelines, which raises the issue of their effectiveness. The content of the guidelines is often too specific, making them difficult to use, and the complexity of the space makes any single guideline look simplistic. Many practitioners do not know about specific guidelines, many designers do not know how to apply them, and clients often ignore them, believing that functionality will make up for any lack. More research into how to present guidelines better may help. In addition, a variety of methods are available for conducting research, but some are better applied at different points in the design process, and practitioners need to help guide clients as to which are best.
There are a few take-home messages from this snapshot of practitioner life. Firstly, there is no single right method for research, as each situation is unique and the mobile space is complex. Secondly, the client still needs convincing to do quality research throughout the design life cycle; long-term relationships between UCD practitioners and business-focused clients will help ensure that the best methods are used every step of the way. Finally, there is still a need to focus on core usability and accessibility when designing products, and to improve the use of guidelines, to ensure quality mobile user experiences are designed.
International Standards Organisation (2009). ISO 9241: Ergonomics of human-system interaction.
Gaver, W., Dunne, T., & Pacenti, E. (1999). Design: Cultural probes. Interactions, 6(1), 21–29.
Larson, R., & Csikszentmihalyi, M. (1983). The experience sampling method. New Directions for Methodology of Social and Behavioral Science, 15, 41–56.
Nuance Communications. (2010). Convenient audio access to mobile phones.
Chandler, E., Dixon, E., Pereira, L., Kokkinaki, A., & Roe, P. (2005). COST219ter: An evaluation for mobile phones. In P. Bust (Ed.), Contemporary Ergonomics 2006. London: Taylor & Francis.
Rabin, J., & McCathieNevile, C. (2008). Mobile Web Best Practices 1.0. W3C Recommendation.
Nielsen, J. (1994). Heuristic evaluation. In J. Nielsen & R. L. Mack (Eds.), Usability Inspection Methods. New York, NY: John Wiley & Sons.
Medlock, M. C., Wixon, D., McGee, M., & Welsh, D. (2005). The Rapid Iterative Test and Evaluation Method: Better products in less time. In R. G. Bias & D. J. Mayhew (Eds.), Cost-Justifying Usability (pp. 489–517). San Francisco: Morgan Kaufmann.