Computer Systems Experiences of Users with and Without Disabilities: An Evaluation Guide for Professionals

Publisher: CRC Press Books
Published: 11/18/2013
Format: Electronic (PDF)
ISBN: 978-1-4665-1113-2

Preface

When we interact with a technological system, each of our senses is somehow engaged in a particular kind of communication. This communication forms the basis for a dialogue between person and technology (intrasystemic dialogue), that is to say, a dynamic relationship among three components: (i) the designer's mental model, which generates the conceptual system model of the interface; (ii) the user's mental model; and (iii) the image of the system (Norman, 1983, 1988).

In this book, we investigate the complexity of the intrasystemic dialogue between person and technology by focusing on both the evaluator's and the user's perspectives. We follow this holistic approach to create an integrated model of interaction evaluation (IMIE) that clearly distinguishes (but does not separate) the evaluator's role from the designer's, thus providing an evaluation process that considers all the different dimensions of the interaction.

As Steve Krug claims in his best-known work, Don't Make Me Think, a user has a good interaction experience only when the interface is "self-evident" (2000, p. 11), that is, when the user does not have to expend effort in perceiving the interface. Implementing a self-evident interface should be considered one of the most important problems to solve in creating a good system, i.e., a good information architecture. Krug's assumption therefore relates only to the designer's perspective and can be epitomized as "the better the system works, the better the interaction will be." However, even though a well-designed interface can be achieved only by considering the properties of the object, the evaluation process also needs to take into account other dimensions of the interaction. In particular, since the goal of the evaluation process is to measure human–computer interaction (HCI), the user's point of view needs to be integrated into the evaluation methodologies.

Our objective differs from Krug's. Krug intends to provide developers with the tools for creating successful systems: he expresses the success of a system through the metaphor of good navigation, or navigation without barriers, which corresponds, in his approach, to the motto "don't make me think." With this book, we do not aim to provide tools for system development; instead, we want to provide tools for an evaluation of the interaction that also takes into account "what the user thinks" about the system, because, from our point of view, this constitutes an essential element. From Krug's perspective, the user "should not think," since the developer should already have thought about the possible barriers that could occur during the interaction. Conversely, from our perspective, the simulation carried out by the developers during the design process cannot by itself be enough to create a fully accessible and usable system. We claim that the key factors for developing a usable and accessible interface are (i) a well-planned assessment process and (ii) a harmonized and equalized relationship between evaluator and designer during the product life cycle. For this reason, this book is not only concerned with the developer's perspective; it also takes into account all the actors who are involved in the evaluation process according to our integrated model of interaction evaluation: the expert evaluators, who are expected to detect the barriers that typically hinder interaction; the users, who can estimate the extent to which a detected barrier actually hinders their navigation; and the coordinator of the evaluation, who integrates the results of the expert-based tests with those of the user-based ones by performing an evaluation of the evaluation.

Rather than providing the tools for developing a good system where the user "should not think," in this book we propose an evaluation process that can assess users' satisfaction and experience with a developed system. Given that users should be called on to judge the system with which they are interacting, we focus on the user as someone who thinks about the system. In particular, we describe a user-driven process for observing the user's behavior during his or her actual interaction with the system. By giving users back their own point of view on the system, we let their thoughts offer valuable information on the quality of the interaction.

The perspective and the models presented in this book form a new synthesis in the HCI field, one that distinguishes and integrates the evaluator's and the designer's perspectives in the evaluation process. Our work rests on three fundamental pillars: the interaction between designers and evaluators, an integrated evaluation, and the involvement of disabled users in the assessment cohort. The first pillar is built on the fact that, to achieve the aims of "design for all and user interfaces for all" (Stephanidis, 1995, 2001), designers and evaluators should work together iteratively, using a well-planned and integrated methodology that guarantees a successful dialogue between device and user. The product of this collaboration is a technology whose functioning and capacity facilitate users' interaction: the more users perceive and experience the technology as accessible and usable, the more it can be considered an intrasystemic solution. We call the outcome of this collaboration between designers and evaluators psychotechnology, by which we mean a technology that plays an active role in the context of use by emulating, extending, amplifying, and modifying the cognitive functions of the users involved in the interaction (Federici et al., 2011; Chapter 3). The second pillar highlights the fact that any assessment process has to help designers include the users' perspective in their mental model. The evaluator should therefore act as a mediator between designer and user, analyzing the dialogue between user and technology through a set of integrated evaluation methods. The aim of these methods is to analyze all the variables that could affect the users' experience of the interaction and to report to and discuss with designers how to transform a technology into a psychotechnology (Chapters 4 and 5).
The last pillar of our work rests on the fact that no evaluation can be considered complete without the involvement of users with disabilities in the assessment cohort. Indeed, analyzing the intrasystemic dialogue with the involvement of users with disabilities is a necessary condition for measuring the interaction between people and technologies in all its objective and subjective aspects, because it represents the whole possible variety of human functioning (Chapter 6).

This book consists of eight chapters that aim to help professionals in usability and user experience (UX) analysis rethink and reorganize their perspective on the assessment of interaction. It also aims to help designers, manufacturers of technological products, and laypeople understand what an evaluation is, the complexity of an evaluation, and the importance of assessment for the success of a product. To guide the reader through complex topics such as interaction assessment, we include, in collaboration with other experts, specific sections (boxes) in which some of the topics presented in this book are discussed in depth and examples are given.

The eight chapters are organized in ascending order from the theoretical to the pragmatic issues of an HCI assessment, moving from the historical and theoretical background to the management of assessment data and the application of evaluation techniques, as follows:

• Chapter 1: Brief History of Human–Computer Interaction. This chapter discusses the historical evolution of HCI and the most important models of interaction evaluation. Starting from an overview of how hardware and software have changed over time, from the 1960s onward, we conclude by discussing some of the latest ideas about the interaction between user and technology—ideas that have brought a significant increase in the development of specific evaluation techniques based on innovative aims and theoretical models. So far, practitioners have not provided the basis for defining a uniform interaction evaluation methodology, nor have researchers agreed on standard tools for evaluating and comparing usability evaluation methods. In light of our historical analysis, we first point out how single evaluation techniques cannot capture the multidimensional aspects of usability and, second, as a consequence, we show the need for an integrated and comparable methodology that encompasses the evaluation possibilities of all the different interaction evaluation methods.

• Chapter 2: Defining Usability, Accessibility, and User Experience. This chapter presents the definitions of accessibility, usability, and UX provided throughout the evolution of the field of HCI. We discuss the international rules of interaction, starting from a historical overview of the different definitions of accessibility and usability. On the basis of these international standards, the usability concept emerges as strongly linked to that of accessibility: first, because, from the evaluation point of view, it is often difficult to distinguish interaction problems due to usability from those due to accessibility, and, second, because access and use are hierarchically related. Finally, we discuss UX as a new and evolving concept in HCI. As ISO 9241-210 (2010) suggests, UX is strongly linked to usability and represents the subjective perspective on the interaction with the system. Indeed, as our analysis shows, the current international debate is moving toward a unified standard in which the accessibility, usability, and UX concepts will be clearly redefined to highlight their relationships and measurements. Moreover, we propose that users' perception of their interaction with a product (UX) is a dependent variable, based on access to the interface (accessibility) and on the use and navigation of the technology and its contents (usability). Accessibility, usability, and UX should therefore be considered three different perspectives on the interaction, and any evaluator should assess these aspects in a hierarchical sequence in order to evaluate the interaction completely.

• Chapter 3: Why We Should Be Talking about Psychotechnologies for Socialization, Not Just Websites. This chapter discusses the evolution of media and communication technologies in terms of the extension of human psychological abilities and participation opportunities. The discussion underlines how new developments and the success of communication technologies meet users' psychosocial needs (e.g., belongingness, esteem, and self-actualization), fostering direct user participation in the communication process and extending the network of socialization and participation opportunities. We propose the term "psychotechnology for socialization" as a replacement for the classic "media and communication technology," and we also propose a new classification in which psychotechnologies are not only new kinds of technologies but also a new user-driven adaptation, integration, and use of common technologies. A psychotechnology is presented as any technological product developed and assessed as an intrasystemic solution that can both facilitate and drive the dialogue between user and device in a specific context of use.

• Chapter 4: Equalizing the Relationship between Design and Evaluation. This chapter analyzes the relationship between the design and evaluation processes during the development of a product. We use the psychotechnological construct to show how important it is to equalize, and concurrently discriminate between, the role of the designers and the role of the evaluators in the product life cycle. We suggest that designers and evaluators, from their distinct observation points on the interaction, should share a holistic common perspective in which the components of the interaction, as entities in an intrasystemic dialogue (technology and user), have a concurrent role in defining the interaction experience. We describe psychotechnology as the outcome of the dynamic and reciprocal causation among the components of the interaction system—the technological object and its functioning, the user and his or her subjective experience of the interaction, the environment of use, and the role of this context in the dynamics of the interaction—which cannot be reduced to the device and its interface functioning per se. Finally, we describe the evaluators' role in the product life cycle by harmonizing it with, and equalizing it to, the role of the designers.

• Chapter 5: Why We Need an Integrated Model of Interaction Evaluation. This chapter presents and discusses the IMIE. We define an interaction problem as an interruption in the communication flow between the user and one or more elements of the interface. This gap concerns either the execution of the user's action or the product's feedback, and it can be due to an objective error (machine error) or a subjective error (the user's difficulty in executing his or her action in the interface or in correctly understanding the feedback provided by the system). In light of this, we describe the interaction evaluation as the measure of the distance between the developer's and the user's mental models. This distance can be measured only by introducing another, external mental model—that of the evaluator. After defining this mental model, we propose that a product be evaluated from two perspectives: (i) objectively, that is, by measuring the accessibility, usability, and satisfaction generated by the developer's mental model; and (ii) subjectively, that is, by measuring the accessibility and usability of, and satisfaction with, the product in the context of use. Finally, we present the IMIE together with the variables that an evaluator has to take into account in the assessment decision process.

• Chapter 6: Why Understanding Disabled Users' Experience Matters. Following the psychotechnology construct and the IMIE, this chapter proposes a wide accessibility approach in which accessibility is considered not as a special need to be measured with particular users, but as one of the main variables that an evaluator has to consider for an overall testing of the interaction. Moreover, we propose that the involvement of users with disabilities in the design and assessment of an interaction is a necessary condition for a complete evaluation. Since disability has to be considered one way of functioning among the infinite possibilities of human functioning, an evaluator should include people with different abilities when selecting a sample of users. If the aim of evaluators is to support designers in transforming a technology into a psychotechnology, concurrently promoting the goal of the "user interface for all," then evaluators have to gather reliable and generalizable evaluation data. An evaluator can achieve this goal only by testing a large spectrum of human functioning and by including in the sample a wide range of user behavior. In light of this, we discuss the concept of representativeness of the overall population and how the evaluator may invest his or her budget and select a sample of users in the most productive manner. Finally, in order to support an evaluator's selection of interaction test participants, we present a user testing decision flow mechanism and, on the basis of this, we suggest how practitioners may select people with and without disabilities for the assessment and monitor the sample's representativeness of the overall population.

• Chapter 7: How You Can Set Up and Perform an Interaction Evaluation: Rules and Methods. This chapter presents the way in which an evaluator can set up and manage an IMIE to assess usability and UX. Starting from the commonsense perspective, it discusses what an evaluation is, in terms of measurements and criteria, in line with international standards. First, we distinguish between interaction assessments on the basis of whether the product has long- or short-term use, and then we define and discuss the main aspects that an evaluator has to take into account when analyzing the UX: Kansei (a Japanese term for emotional or affective aspects), the quality traits, and the meaningfulness of the product. In line with this analysis, we present an innovative synoptic table that represents and organizes the most common evaluation techniques and measures of UX and usability. Moreover, we discuss how the evaluator can organize and use the data obtained with the different techniques by applying different approaches to evaluation data management. In particular, we explain how to manage user testing data and how to determine the number of problems discovered by a sample of users by means of the "grounded procedure" (Borsci et al., 2013), a specific process created for extending the five-user assumption.
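The grounded procedure itself is presented in Chapter 7; as background, the five-user assumption it extends rests on the classic problem-discovery model (Nielsen and Landauer), in which each user is assumed to detect any given problem independently with probability p. A minimal sketch of that background model, not of the grounded procedure (the function name and the example value p = 0.31 are illustrative, the latter being Nielsen and Landauer's reported average detection probability):

```python
def discovery_rate(p: float, n: int) -> float:
    """Expected proportion of usability problems found by n users,
    assuming each user detects any given problem with probability p."""
    return 1.0 - (1.0 - p) ** n

if __name__ == "__main__":
    # With p = 0.31, five users are expected to uncover roughly 84-85%
    # of the problems -- the source of the classic "five users" figure.
    for n in (1, 3, 5, 10, 15):
        print(f"{n:2d} users -> {discovery_rate(0.31, n):.0%} of problems expected")
```

The grounded procedure addresses the case where a single fixed p does not hold across problems and users, which is precisely why a sample of five can fall short in practice.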

• Chapter 8: Evaluation Techniques, Applications, and Tools. This chapter presents a set of the most common evaluation techniques and their use in the framework of the IMIE. We start by discussing the inspection and simulation methods of the expected interaction (heuristic analysis, cognitive walkthrough, etc.), which allow a practitioner to inspect the product, without involving users, by identifying the gaps in the system and the errors in the product's functioning. We then present qualitative methods and subjective measurements of the interaction (questionnaires and psychometric tools, interviews, eye-tracking and biofeedback analysis, etc.), which allow evaluators to observe users' reactions to the problems that they experienced while interacting with a technology. Finally, we discuss the usability testing methods and the analysis of real interaction (thinking aloud, remote testing, etc.), which are an essential step in any evaluation for assessing how the functioning of the product is perceived by the user. For all the techniques discussed in this chapter, we show how users with disabilities can be involved in the assessment through specifically adapted tools and methods, such as partial concurrent thinking aloud (Federici et al., 2010a,b).


Edition: 13
Number of Pages: 290
