Optimizing the user experience should be the ultimate aim of the Web usability designer.
Chapter 1: Human Computer Interaction for the Web
The billboards loomed over America's highways and byways, displaying three scenes. In the first, a frowning housewife holds up a pair of grubby trousers for all to see. In the second panel, she places the offending garment in the washing machine and then adds laundry powder from the sponsor's distinctive box. In the last picture, she displays the now sparkling clean pants, and her radiant smile, across the consumer landscape. That housewife sold millions of dollars' worth of laundry detergent.
It was the middle of the 1950s, and billboard advertising was in its prime in the United States. The detergent promotion did so well that the sponsor decided to expand it to international markets, and it chose a Middle Eastern country to experiment with adapting the advertisement.
The billboards had been very successful, so the ad agency confidently prepared the new design, substituting the detergent's name in Arabic. Ad executives bought their display space and launched the campaign. It was a resounding failure.
Why? When the designers moved into new territory, they ignored three important principles: designing for user characteristics, designing for the user experience, and designing for context.
They disregarded user characteristics when they failed to replace the image of the quintessential U.S. housewife with a face and clothes familiar to the Middle Eastern audience. They ignored user experience by maintaining the left-to-right orientation of the three pictures in the display. Unlike readers of English, Arabic readers scan from right to left. Consequently, Middle Eastern consumers saw the smiling woman use the sponsor's soap and end up frowning at a grimy pair of trousers.
Finally, if the designers had analyzed context, they would have found that billboards for advertising were not common in this Middle Eastern country at the time. They might have chosen to promote their product through the more popular medium of magazines and newspapers.
The lessons learned from this experience are at the heart of this book. The absolute requirements to design for the user and for context are the bases of usable Web design and the foundation of Web design's connection to usability engineering.
The practice of interaction usability engineering is rooted in the field of Human Computer Interaction (HCI), which itself combines three distinct methodological approaches. HCI methodologies derive from the science of behavior, computing technology, and design. The science of behavior emphasizes the quality of the empirical methodology used to discover important insights about interaction behavior. Computing technology models and invents technological solutions for human interaction problems. For design, methodological power resides in the designer's virtuosity of expression. It is from this methodological context -- combining the methodologies of discovery, invention, and design -- that the practice of interaction usability engineering emerged.
This book connects interaction usability practice to the design of Web pages, Web sites, and Web applications. Because Web usability design is grounded in HCI methodologies and principles, we will begin with a summary of HCI history and principles and how they are related to Web usability.
From Human Factors to Usability: A Short History of HCI
During the past two decades, both the number and diversity of people using computers have increased dramatically. Computers now mediate everyday activities in business, industry, education, entertainment, and the home, whereas until the early 1980s computer use was restricted to the technically sophisticated. This rise in use led to a flurry of interface research and design activities during the 1980s and 1990s, which produced the Graphic User Interface (GUI) and, eventually, the Web.
Scientific interest in the interaction between human beings and computers and user interface design is rooted in the more general area of human-machine systems, human factors engineering, and ergonomics. Systematic investigations of human factors engineering go back to the early time-and-motion studies of Frank Gilbreth (1911). These and all other studies conducted between the world wars concentrated on the operator's muscular capabilities and limitations. During World War II, the emergence of radar and the technology associated with aircraft cockpits led to a shift in emphasis away from physical interaction with machines to the perceptual and decision-making capabilities of operators.
Toward the end of the 1950s, interest in human-computer interfaces arose out of this systems engineering tradition and crystallized around Licklider's (1960) concept of symbiosis. Licklider described a relationship in which the human operator and the computer and its software form two distinct but interdependent systems. They cooperate to attain a goal because each component has unique abilities to bring to bear on a given task. The human component is more suited to engaging in tasks that require creativity, such as raising the "important questions," posing the "original problems," or making the "critical decisions." Computers, on the other hand, excel at performing such functions as rapid and accurate data storage and retrieval, as well as rapid aggregate analysis, calculation, and plotting of retrieved data. Human operators and computer systems could thus have a symbiotic relationship in which they augment each other's capabilities in performing complex, multifaceted tasks.
Throughout the 1960s and early 1970s, human factors researchers paid more attention to mapping out the information-processing and decision-making skills of the typical user than to engineering a symbiotic association between operators and specific systems. It was not until well into the 1970s that technological advances made real-time interaction commonplace and with it made Licklider's idea of symbiosis feasible. As a result, the late 1970s and early 1980s saw a deeper interest in the now-blossoming field of cognitive psychology and adapting its findings to the design of user interface strategies. Of particular interest was the focus on interaction with databases (Reisner, 1977; Shneiderman, 1978; Thomas, 1977).
We also began to see the emergence of theoretical constructs of the interaction between users and computers. These included the keystroke-level and GOMS models of Card, et al. (1980, 1983). The GOMS model is specified by four components: a set of goals, a set of operators, a set of goal-achieving methods, and a set of selection rules to choose among methods. Another theoretical construct was the levels of interactions model, with its four interaction levels of conceptual, semantic, syntactic, and lexical, initially proposed by Foley and Wallace (1974) and expanded by Foley and Van Dam (1982).
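The keystroke-level model lends itself to a simple calculation: an expert's time on a routine task is predicted by summing standard unit times for the physical and mental operators the task requires. Here is a minimal sketch using the commonly cited operator-time averages; the example operator sequence is illustrative, not from the text:

```python
# Keystroke-level model (KLM) sketch: predict expert task time by summing
# unit times for each operator. The constants are the commonly cited
# averages; real analyses calibrate them for the user population at hand.
OPERATOR_TIMES = {
    "K": 0.2,   # keystroke or button press (skilled typist average)
    "P": 1.1,   # point with the mouse to a target on the screen
    "H": 0.4,   # home the hands between keyboard and mouse
    "M": 1.35,  # mental preparation before an action
}

def predict_time(operator_sequence):
    """Sum the unit times for a sequence of KLM operators, in seconds."""
    return sum(OPERATOR_TIMES[op] for op in operator_sequence)

# Example: home hand to mouse, think, point at a menu item, click.
print(predict_time(["H", "M", "P", "K"]))  # 0.4 + 1.35 + 1.1 + 0.2 = 3.05
```

The full GOMS model layers goals, methods, and selection rules on top of this operator-level accounting.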
Focus on the User Interface
In the late 1970s and early 1980s, a flurry of psychological research, mostly carried out in industrial research laboratories, dealt with the user interface (Reisner, 1977; Thomas and Carroll, 1979; Gould, 1981). It was during those years that the field of HCI was officially "born." For the first time, books with the words "human computer interaction" in their titles were published (Badre and Shneiderman, 1982; Card, et al., 1983). The Association for Computing Machinery (ACM) Special Interest Group in Computer Human Interaction was established, and in 1982 the first conference on Human Factors in Computing Systems, which became the annual CHI conference, was held.
Along with the explosion in popular and personal computing in the 1980s, there arose a parallel emphasis on usability issues: how to make software and computer systems easy to learn and use. Until the mid-1980s the interface was embedded in application software. There were, however, some graphic utilities, and the interface began to emerge as a separate component. The introduction of the Xerox Star began a new generation of interfaces using the desktop as metaphor. These interfaces displayed high-quality graphics and used the point-and-click mouse to invoke actions and manipulate screen objects. Thus emerged the GUI, which was popularized with Apple's computers: first the Lisa, then the Macintosh.
As the GUI evolved, the new discipline of Human Computer Interaction matured. Key HCI principles of User-Centered Design (Norman, 1986) and Direct Manipulation (Shneiderman, 1982, 1983) emerged. We saw the first textbook -- Designing the User Interface: Strategies for Effective Human-Computer Interaction by B. Shneiderman -- and the first influential HCI design idea book -- The Psychology of Everyday Things by D. A. Norman. This was followed in the 1990s by several textbooks (Shneiderman, 1992, 1998; Mayhew, 1992; Preece, et al., 1994; Dix, et al., 1998) and the introduction of the ACM HCI Curriculum.
User Interface Software
Other researchers during the 1980s worked to develop tools and methods for user interface design. This research arose from the belated recognition by computer scientists that the user interface is as vital a component of computing systems as an operating system or database. Those scientists and engineers were primarily concerned with inventing tools and systems of tools that help create interfaces, such as User Interface Management Systems (UIMS) (Coutaz, 1989; Siebert, et al., 1989) and User Interface Design Environments (UIDE) (Foley, et al., 1989).
In parallel with the research and academic evolution of HCI, the software industry focused on designing user-compatible interfaces and making software systems increasingly more usable. Starting in the mid-1980s and gaining strength in the 1990s, the interface development community employed usability engineering methods to design and test software systems for ease of use, ease of learning, memorability, lack of errors, and satisfaction (Gould and Lewis, 1985; Nielsen, 1993).
It was no longer enough for a designer to be sensitive to usability concerns or to adopt an intangible user-centered perspective. Designers must also set objective, measurable, operational usability goals. An operational definition of usability, in turn, must include some identifiable and measurable concept of effort. For example, the effort to complete a task may be measured in terms of both the time it takes a user to perform a task successfully and the number of observable actions taken in the process.
Usability practitioners of the 1990s considered two factors as measures of usability.
Ease of Learning: We can measure usability by comparing the time it takes users to learn to do a job when working with an unfamiliar computer system to the time it takes them to learn to do the same job some other way. As measured by time, it takes the user more effort to learn a system that does not incorporate and build on the user's existing habits. The users will have to ignore what they already know about the job to develop a new collection of habits.
Ease of Use: The minimum number of actions required to complete a task successfully becomes an increasingly important measure of usability for more experienced operators. For example, the number of mouse clicks entered per procedure is a good way to compare the ease of use of two designs. Other factors being equal, the design that requires fewer keystrokes per procedure will be more usable.
Focusing on the Web
At the same time that usability practitioners were becoming pervasive in the software development industry, the World Wide Web was becoming a force in information sharing and a popular medium for business advertising and transactions. Web usability became a particular focus of the HCI community during the late 1990s (Nielsen, 2000). This interest was heightened by the poor design of many corporate Web sites. The developers' training was limited to Web authoring tools and languages, which can be learned in a relatively short period of time. These early developers were not sensitized to the usability issues that had become an integral part of the software development culture. As the evolution in technology made it possible for "new media" style Web sites to be developed with graphics and animation, the number of usability problems increased, with a correspondingly greater negative impact on business revenues and customer retention (Manning, et al., 1998).
HCI Principles for the Web
The same basic HCI principles that govern software interface design apply just as effectively to designing Web sites and Web applications. Just as a badly designed user interface can doom a software product despite its complex functionality or the power of its technology, a poorly designed Web interface, despite its impressive graphics, can propel the user to another site with one click of the mouse. As user satisfaction has increased in importance, the need for reliable Web usability design methods has become more critical. As the following chapters will demonstrate, the same usability design principles developed for designing user interfaces also apply to the design of usable Web sites. These principles include user-centered design with early focus on the user, early human factors input, task environment analysis, iterative design, and continuous testing. Let's examine some of these principles as they relate to new media technology on the Web.
Early Focus on the User
Defining the user culture, including user characteristics, user types, levels of expertise, and user task descriptions, is a prerequisite to interface development and testing. Attention to individual differences will increase in importance and detail as the new media allows us to interact at more than simply the information-processing levels. For example, as the Web interface incorporates video technologies, we must pay attention to individual and cultural differences in facial expressions, gestures, and demeanor.
Early Human Factors Input
Human factors aspects of the design are considered very early in the development process because it is easier and less costly to introduce human factors constraints at this stage. As new media technologies allow us to create artistic, immersive, and all-encompassing interactive experiences, developers need to consider and design for the emotional, affective, and psychomotor human factors.
Task Environment Analysis
Task analysis is used to determine functionality by distinguishing the tasks and subtasks performed. Particular attention is paid to frequent tasks, occasional tasks, exceptional tasks, and errors. Identifying goals and the strategies (combinations of tasks) used to reach those goals is also part of a good task analysis. By conducting a task analysis, the designer learns about the sequences of events that a user may experience in reaching a goal. With the increase in the power of rendering and simulation technologies, metaphor-based Web designs will require us to become more environment specific, complete, and accurate in our task analysis. Task analysis based on time-and-motion studies, or that relates only to the cognitive and informational component, will no longer suffice. Analysis will need to cover all aspects of the Web task environment, including the physical, social, and aesthetic.
Iterative Design and Continuous Testing
The iterative design process for developing user interfaces stems from the experience that "first designs," no matter how well founded in experience and background, contain unanticipated flaws. In addition, because of the bias of visible experience, first designs often replicate the real world. With the new media available to Web designers, we can replicate real-world environments with much greater detail. This capability should not, however, confine us to the limitations of the real world if we can accomplish the same tasks using strategies that are more efficient, yet natural, to our human capabilities.
Several iterations of design and continuous testing are usually needed to take full advantage of the capabilities of the new media and allow us to come up with novel interactive environments to perform old tasks. You can see an example of this kind of design problem in the design of Web newspapers. Available technologies let us simulate the newspaper reader environment in almost exact detail, but the same technologies can also be used to improve on the limitations of the current environment. For example, we can free the reader from the physical limitations of the paper page by the use of hypermedia. The end result of extensive iteration of such designs could lead to an environment that is much more compatible with the human natural systems of information acquisition, processing, and representation.
Although HCI principles apply equally well to both graphic user interfaces and Web interface design, there are significant differences between GUIs and the Web. There are several unique Web features to which GUI-experienced usability designers should pay particular attention. Among the more prominent are compatibility with device and browser diversity, user-initiated and -controlled navigation, and the low cost of switching between sites. Other Web-specific factors include multiple points of page entry into a site, the ease with which users can be distracted by the enormous amount of information available, and the ability of visitors to easily personalize what they see.
In a course I teach to professional developers on Web usability, I asked a group of 15 participants, who were experienced in both GUI and Web design, to brainstorm about the differences between GUIs and the Web that should concern designers. Here are the ten most important differences the group identified.
- The Web is less secure.
- There is less privacy on the Web.
- The Web is platform independent.
- Web sites contain more dynamic content.
- The Web has a broader audience.
- Web devices and browsers have compatibility problems.
- Users have different expectations for the Web.
- Learning is expected with GUIs, but not with the Web.
- There is more than one entry point into a site on the Web.
- Navigation is user controlled on the Web.
In addition to understanding the differences between GUI and Web design environments, usability designers should pay particular attention to the predominant and recurring usability problems infesting the World Wide Web. In the same usability course, I asked participants to name the ten most important Web design problems. They were first asked as a group to generate a list of problems. Then the 15 participants voted on what they considered to be the most important problems.
- The Web end user is not considered; the design is not user centered.
- It is slow due to large multimedia files and useless Java scripts and plug-ins.
- The information is disorganized and poorly structured.
- There is a lack of standards and consistency.
- "Design" consists of showing off technology.
- Designers treat the Web as a brochure.
- Pages are cluttered.
- Developers do not maintain and update sites.
- Pervasive banner ads are annoying.
- Page layout is poor.
Designing usable Web sites requires more attention to context than designing usable GUIs. Sensitivity to the factors that surround user interaction with the Web takes on added importance because of the ease with which users can turn off one site and turn on another. As Web usability designers, we must make sure not only that the interaction is simple but also that the user feels comfortable in the physical, mental, and emotional environment of the interaction. The Web interaction context can be as small as a page or as large as the physical, cognitive, social, and emotional surrounds of the user in the act of using the Web.
Furthermore, providing the right functionality forms the essence of a usable Web site. As designers we must recognize that the usability quality of a Web interface diminishes and becomes insignificant if the site does not support the tasks that users want to perform or does not provide the information for which visitors are searching.
Title: Shaping Web Usability: Interaction Design in Context
Author: Albert N. Badre
Copyright (C) 2002 Addison-Wesley
Reprinted with permission.