
Interview with Cherri M. Pancake on Usability Engineering

Ubiquity, Volume 2002 Issue June, June 18, 2002 | BY Cherri M. Pancake 



How we perceive, interpret and use information; applying human factors research to product design.

Cherri M. Pancake is Professor of Computer Science and Intel Faculty Fellow at Oregon State University. She is also Director of the Northwest Alliance for Computational Science and Engineering, an interdisciplinary research institute that focuses on the software usability problems faced by research scientists and engineers.


UBIQUITY: Cherri Pancake is a name that people won't soon forget.

PANCAKE: It is, isn't it? It's actually in a book called "Remarkable Names of Real People." There were three brothers named Pfankuch who came to the US in the 1820s from Bavaria. They transliterated the name, which means a cake baked in a pan on top of the stove.

UBIQUITY: Let's start the interview by asking you to make some general remarks about usability engineering.

PANCAKE: In engineering there is a long tradition of something called human factors, which studies the physical limitations of the human body and also the cognitive limitations of the mind. Usability engineering applies human factors research to the design of products and systems. I specialize in how human factors research can make software tools and large data systems more usable, particularly for practicing scientists and engineers. Like other engineering disciplines, usability engineering is about problem solving. In this case, I'm trying to solve the problem of why technical software is so difficult for non-computer scientists to use.

UBIQUITY: Is it easy to identify the problems that need solving?

PANCAKE: More than any other product being manufactured today, software gets bad ratings on usability, whether from general consumers or professionals like scientists and engineers. People recognize that there are problems with usability. That's a little different from being able to put a border around what the problems are with a particular software product. When you're talking about access to data in general, and making it usable, it is hard to get your hands around what the problem actually is and what one might do about it.

UBIQUITY: You say you take lessons from human factors and move them into this newer area. What are the lessons?

PANCAKE: Some of them are related to physical factors. Take the physiology of the eye -- how we perceive colors and patterns. What visual conventions, colors, shadings or sizes allow humans to take most advantage of their pattern recognition skills? For example, I've worked with many software tools in the area of high performance computing. Typically the tools use a graphical format because they're trying to show the behavior of a large number of CPUs. Humans have very good visual pattern recognition capabilities, but it's easy to cripple those abilities inadvertently simply by choosing colors or shapes or sizes or patterns that obscure what the user should be seeing. I show software developers how to use color theory in order to enhance human capabilities rather than detract from them.
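
To make the point concrete, here is a minimal sketch -- not from the interview, and assuming numpy and matplotlib are available -- of how a colormap choice can either preserve or obscure a pattern in per-CPU behavior; all of the data are invented:

```python
import numpy as np
import matplotlib.pyplot as plt

# Fake "utilization of 64 CPUs over 100 time steps" containing one smooth hot spot.
rng = np.random.default_rng(0)
t, cpu = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 64))
utilization = np.exp(-20 * ((cpu - 0.5) ** 2 + (t - 0.6) ** 2)) + 0.05 * rng.random((64, 100))

# The same data under two colormaps: a perceptually uniform map ("viridis")
# keeps the gradual hot spot visible, while a map with artificial bright
# bands ("jet") can break it into misleading-looking regions.
fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, cmap in zip(axes, ["viridis", "jet"]):
    im = ax.imshow(utilization, aspect="auto", cmap=cmap)
    ax.set(title=cmap, xlabel="time step", ylabel="CPU")
    fig.colorbar(im, ax=ax)
plt.show()
```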

UBIQUITY: What other lessons have carried over from human factors research?

PANCAKE: We all have limits on our attention spans and limits on what we consider to be a good response time for software. If a piece of software doesn't provide visual feedback or recognition that you did an operation within a certain period of time, you don't sense that it's interactive. In fact, usually you try to perform the operation again because you think it hasn't happened. That can be disastrous when dealing with Web access to large databases, where the response time is slowed by the network. The software should convey information back to the users to keep them from repeating things or trampling on their own work simply because they didn't get physical confirmation that the system is responding properly.
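
A minimal sketch of that principle (not from the interview; the class, function names, and timings are all invented): the tool acknowledges a slow operation right away and refuses a duplicate submission while the first one is still running.

```python
import threading
import time

class QueryRunner:
    """Acknowledge a slow operation immediately and block duplicate submissions."""

    def __init__(self):
        self._busy = False

    def submit(self, query, run_fn):
        if self._busy:
            print("Query already running -- please wait...")   # feedback instead of a silent re-run
            return
        self._busy = True
        print(f"Submitted: {query!r}")                          # immediate confirmation
        threading.Thread(target=self._run, args=(query, run_fn)).start()

    def _run(self, query, run_fn):
        for pct in run_fn(query):                               # run_fn yields progress updates
            print(f"  ...{pct}% complete")                      # periodic feedback while waiting
        print("Done.")
        self._busy = False

def slow_database_query(query):
    """Stand-in for a network-bound database call."""
    for pct in (25, 50, 75, 100):
        time.sleep(1)
        yield pct

runner = QueryRunner()
runner.submit("SELECT * FROM observations", slow_database_query)
runner.submit("SELECT * FROM observations", slow_database_query)   # blocked: still running
```

The point is not the threading details but the user-visible behavior: the system responds within a second, reports progress while it works, and makes repeated clicks harmless.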

UBIQUITY: Those are two examples that have to do with physical capabilities. How does human factors research address mental capabilities?

PANCAKE: Our mental capabilities have even more effect on software usability than physical factors do. For example, everybody's heard the term "short-term memory". Human memory actually works much like today's computer memory hierarchy. We have a small amount of memory, like a set of registers, that has very fast access. Registers are expensive and therefore there usually aren't many in a computer. Our brains don't have many of those fast slots either: typically we can only hold somewhere on the order of five to nine items of information at a time in that very fast access memory. We have other grades of memory, like a cache, that are a little slower but still pretty fast. Then there's long-term memory, where you may have to sit and ponder to retrieve something. Usability engineering helps users make the most effective use of that very fast memory by eliminating the need to "memorize" a handful of little things that would quickly fill it. By making things more explicit, more obvious to the user, we can reserve some of the user's mental capabilities for the things that really count.

UBIQUITY: What in your academic background has prepared you for asking and answering questions like these?

PANCAKE: I did my undergraduate work in design and environmental analysis. I did a lot of work in physical anthropology and anthropometry (measuring the human body), color theory and cognitive psychology, as well as some basic engineering and environmental design. I worked for roughly a dozen years in ethnography, which is the part of anthropology devoted to in-depth studies of other cultures. Most of my work was done in Guatemala, where I studied the Mayan Indians; I was curator of the Ixchel Ethnological Museum there for six and a half years. The thing that has helped me most is the ethnographic approach, which really is the antithesis of the scientific method.

UBIQUITY: How do the two approaches differ?

PANCAKE: With the scientific method you form a hypothesis, then set up controlled experiments to evaluate its truth or falsity. In ethnography, you purposely don't have a hypothesis. Instead, you do "in situ" studies, observing how people live and work. You may ask questions to elicit information about what's happening, but you don't ask leading questions. One of the key techniques is to have people verbalize about what they do. By amassing many observations from different people and conducting interviews, you gradually identify the recurring patterns. For example, what are the words people use over and over again to describe something? What actions or gestures are repeated many times, over a large population? Essentially, you start with the evidence and extract patterns, rather than constructing a hypothesis and testing it.

UBIQUITY: How do you apply ethnographic techniques to software design?

PANCAKE: Before even starting to design a software tool, it's important to ask some questions: What do we know about the target users for this tool? How do they think about this problem? How are they solving it now? In their minds, how can software assist them in solving it? Even if they're totally unaware of the technology that will be used, understanding their goals and current task structure means that the software tool will not only help them address the problem better, but also mesh well with their current methods. I thought I'd left ethnography behind when I went back to school and got into computer engineering. Then I found out that no one had studied how scientists and engineers used computers. It turned out that my ethnographic field experience was an incredible foundation for working with users.

UBIQUITY: What prompted you to go back to school and get into computer engineering?

PANCAKE: Well, the political situation in Guatemala was very bad in the early 1980s. I came back to the US and worked as director of information services for a Central American research institute, handling multi-lingual databases and electronic publishing. At about the time I came to the realization I was never going to earn a good living in the museums world, I was invited to teach some courses at Auburn University. I decided, sort of on a lark, to see if I liked computer engineering enough to pursue it on a higher level. The problem-solving aspects were fascinating, and I was hooked!

UBIQUITY: Going back to looking for patterns in populations, what about sub-populations? In terms of computing habits, are there distinctions between men and women, young and old, and so forth?

PANCAKE: There are significant differences among age cohorts. Many people in my generation were not exposed to computers until quite late in life. It makes a big difference whether you learn to use a tool after your problem-solving habits are already formed or whether you learn to apply technology as part of the whole problem-solving process. Some older scientists and engineers are wary of adopting the technology unless they understand how the software or hardware integrates with the way they already do things. Younger ones may be more open, for example, to the idea of using data mining techniques or to changing their whole viewpoint.

UBIQUITY: Are there also differences between sub-populations within the sciences?

PANCAKE: For a long time, science was based on either theoretical or experimental work. Those sides didn't always communicate particularly well with one another. Now we have a third group, which uses computational techniques as opposed to experimental or theoretical ones. When I work with users, the first thing I try to find out is which paradigm(s) they use most. Very few are equally versed in all three -- experimental, theoretical and computational science. Most of the groups I've worked with to date are least comfortable in the computational idiom.

UBIQUITY: Does your background in ethnography help you communicate with these different sub-groups?

PANCAKE: Working with Mayan Indians, I learned to be careful about language. Many of the Indians I worked with did not speak Spanish as a native language, nor did I (although I did become bilingual), so we communicated through a language that was somewhat artificial for both of us. I've noticed that when I get software developers in the same room with scientists and engineers, they'll use the same term in different ways. The problem is that they don't recognize they're using it differently -- after all, we're all speaking English and we're all using the same term. Don't we mean the same thing by it? A lot of my facilitation work is getting people to be more explicit about what they mean. Right now, for example, I'm working with NSF's new George E. Brown, Jr. Network for Earthquake Engineering Simulation. Earthquake engineers, who are structural engineers by training, use the same terms as seismologists and oceanographers, but in very different ways. You find terms being used again and again, but often meaning very different things. Those scientists and engineers all measure and describe related things, but do so based on implicit assumptions that are discipline-specific.

UBIQUITY: What's a real-world example of scientists and engineers from different fields making different assumptions?

PANCAKE: Scientists or engineers in a particular field may measure things with a certain type of instrument. They know that any time the measurement exceeds a threshold value they should ignore it, because it is an artifact of the data acquisition process or a meaningless value. Let's say that an air quality engineer has real-time monitoring data that track changes in oxygen concentration. She knows that when the oxygen appears to exceed a certain amount, something is going on with the monitor and she should treat it as a "missing value." Then along comes an ecologist who is studying how plants produce oxygen. He may be unaware that these instrument readings should be ignored. If he takes them as literal values, he'll end up with contaminated data. That is an increasing concern nowadays, as data become freely available to people from diverse backgrounds. They might use data -- without sharing the assumptions about how those data were gathered or what makes them valid -- and easily misapply them.
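
As a small sketch of the quality-control step the engineer applies implicitly (the threshold and readings below are invented for illustration), out-of-range values are converted to "missing" before any statistics are computed; a consumer who takes the raw values literally gets a contaminated result:

```python
import math

OXYGEN_VALID_MAX = 20.0   # hypothetical: readings above this are monitor artifacts

raw_readings = [8.1, 8.3, 97.5, 8.2, 8.4, 85.0, 8.3]   # mg/L from a real-time monitor (invented)

def clean(readings, valid_max):
    """Replace out-of-range readings with NaN ("missing value") rather than
    passing them along as literal measurements."""
    return [r if r <= valid_max else math.nan for r in readings]

cleaned = clean(raw_readings, OXYGEN_VALID_MAX)
valid = [r for r in cleaned if not math.isnan(r)]

naive_mean = sum(raw_readings) / len(raw_readings)   # contaminated by instrument artifacts
clean_mean = sum(valid) / len(valid)                 # what the engineer actually means
print(f"naive mean: {naive_mean:.1f} mg/L   cleaned mean: {clean_mean:.1f} mg/L")
```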

UBIQUITY: When you talk to users, do you find they believe that things over the years have gotten better as far as usability? Or are they convinced that things are just as bad as they ever were?

PANCAKE: I think typically they believe that usability is still pretty terrible, even though there have been incremental improvements. Worse than that, many of them think "computing people will never give us anything we can really use." In recent years, a lot of my work has been facilitating groups where users and software developers can interact in a productive way so that the software developers get a much better idea of what they should be giving the user. No software developer sets out to design a crummy tool, one that users are going to hate. In fact, it's the other way around -- software developers see this fabulous possibility in the technology and get excited about getting it into users' hands. Then they're terribly disappointed when users don't like it. Meanwhile the users, who heard how this fabulous use of technology was going to help them, are disappointed and resentful because it doesn't make them more productive.

UBIQUITY: What do you think is the core of the mismatch?

PANCAKE: It boils down to the fact that different cultures are at work here. Computer scientists and software developers think they're part of the same culture as the target users. That's not actually true. In this age of specialization, we've created sub-cultures and sub-sub and sub-sub-sub-cultures in every discipline. We have different views about how to solve problems, what the most rational processes are, what the accepted methods are, and what the accepted terminology is. It's a cultural gap. I started working with high performance computing users and found they didn't even think about what was going on in a parallel machine in the same way software developers did.

UBIQUITY: Is there some simple way of giving the flavor of how the views of the two groups differ?

PANCAKE: One simple example is source code organization. Computing professionals organize software by functionality; they split programs into sub-routines and functions based on the logic of what the code does. They're trained to do this, so their code will be easier to test, reuse, and maintain. Software tools support that view of program code because computing professionals develop them. That is, tools assume that routines represent functionality, and let you interact with your code in ways that reflect that assumption. Most tools also assume that code is relatively small and stored in a single directory tree. I've watched a lot of scientific programmers work on high-performance computing applications. Their programs tend to be bigger than most computer scientists have any concept of. To most computer scientists, a big application means less than 20,000 lines of code; 25,000 is very big. To a scientist or an engineer, on the other hand, anything less than 25,000 or 30,000 lines is a toy.

UBIQUITY: Someone must be writing a lot of code.

PANCAKE: A single person doesn't write all this code. It's usually built up by acquiring submodels or other pieces of code from other people or from so-called "community models" that are shared by a particular discipline. A classic scientific application is divided into subroutines based on how that code was acquired, rather than on functionality per se. For example, if I'm modeling the evolution of a solar system, I have some long loops that execute for each time step, modifying data that represent the solar environment. Well, I'm not going to keep all 10,000 lines in that loop as a single file; that's not practical. But I'm not going to subdivide it the way a computer scientist would, either. I'll pull out the lines for an equation and put that in a function, and I'll take a nested loop and put it in another. Computational scientists tend to think "This block of code came from Ron Neilson's model; I'll put that into a subroutine and I might even call it Neilson." I have effectively isolated the piece of code I got from him.
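
A minimal sketch of that organization-by-provenance style (the routine names and computations below are invented; they are not code from any real model):

```python
def neilson_vegetation_step(state):
    """Block of code acquired from a colleague's model, kept isolated in its
    own routine and named after its source rather than after what it does."""
    state["vegetation"] *= 1.01      # stand-in computation
    return state

def radiation_balance(state):
    """Another acquired block, pulled out of the main loop as-is."""
    state["radiation"] *= 0.99       # stand-in computation
    return state

def run_model(state, n_steps):
    # The long time-stepping loop; the subdivision reflects where each block
    # came from, not a computer scientist's functional decomposition.
    for _ in range(n_steps):
        state = radiation_balance(state)
        state = neilson_vegetation_step(state)
    return state

print(run_model({"vegetation": 1.0, "radiation": 1.0}, n_steps=10))
```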

UBIQUITY: What if there's a problem in Ron Neilson's code?

PANCAKE: When it comes time to debug a problem with the program, I may trust Neilson's code because I know he's used it for a long time and therefore it's the last place I'll look during debugging. Alternatively, I might not be sure about his code, so it will be the first place I look when there's a bug. Either way, the code is nicely isolated, which simplifies debugging (or performance tuning, or changing to a new algorithm). The problem is that most software tools are based on the premise that a program is split into subroutines by logical function and that therefore you only want to see one of them at a time. Whereas to support the scientific style of organization, tools should always let me see where a subroutine is invoked at the same time I see the subroutine's code, since bugs are likely to occur as control passes from one section of code to another -- either because I didn't debug that piece well or because Neilson's model does something differently than I thought it did.

UBIQUITY: Are there formal groups that deal with the problems of usability engineering?

PANCAKE: In the area of high performance computing, one of the groups I've been privileged to be associated with is the Parallel Tools Consortium. A group of us founded Ptools in 1993, in order to solve some of the problems that occur because of the gap between users and developers. The group included the heads of software development groups in industry who were responsible for tools, compilers, etc. on parallel machines, plus representatives from user sites like the Department of Energy's National Labs and the National Science Foundation supercomputing centers. The premise of Ptools is that no software tool should be developed for a high-performance computer unless both users and software developers are involved. Users understand their problems and needs better than anybody else. Software developers understand what is implementable in a production level tool. Both of those communities should be present in order to arrive at tools that are usable and can be reliably maintained over long periods of time.

UBIQUITY: The creation of a formal group somehow suggests that there's a right way to decide what users need. Is that the case or not?

PANCAKE: It's not like a cookbook, where if you follow steps one, two and three you are guaranteed to have usable software. Multiple usability engineering methods are generally needed, and how they're applied depends on the particular context. Let's assume you want to develop a new software tool. First, before you decide whether or not to invest that kind of effort, you need to get users involved in order to determine if it solves a problem for them. They won't bother using a tool if they don't believe it will make them more productive at the end of the workday. Second, when it's time to define the tool's requirements, again, you need to involve users because they're the only ones who really know what they're doing now to solve the problem, and what about the process is tedious, error-prone, or frustrating. That tells you where the tool can score the biggest wins. Third, when you're actually developing the software and interfaces, you involve users again because it's perfectly possible to have a great idea for a tool that meets user needs, yet does it in a way that users find counterintuitive or too awkward or clumsy to use. It takes user involvement with the software developers at every step in order to end up with usable software.

UBIQUITY: Why did you focus your work on high-performance computing?

PANCAKE: It was largely serendipity. When a grad student of mine wanted to write a parallel debugger, I discovered what a gulf there was in terms of understanding user habits and needs. More recently, I've been focusing on how scientists and engineers access and use data. When they're building computational models, the only way to determine if the model is functioning properly is to compare it with real-world conditions and outcomes. You could try to predict what the weather's going to be three days from now and then see if you get it right. Or you could take information on weather conditions that applied two weeks ago, run your model on those data, and see if you end up with today's conditions. Either way, you need real-world data.

UBIQUITY: What aspects of usability are you working on now?

PANCAKE: What's really intriguing me these days is how we can take data that were collected in one disciplinary regime -- by a group of people who had particular goals in mind and shared common assumptions about how to approach them -- and make them really usable by people in other disciplines, for totally different purposes. What do we have to expose about the nature of the data and the data acquisition process in order to make that possible? For example, suppose I am studying water quality in a large river. The water quality will depend on climate data as well as the water that's already in the river, but I'm not a climate specialist. Fungicide residues carried by irrigation water that drains into the river will affect it, but I'm not an agricultural chemist. It's not clear how software can help me find the most appropriate data, or help me apply them reliably in predicting water quality.
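
One way to picture what would have to be exposed is a sketch of an acquisition-metadata record traveling with the data (every field name and value here is invented):

```python
stream_metadata = {
    "variable": "dissolved_oxygen",
    "units": "mg/L",
    "instrument": "in-situ optical sensor",
    "sampling_interval_minutes": 15,
    "valid_range": (0.0, 20.0),      # readings outside this are monitor artifacts
    "missing_value": -9999.0,        # sentinel used in the raw files
    "known_caveats": [
        "values spike during sensor cleaning cycles",
        "calibrated for freshwater only",
    ],
}

def is_usable(reading, meta):
    """Apply the originating discipline's assumptions explicitly, so a user
    from another field does not have to know them by folklore."""
    lo, hi = meta["valid_range"]
    return reading != meta["missing_value"] and lo <= reading <= hi

print(is_usable(8.2, stream_metadata))    # True
print(is_usable(97.5, stream_metadata))   # False
```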

UBIQUITY: Most usability engineers work in industry. Why did you choose to work at a university?

PANCAKE: It's been a tremendous advantage for me in working with users. Because I work at a university, I'm perceived as a neutral party -- someone who doesn't have a vested interest in whether a standard favors one vendor or another. That neutrality attracts more users to work with me, and encourages them to be more open about what they say. If I worked for a particular company, users would assume that underneath it all I was interested in selling them something or trying to find out things just for the benefit of one vendor. As it is, they see that I am gathering information that will be made available to a large number of software developers and will hopefully have more impact on future products.
