UBIQUITY: Your work at Microsoft Research is focused on different display sizes. What does that involve?
CZERWINSKI: When we started our group about two years ago we were tasked with looking at the future technology of large high-resolution displays, and multiple displays, and trying to figure out what direction our software designs should take based on the way people actually work with these new displays.
UBIQUITY: What are we talking about when we say "large"?
CZERWINSKI: Let's just imagine in the future that you have the walls around your office as your displays, or at least as additional display surfaces. So instead of having piles of papers on your desk, you might, quite literally, have piles of windows laid out on the displays around you. And you may have a very large display, let's say a 42-inch or a 50-inch screen, in front of you that you're working on as more of a personal workspace. But then you'll use your walls as additional display space so you can lay everything out, keeping it visible, so you can monitor what's going on in those separate tasks, if you will. In contrast, you'll be working very closely and personally with information on your desktop.
UBIQUITY: So how did you start thinking about the problem?
CZERWINSKI: What we did as a group was to build a logger, which we called the VIBE logger, that logs how people actually use windows when they work with windows today. The company had never built a logging tool this robust and detailed before, so our group was one of the first groups at Microsoft to be able to tell the company how people were actually using windows in intricate detail. We've had thousands of people external to Microsoft run our tool, so we could see what happens as the display size gets bigger, as your resolution gets higher, or as you increase the number of monitors hooked up to a single PC. We've looked at how people changed the way they work, and, in fact, they do significantly change the way they work.
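The VIBE logger itself isn't public, but the kind of instrumentation described above can be sketched in a few lines. The following is a hypothetical, minimal window-usage logger (the event names and summary fields are assumptions for illustration, not the actual VIBE design): it records open/focus/close events and summarizes how many windows a user keeps open and how often they task-switch.

```python
from dataclasses import dataclass, field

@dataclass
class WindowLogger:
    """Minimal sketch of a window-usage logger (hypothetical, not VIBE itself)."""
    events: list = field(default_factory=list)       # raw event stream
    open_windows: set = field(default_factory=set)   # currently open window ids
    peak_open: int = 0                               # most windows open at once
    switches: int = 0                                # count of focus changes
    _focused: str = None                             # window currently in focus

    def log(self, action, window_id):
        """Record one event: action is 'open', 'close', or 'focus'."""
        self.events.append((action, window_id))
        if action == "open":
            self.open_windows.add(window_id)
            self.peak_open = max(self.peak_open, len(self.open_windows))
        elif action == "close":
            self.open_windows.discard(window_id)
        elif action == "focus":
            # A focus change to a different window counts as a task switch.
            if self._focused is not None and window_id != self._focused:
                self.switches += 1
            self._focused = window_id

    def summary(self):
        return {"currently_open": len(self.open_windows),
                "peak_open": self.peak_open,
                "task_switches": self.switches}
```

On a real system the events would come from OS hooks (on Windows, something like `SetWinEventHook`); here the stream is fed in manually so the analysis side stays self-contained.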
UBIQUITY: In what way?
CZERWINSKI: Well, one of the first things they do is they leave more windows open. Since you now have extra space, why would you ever minimize a window again? You keep more windows open and you keep more of those windows visible as your display size increases. So what we immediately determined from this was that people are keeping things in the periphery now that they have one, a true periphery, and they're monitoring work there. So they might leave more of their inbox visible. People can now just glance over and see if there was an important new message for them. They might keep a Web browser, and a Word document, and a PowerPoint document all open together because they're all related, and maybe they're copying and pasting back and forth. They might keep two windows side-by-side for compare-and-contrast, as they might want to do when they're making a purchase on the Web or performing a product comparison. So we saw this right away.
UBIQUITY: What kinds of problems emerged?
CZERWINSKI: One of the things we saw was that, because there were so many more windows open with these larger displays, the taskbar becomes completely overwhelmed and users abandon it as a way of doing task switching. There are times when you can have, say, 40 windows open, and if you keep your taskbar on the bottom, horizontally laid out, all you can see is maybe one letter per tile. And usually that's M for Microsoft! So it's not very informative. You see the icon of the application, but pretty soon your application windows start to stack on top of each other. It's called "glomming" or aggregation. If you have that turned on, you might have five Internet Explorer windows glommed into one tile, and six Word windows in another tile in your taskbar. It becomes an order of magnitude harder for users to go in there and try to find the exact window that they want in the taskbar. With our logging tools we saw that users started to click on windows that were left open on the desktop to bring them into the foreground. And that's another reason they left windows visible: so that they could grab them and bring them back.
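The "glomming" behavior described above is easy to picture as code. A minimal sketch (the function name and data shapes are illustrative assumptions, not how Windows implements it): windows are grouped by application into taskbar tiles, which is exactly why finding one window becomes a two-step search — pick the right tile, then the right window inside it.

```python
from collections import OrderedDict

def glom_taskbar(windows):
    """Group open windows into taskbar tiles by application ("glomming").

    `windows` is a list of (app_name, window_title) pairs in the order
    they were opened; returns an ordered mapping app -> list of titles,
    one taskbar tile per application.
    """
    tiles = OrderedDict()
    for app, title in windows:
        tiles.setdefault(app, []).append(title)
    return tiles
```

With 40 windows open, the user sees only a handful of tiles, and every window lookup costs an extra click plus a scan of the tile's pop-up list, which is the usability cost the logging data exposed.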
UBIQUITY: What did you decide to do?
CZERWINSKI: As we saw these behaviors start to change (and there were many more observations like these), we decided to build tools that better supported them. We built tools that allowed you to actually leave piles of related documents open together in the periphery, because if you have a very large display, why would you ever minimize a window into a bar again? Why not just leave a small version of the window open in your periphery so you can see it, and you can throw other windows that are related to it into that little stack with it, and keep them all open? You know where they all are because you put them there. So we built a lot of tools like this to support how users multitask and stay peripherally aware of information in order to task-switch. And all of these tools were based on real user problems we identified from the VIBE logger tool. You wouldn't have the standard normal taskbar anymore, you'd just have task piles in your periphery, which we call Scalable Fabric.
UBIQUITY: How would a user toggle between, let's say, a laptop and this wall display?
CZERWINSKI: The laptop is just another display in your environment. And if you were running, for example, an extended desktop, where the laptop space is part of the display space in your environment, then you could easily just move things back and forth between the laptop and the rest of your display. And, by the way, you can do that today: all laptops shipping today include a card that allows you to extend your desktop space from your personal PC over to your laptop and vice versa.
UBIQUITY: Is this a solution to a new problem or an old problem?
CZERWINSKI: This is a new problem in that multiple-monitor configurations are more ubiquitous today, and we need better designs to support this given that the number of users is growing. It just so happens that the Macintosh did multimon way ahead of everybody else, but stringing up a bunch of monitors to make a larger display just hasn't been mainstream. What Microsoft Research wanted to understand was how, when larger displays become mainstream and widely affordable and everybody uses them, the design of our software has to change to scale up to these very large displays. That's the essence of the research problem. It's not a hardware issue per se, but none of the designs right now scale to very large displays. Even the Macintosh is just as guilty of this as we are. You have a little start menu off in one corner in Microsoft Windows and you have your instant messages coming up in another corner, pretty far away. When you have a very, very large display, you have to skate that mouse a pretty good distance, so what we need to do now is think about designs that understand where the user's focus of attention is and where the user is working. Then you have to bring that important content to the user's focus of attention, and designs have to be context-aware about what the user is doing and what task the user is working on. That requires a lot: some smarts about what the user's task is, some intelligence about what that task requires in terms of software functionality, and the ability to track where the user is.
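The idea of bringing content to the user's focus of attention, rather than a fixed far corner, can be sketched very simply. The following is a toy illustration under stated assumptions (the function name, the margin, and the idea of a known focus point are all hypothetical): given the user's current focus, place a notification adjacent to it, clamped so it stays on-screen.

```python
def place_near_focus(focus, notif_size, screen_size, margin=20):
    """Position a notification just below and to the right of the user's
    focus point, clamped to screen bounds, instead of a fixed far corner.

    focus:       (x, y) of the user's current focus of attention
    notif_size:  (width, height) of the notification window
    screen_size: (width, height) of the whole display surface
    """
    fx, fy = focus
    w, h = notif_size
    sw, sh = screen_size
    # Offset by a small margin, then clamp so the window stays on-screen.
    x = min(max(fx + margin, 0), sw - w)
    y = min(max(fy + margin, 0), sh - h)
    return (x, y)
```

On a 3840-pixel-wide dual-monitor desktop, a corner notification can be thousands of pixels from where the user is working; placing it near the tracked focus point removes that mouse travel entirely, which is the design shift described above.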
UBIQUITY: So how would you sum up your group's accomplishment on this project you were tasked with?
CZERWINSKI: We were able to actually ascertain and document the kinds of guidelines that would be required for designing better user interfaces and interaction techniques for large displays, and we passed those along to the product teams. And then we became much more focused on the kinds of novel interaction techniques that are required to move information back and forth between very small displays (tablets, smart phones, PocketPCs) and very large displays, on how to visualize information on really small as well as really large displays, and on novel interactions between them. So a lot more pen input technology was required, a lot more novel interaction techniques for shrinking information down so that it's usable on a small screen, and then throwing it back and expanding it up on a large display. Things like that. I would say we're almost more on the information visualization side of the research equation now, looking at how you scale your information bits from really small to really large displays. And we are moving into ubiquitous computing territory now because we acknowledge that everyone's going to have a smart phone or a wearable computer of some kind. Displays will just be everywhere, or you're going to have some kind of a projector on your small wearable device that you project onto a wall. So now we're much more sensitive to groups of users moving around their environments, and we are looking at the social dynamics that have to be considered in terms of how you interact with your environment.
UBIQUITY: What is your own environment like?
CZERWINSKI: I have two monitors strung together and a laptop (well, it's really a tablet) and a smart phone. So I have four lined up across my desk that I move back and forth between. And in our lab, which is right across the hall from me, we have various kinds of displays, including a big-screen Smartboard™ that's touch-enabled. We also have various projectors all around the room that project large displays onto the walls, and some of them are dual projectors for a doublewide display. Soon we will have a 4x3 grid of displays on one wall; at the moment it's a 3x3 grid of displays. That's been very useful.
UBIQUITY: What are some of the problems?
CZERWINSKI: When you have so many displays hooked together, you have problems knowing where your cursor is, and knowing where information is going to land when a dialog box comes up on that big a grid of displays, and things like that. But we've been projecting things all around the room and playing with moving information back and forth as you work in a group, to see how it all behaves and how it can be better than it is today.
UBIQUITY: If we have any stock in wallpaper companies, is it time to sell?
CZERWINSKI: Yes. I really like the idea of a little projector on your phone. You can't read your e-mail on your phone, so you just stop and project it onto a wall and read it there. I like that very much.
UBIQUITY: Has anything been built like that?
CZERWINSKI: It has. Actually, we just passed around a news story yesterday. It's finally coming. I think there are still issues, such as the image shaking while you project it. I'm not sure your phone is the best place to project from, but I think the time of projection is finally coming.
UBIQUITY: How quickly is all this happening?
CZERWINSKI: It's happening faster than we thought it would, and we thought it would happen pretty fast. We predicted that by the time Longhorn shipped, about 25 percent of the population would be running two displays instead of one, or at least one very large high-resolution display. Our numbers are actually showing that in certain markets we underestimated how many people would be doing this. So, for instance, if you're talking about people in technology companies, it's more like 33 percent already. So I think it's happening pretty fast. I walked into a Macintosh store this weekend and looked at some of the displays they're shipping with the Macintosh computers now. They've got that very high-resolution large display shipping with their computers now, so that's very encouraging, and this trend is happening fast.
UBIQUITY: Now how many people are in your research group?
CZERWINSKI: We're a group of eight total now. We're a small team, but we're very productive. We have two developers; both are very senior and very good at what they do. We're eight total because a designer just joined us.
UBIQUITY: The designer's position is brand new?
CZERWINSKI: Yes, but we've worked with him closely. He worked in another research group but he's finally joining our team full-time, and he's fabulous.
UBIQUITY: So what kinds of people are there? What are their backgrounds?
CZERWINSKI: We have a new researcher, Desney Tan, a computer scientist who just came from Carnegie Mellon and who has worked with us for three years now on basic research issues. We've actually stumbled serendipitously upon a gender difference where females benefit more when they're navigating through 3-D on these large displays. If you design the environment right so that navigation through the environment is smooth and not choppy, and if you have a wide field of view, females benefit more from that than males do. Males benefit, too, so it's good for everybody, but for whatever reason men just don't need that wide field of view and that smooth animation. But when females have it they perform every bit as well as men do. And that's very important because there have been quite a few people out there, very well-known psychologists, who have been claiming for a long time that females shouldn't be trained in 3-D simulations. For instance, firefighters often go through simulation programs on PCs to train them how to put out fires on a naval ship or something like that, and it turns out that women can do that just as well as men and get trained quite well, provided you give them the right equipment. So wider display technology is important, and I think that has huge repercussions for education and training. Anyway, Desney just joined our group, and he's a spatial cognition expert now, after all that research.
UBIQUITY: Let's keep going. Who are some of the others?
CZERWINSKI: We have another researcher, Patrick Baudisch, who is more on the input side of the house. He's the one who's been doing a lot of work on small devices. That's his area of expertise right now. Then we have an information visualization expert, George Robertson, who spent many years at Xerox PARC and has been here at Microsoft now for almost as long as I have. Then we have Nuria Oliver, who does very interesting work modeling user behaviors, and is interested in helping our team identify tasks using our VIBE logger tool. Nuria is also interested in doing context awareness research on mobile devices. Because we're such a small team, we kind of all work together as one big, happy research family. The developers often have some of the best ideas, and they work hand in hand with us researchers, and everybody contributes in terms of the creative side of the house. Brian Meyers has been with the company for over 15 years now. I think he was responsible for some of the writing of the original Visual C++. And he really knows Windows like nobody's business, and that's a real benefit for the team. And then Greg Smith, an equally adept developer, has been with the company over 10 years. He was on the FoxPro team when he first started with the company, and he's a database expert. So having a Windows expert and a database expert on the team has been crucial to the development of our VIBE logger tool. They've been wonderful.
UBIQUITY: What's the full name of the group?
CZERWINSKI: It's called VIBE, Visualization and Interaction for Business and Entertainment.
UBIQUITY: But not jet planes?
CZERWINSKI: It's true, we could have said it stands for "breaking and entering." We just wanted that B and that E in there. But I take your point. We actually do have some entertainment aspects to the things we do, but normally we stay focused on solving problems for information workers.
UBIQUITY: How do you fit in with the global Microsoft?
CZERWINSKI: Microsoft Research is evaluated in three different ways: by our external influence as professionals in the field of human/computer interaction; by product impact; and in terms of the number of patents and the amount of creative technology we develop. My goal as a manager is to have a group that does well in all three of those, and I think we do. It's my job and the job of my team to interact closely with product teams, understand their problems for the next five to 10 years out, and design and develop prototypes that address those problems in a depth that the product teams perhaps don't have the time for. That's how I see our role, in addition to just contributing to the field. I also want to make sure that we do a significant amount of public service as well as partner with our academic colleagues. We have a very strong relationship with the University of Maryland's Human-Computer Interaction Lab. Ben Bederson, who I think you've already talked to for Ubiquity, is a very close colleague of ours, as are Ben Shneiderman, Francois Guimbretiere, and Catherine Plaisant. We've had Bongshin Lee, one of their Ph.D. students, as an intern with us twice already, so you can see that we have a really close relationship with them. It's important to us that we interact with the students and professors in the field of human/computer interaction.
UBIQUITY: Is there some analog to the sound barrier that you have to break through?
CZERWINSKI: I think the sky's the limit. That's the beauty of working at Microsoft Research. We have a generous budget to create or purchase the kinds of equipment we need, and the beauty of working here is that we have some of the best minds in the business. For instance, working downstairs in the hardware lab is Gary Starkweather, the inventor of the laser printer. He has built some very large displays for us, and then we get to study them. It's just a delight.
UBIQUITY: And is part of your delight your relationship with the non-research part of the company?
CZERWINSKI: Yes, it is very much a delight, because they're very smart, very sharp people. They know their customers really well because they do a lot of user-centered design on the product teams. And it just so happens that sometimes, if we're lucky, we happen to be studying a problem here in the labs that they discover is a deep problem for them, and perhaps we've had five years of looking at it; we've had the freedom and the time, and they haven't. So sometimes if you hit a sweet spot you can give them all kinds of information that helps them design a better product, and that's a blast.
UBIQUITY: Looking back over your career, do you see that it marched in a fairly straight line?
CZERWINSKI: No, I wouldn't say so. I would say my career has been, because of my personality, highly shaped by the individuals I happen to meet and collaborate with just because we share the excitement around a problem. I've always been interested in lots of different problems. The one thing that has remained a consistent and solid research interest is attention, multitasking and task switching. I've studied that since my earliest days on my dissertation. But when I met George Robertson, a 3-D and information visualization expert, he took me into a world where spatial cognition was really important, as were many perceptual cues. And I really had to learn the literature in that area in depth, come up to speed, and become an expert myself to help him as he created these fabulous designs that replace the desktop metaphor, for instance.
UBIQUITY: And where are you on that?
CZERWINSKI: We continue to go down that path, although I will say that we've backed off a little bit from full 3-D metaphors, and we now use 3-D for what it's truly useful for: understanding transitions, using transparency to get more information on the screen, leveraging spatial cognition, as I've talked about, and those kinds of things. We've seen that there are wonderful techniques like scaling that allow you to get more stuff on the screen for the user. So you can use 3-D in ways the user can operate with familiar 2-D interaction techniques. They don't have to learn anything new, they don't have to learn how to use a new device. You just subtly use 3-D to help them get their job done more easily.
UBIQUITY: In what way have you backed off of it?
CZERWINSKI: Just in terms of using full 3-D metaphors, so that you don't have to walk through the landscape to find your information. We keep you in a 2-D world but bring 3-D cues in, like scaling the information down so you get more stuff on a page, or doing a gentle animation to allow you to see a transition from one hierarchy to another, very subtle things like that that help the user without forcing the user to learn any new techniques.
UBIQUITY: What do you see in the future?
CZERWINSKI: Well, I'll tell you that information is going to follow you around and have some understanding of your context; that's going to be there in the not-so-distant future. The systems that we work with tomorrow will know more about us than we feel comfortable with today. I think we are going to give up a little bit of privacy for the benefit of having things at our fingertips at all times. I do believe that. A great example of that is something that's happening right now: the RFID tags that patients put under their skin just got approved by the FDA. They have a code that a scanner can read, and then the system pulls up a database of information about your medical history. So if someone finds you unconscious on the street, a doctor can get access to your medical records just by using one of these readers. And that's approved, that's coming. And why would you do that? Why would you give up that privacy? Well, because you might be unconscious, and you might want a doctor to have access to your medical record.
UBIQUITY: The implications for privacy?
CZERWINSKI: Slowly but surely we're going to be giving up some level of privacy because we believe the benefit is there. And I really hope in the human/computer interaction industry we begin to think about some of these more ethical and moral issues around privacy. I don't see enough research going on around that, and I know my group intends definitely to start looking at that, as well as looking at more pervasive kinds of computing.