Yale's Joan Feigenbaum talks about the possibilities for interdisciplinary research, the new field of algorithmic mechanism design, and her radical views on security.
Joan Feigenbaum is a professor in the Computer Science Department at Yale University. She received a BA in mathematics from Harvard and a PhD in computer science from Stanford. Between finishing her PhD in 1986 and starting at Yale in 2000, she was with AT&T, most recently in the Information Sciences Research Center of the AT&T Shannon Laboratory in Florham Park, NJ. Her research interests are in theory of computation and foundations of electronic commerce. Dr. Feigenbaum is an ACM Fellow.
UBIQUITY: What are you interested in now?
FEIGENBAUM: I'm interested in the connections between economics and computer science. I've also been thinking a lot about academic computer science and what the next decade will be about. Since starting graduate school, I have witnessed several cycles of growth and contraction, or boom and uncertainty, in the academic computer science world, and I'm wondering what is going to happen next. I'm skeptical when I hear that something will always be one way from now on, or that something that has happened before will never happen again, because all of those absolutist predictions have proved false in the past.
UBIQUITY: What predictions are you thinking of, specifically?
FEIGENBAUM: I was in graduate school at Stanford from the end of '81 through the middle of '86. Back then, and more recently in the late '90s, many of the graduate students didn't go into academia or research. They worked in Silicon Valley, and some became entrepreneurs. My grad-school years were the time in the history of computer science when many universities were establishing computer science departments or computer science graduate programs. There was a lot of talk about how there weren't enough PhDs being produced who wanted to go into academic careers. By the time I finished graduate school, it was clear that this was completely untrue. There were definitely enough computer science PhDs to fill the openings.
UBIQUITY: And then what happened?
FEIGENBAUM: At that time, many people predicted complete doom and gloom; they thought there was never going to be another robust job market, and that theoretical computer scientists, in particular, were never going to be in demand. That all proved to be untrue as well. In the '90s we had the Internet boom, and once again theoretical computer scientists were in demand and making all kinds of interesting contributions. Now the Internet boom is over, and the huge growth in demand for computer science courses is temporarily over. Although there is still growth, there isn't the huge spike that there was during the Internet boom. I'm wondering what's going to happen next. Will there be a complete pendulum swing again? Or will people be a little more cautious this time about cranking down on PhD production because they believe that there isn't going to be enough demand?
UBIQUITY: What do you feel about the interaction between computer science academic programs and other programs at a university?
FEIGENBAUM: Interdisciplinary research is very popular at Yale, and the administration is strongly supportive of it. There is a lot of hope, for example, that computer science will be crucial to biology, genomics, proteomics and other data-driven and computationally intensive parts of biology. In general, I think that many university administrations would love to see joint work between computer science and the rest of the sciences.
UBIQUITY: Are there interactions between computer science programs and programs outside the other sciences?
FEIGENBAUM: There are all kinds of business and law questions that have come up. There are interesting developments in copyright law. There are interesting developments in privacy and information ownership and identification. I think that the next decade could see a lot of interdisciplinary work that involves computer science.
UBIQUITY: What is the main obstacle preventing that from happening? Turf would seem to be an obvious problem.
FEIGENBAUM: The received wisdom is that interdisciplinary and interdepartmental work in universities is difficult because of the way that tenure evaluations, promotions, funding, teaching, and training are departmentally defined. It's difficult to do something that is truly interdisciplinary. As I said, that's the received wisdom. I personally don't think it's inherently all that hard. Certainly Bell Labs and AT&T Labs, where I was from, didn't have this problem. There was a lot of interdisciplinary work there, and I'm optimistic that it can happen in universities as well. I don't think there are any fundamental obstacles to having interesting work thrive on the boundaries of computer science and the rest of the sciences or on the boundaries with business and law.
UBIQUITY: Is there sensitivity on the part of computer scientists when interacting with other scientists that they aren't seen as just research assistants who do the heavy computations?
FEIGENBAUM: I haven't personally encountered anything like that, but, if the other scientists take that attitude, the collaboration certainly won't work. Where there are open issues in computation -- not just a lack of people to do the computations -- but real open scientific questions about how to do them, then I don't see why computer scientists can't participate as equal partners.
UBIQUITY: What, if anything, is being done to get people in the habit of doing interdisciplinary research earlier in their careers? In other words, say at the undergraduate level, do students learn how to think in interdisciplinary ways?
FEIGENBAUM: At the undergraduate level, it's rare that students do research at all. If they do, it's usually in the senior year. To the extent that undergraduates do research at all, I don't think there's any reason they shouldn't or couldn't do interdisciplinary research.
UBIQUITY: What about interdisciplinary research at the graduate level?
FEIGENBAUM: At the graduate level, a student is admitted to a specific degree program and advised and supervised by one advisor. So, for a graduate student to do interdisciplinary research, it has to be under the supervision of an advisor who wants to do interdisciplinary research. For example, I could see myself advising somebody jointly with an economics professor. It hasn't happened yet, but the kind of stuff I'm working on right now is interesting to a couple of people in the economics department. I could see either their department or mine admitting somebody who was interested in both and the person's being jointly advised.
UBIQUITY: Let's talk about the work that you are doing now.
FEIGENBAUM: There's a new field of research within theoretical computer science that's called algorithmic mechanism design. The phrase was coined by Noam Nisan, a professor at the Hebrew University in Jerusalem, and his Ph.D. student Amir Ronen. "Mechanism design" is a well-established field of study in Economics. A less formal term for it would be market design. It's about setting up the rules that determine how an economic activity takes place in order to achieve a desired goal. "Algorithmic mechanism design" is about taking computational efficiency into account when setting up these rules.
UBIQUITY: Give us an example.
FEIGENBAUM: The classic example is an auction. There are many different auction formats. You can have a first-price, sealed-bid auction or a second-price, sealed-bid auction. You can have an ascending-price auction or a descending-price auction. There are different ways to run an auction with different properties to achieve different goals. The most relevant word in economic theory is "incentives." By setting up the rules of an auction, or the rules of how people buy and sell things, you provide different types of incentives for the trading partners. Incentives are a huge part of economic theory that until recently have been only a minor part of computer science theory.
UBIQUITY: So how would they relate to computer science theory?
FEIGENBAUM: The main focus of computer science theory has been on how to compute things efficiently. If you're doing something on a single computer, how fast can you do it? How little memory can you use? If you're doing something on a network, how much bandwidth do you need? How much coordination do you have to achieve between the different nodes? Conversely, efficient computation has not been the focus of economic theory. What's happening now is that computer scientists are paying attention to incentives, and economists are paying attention to computational and communication efficiency.
UBIQUITY: What brought about this change?
FEIGENBAUM: The Internet. I like to tell my students that the Internet has the characteristics of an economy as well as those of a computer. The key thing about an economy is that there are multiple independent parties that have independent goals. They have private information about their preferences, and their strategies are known only to them. So, while a market designer, an economist, a regulator, a politician, or whoever can set rules, he can't necessarily determine precisely how all of the different economic actors are going to interact. He can determine the rules of trade in a situation. But, as a whole, the various economic actors and the different economic activities that take place are not predictable and not well understood. The interactions among individual, independent actors are what's most interesting. The best you can do if you're trying to promote some social or economic goal is to try to set up the rules that present the right incentives for people to do the things you want them to do.
UBIQUITY: What's a good example?
FEIGENBAUM: A good example is an auction where you want the bidders, who are bidding for a particular good, to bid truthfully. You want them to reveal what they are really willing to pay. One of the theorems of mechanism design that people teach in economics courses is the truthfulness of the second-price, sealed-bid auction, also called "the Vickrey auction." Each person submits a sealed bid, the highest bidder gets the item, and the price that the winner pays is the second-highest bid. The theorem is that, with this auction rule, no bidder can get an advantage by lying.
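The Vickrey rule described above is simple enough to sketch in a few lines of Python. This is an illustrative toy, not production auction code; the bidder names and bid values are made up.

```python
def vickrey_auction(bids):
    """Second-price, sealed-bid (Vickrey) auction.

    The highest bidder wins but pays the second-highest bid,
    so no bidder can gain by misreporting his true value.
    Requires at least two bidders.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1]  # second-highest bid sets the price
    return winner, price

# Hypothetical sealed bids: each bidder's true willingness to pay.
bids = {"alice": 120, "bob": 100, "carol": 90}
winner, price = vickrey_auction(bids)
print(winner, price)  # alice wins and pays 100, bob's bid
```

Note why truth-telling is safe: the winner's own bid never affects the price she pays, only whether she wins, so shading her bid below her true value can only cost her the item.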
UBIQUITY: Is that economics or computer science?
FEIGENBAUM: That's classic economics. What's missing from that approach is computational efficiency. In more complex economic exchanges, there are more complex goals. The reason that this is relevant is that the Internet is a complex system, including routers and Web servers and hosts and browsers and people using all of these things. Computer scientists approach this as a "protocol-design" or engineering problem. Engineers believe that the network "should" behave in a certain way, and they design protocols for routing, caching, peer-to-peer file sharing or whatever. They publish protocols and standards, and they build software that implements them. This makes it easy for people to follow the protocols or standards; but the various nodes and sub-networks are owned and operated by independent entities, and no one can force them to conform to the standards. Presumably, if one of these "autonomous systems" could figure out how to gain an advantage by not conforming to the standard, e.g., if it could figure out how to interact with the rest of the network without having to provide service to everyone else, it would do it. I must say that we don't have much evidence that this is going on at the moment.
UBIQUITY: Is part of the reality of having different parts of the network owned and operated by different entities that you don't know exactly what people are doing?
FEIGENBAUM: Yes. Their behavior would have to cause some globally observable strange thing before you would suspect anything. Presumably, there can be all kinds of prescriptions for how different Internet nodes should act in order to achieve a global goal, but there's no way to force various autonomous systems to follow these prescriptions. This is exactly the situation in an economy as well. You can say it would be best for everybody concerned if everybody acted truthfully, but, if somebody believes that he can gain an advantage by lying, presumably he'll do it. I'm not talking about some kind of illegal lie. I'm talking about not revealing your preferences correctly or, basically, "playing the game" in an economic sense.
UBIQUITY: How do economists deal with this sort of thing?
FEIGENBAUM: Incentives. They set up the rules of trade so that all of the players give their inputs to the process, and then a global outcome is determined. In the auction case, each person either gets the good or does not get the good. Also, each person either has to pay a certain amount or doesn't have to pay anything. In an economic mechanism, actors are paid depending upon the outcome and depending upon what their inputs were. The design challenge is to set up the payment system in such a way that people are properly incentivized.
UBIQUITY: What would be an example of this in computer science theory?
FEIGENBAUM: The classic example in computer science theory is Internet routing. Any source can send a packet to any destination; in general, such a packet will travel across many different autonomous systems. At the moment, there is no uniform way of charging for such inter-domain routing of "transit traffic." What motivation does an autonomous system, say an ISP, have for carrying traffic that was not sent by one of its customers and will not be received by its customers? Now, the motivation is basically that the easiest thing to do is to run the standard protocols and not bother to change them. As long as the transit traffic is not too much of a burden on the internal network, who cares? Presumably that reasoning won't apply in the long run. There will be some actual costs to an ISP for carrying large amounts of traffic, and it won't want to carry traffic that is not destined to or coming from its own customers unless it's properly compensated. That's where algorithmic mechanism design comes in.
UBIQUITY: With an incentive?
FEIGENBAUM: Right. You want the various autonomous systems to be properly incentivized to supply truthful information about how much it actually costs them to carry traffic. You also want traffic to travel along lowest-cost routes so that there's global efficiency. You don't want to wind up sending traffic along a slower route so that a sender can avoid paying for the faster one or simply because the mechanism computed something that was not a lowest-cost route. You want to do both the path computation and the payment computation correctly. The goal is to design the right mechanism and to implement it with an efficient distributed algorithm. That's the kind of stuff I'm working on now.
UBIQUITY: Is there any interplay or specific link between this work that you're doing and your research in security problems?
FEIGENBAUM: They are somewhat related in that both security and algorithmic mechanism design could be considered "foundations of electronic commerce." I have been teaching electronic commerce at both the graduate and undergraduate levels; both security and algorithmic mechanism design are parts of the curriculum.
UBIQUITY: What do you see as the biggest problems to be solved in those two areas?
FEIGENBAUM: As far as algorithmic mechanism design is concerned, it's all very new and not much is known. The central question is figuring out what you can and cannot do on a network in a computationally efficient and incentive-compatible manner.
UBIQUITY: What do you see as the biggest problems in security?
FEIGENBAUM: I have radical views on security. It's a very different area of research from algorithmic mechanism design in that we have been through several decades of security research. There's been some beautiful work, especially in the area of cryptology. But, in the real computing and communication systems we're now using, there is not very much security at all.
UBIQUITY: Since there has been so much research on it, why is there still so little security in the computing world?
FEIGENBAUM: A lot of the insecurity in our computing infrastructure results from software bugs. Many distributed denial-of-service attacks, for example, result from well-known buffer-overflow bugs. There are software bugs in operating systems, e-mail programs, firewalls, and network gateways. Teenaged mischief makers can write programs that exploit these bugs and take over unsuspecting machines. Once one machine is infected, it can take over other machines. Even if a victim can trace the source of the attack to one of the infected machines, it probably would not be the machine of the original mischief maker. The originator is probably long gone by then and probably launched his attack from a temporary IP address. There are patches available for many of these exploitable bugs, but these patches haven't been installed everywhere yet. The point is that there are a lot of software bugs out there that can be exploited. Buggy software is not what cryptographic and security theory addresses.
UBIQUITY: What does cryptographic and security theory address?
FEIGENBAUM: Researchers look at how to prove properties of encryption systems, authentication schemes, and signature schemes. More generally, they design and analyze algorithms and protocols. The theory assumes that each user has a secure, trusted computing base (TCB). A user can draw a solid line around certain components of his system and know they are trustworthy. Then this trusted computing base has to interact with the rest of the world. Security theory asks how we can set up a protocol to do that. How can we ensure that all necessary untrusted actions take place outside of the TCB, don't corrupt the TCB, and achieve the goals of the protocol? We have to rely on the trustworthiness of the stuff inside the TCB. You can have a perfectly well designed and implemented authentication protocol, but if it's running on an operating system that has exploitable software bugs, it won't help you. Somebody who knows how to exploit the bugs and wants to cheat will still be able to do so. That's where many of the viruses that get so much play in the press come from. There's a mismatch between the real source of insecurity and the problems addressed by security researchers.
UBIQUITY: Are security researchers wasting their efforts working on the wrong things?
FEIGENBAUM: I don't think security researchers are wasting their time. If we didn't have good encryption algorithms and good protocols, then hackers would get at us that way as well. As things stand, hackers generally can't steal sensitive information while it's in transit, because we have good encryption. Instead, they get at us through distributed denial-of-service attacks and viruses in attachments, because those are the weak links in our system. Researchers have solved part of the problem. We've come up with good protocols and good encryption algorithms, and so we have prevented certain types of invasions of privacy or breaches of security. But there is a vast amount of insecurity that hasn't been addressed so far.
UBIQUITY: Well, let's end on a note of what should be done.
FEIGENBAUM: Frankly, to say that we should have good software engineering and a whole software infrastructure that is bug-free sounds very naïve. It's not like people haven't been doing research on software engineering for many years. They have. It's not like buffer overflow and many other software bugs that are exploitable by hackers aren't understood. It takes a huge amount of time, attention, and skill to make sure that all known bugs are fixed. Most computer and network administrators don't have anywhere near enough time to install all of the fixes that are already known. Most administrators are grossly overworked and in some cases undertrained and underpaid. Furthermore, new bugs are created all the time. New software, new releases, and upgrades are constantly being rolled out. We have a very dynamic computing environment. People install new applications all the time. There will probably be exploitable software bugs out there for a long time.
UBIQUITY: Is there a general principle that, as systems get larger and more complicated, they get more buggy?
FEIGENBAUM: I'm not a software engineer, but I've heard that, and I wouldn't be surprised if it is simply a rule of thumb of software engineering. We're going to have to start thinking about what we really mean by security. How are we going to achieve what we need in an imperfect and buggy software environment? The crypto and security world likes to have very precisely definable and provable properties. Maybe that's just not the way to go on computer security. Maybe we should look at it more from a risk-management point of view.
UBIQUITY: Explain what you mean by a risk-management approach to security research.
FEIGENBAUM: There are other huge, complex systems in our world. We have a financial system in which there is obviously some amount of fraud and abuse. Formerly solvent entities go bankrupt. Formerly honest parties become dishonest. But the whole financial system doesn't collapse. Why not? Because there is systemic tolerance for a certain amount of failure. If you are running a very complex business, you expect that certain things will fail, but you structure your business so that you can tolerate a certain number of failures. In the credit-card system, for example, it is known that a certain number of accounts will be defaulted on. There will be a certain number of deadbeats, a certain number of cards stolen, or whatever, and the fee structure is set up to tolerate it. We also have failures in the transportation system. They cause some inconvenience, but they don't make the entire system grind to a halt. If there's an accident on a New Jersey highway, it doesn't mean that nobody in or near New Jersey can drive anywhere. We should accept that there will be bugs and breaches of security in the computing infrastructure. There will be faults and failures. The system should have enough redundancy or robustness and tolerance of failure so that we can work around them.
UBIQUITY: If you were an advisor to the President, would you recommend a particular national policy or a national initiative?
FEIGENBAUM: I don't think we need a major new initiative. We need to keep working on computer security but broaden the scope so that we aren't only working on cryptographic protocols and algorithms that have worst-case, provable properties. We should use a more systemic, global, and risk-management approach that takes into account the fact that there are exploitable software bugs in our infrastructure that will lead to security breaches. Risk management has been an effective tool for designing and analyzing other large, complex systems; it should have something to contribute to the design and analysis of a global information infrastructure.