
An Interview with Wei Zhao

Ubiquity, Volume 9, Issue 24 (June 2008) | BY Ubiquity staff


Wei Zhao is currently the Dean of the School of Science at Rensselaer Polytechnic Institute. Before he joined RPI in 2007, he was a Senior Associate Vice President for Research at Texas A&M University. Between 2005 and 2007, he also served as the Director of the Division of Computer and Network Systems at the National Science Foundation. He completed his undergraduate program in physics at Shaanxi Normal University, Xi'an, China, in 1977. He received his M.Sc. and Ph.D. degrees in Computer and Information Sciences at the University of Massachusetts at Amherst in 1983 and 1986, respectively. During his career, he has also served as a faculty member at Amherst College, the University of Adelaide, and Texas A&M University. This interview was conducted by Ubiquity editor-in-chief John Gehl.


ZHAO:     Computer science is not a science of computers. Computers are products, and a product usually does not make a science. Thus, you don't have a science of television, and you don't have a science of buildings. Computer science is considered a true science, but it is a science not of a product but of algorithms. However, I really want to explore whether or not this idea has been correctly understood, and if not, how we can resolve the misperceptions. An algorithm is defined as a piece of sequential code; it is basically a list of instructions. Every instruction in the list has a well-defined meaning: go right, go left. Add one, subtract one. You can have some conditions: if X equals Y, do this; if X equals Z, do that. An algorithm has several basic features. The length of an algorithm has to be finite. That is, you cannot have an infinitely long algorithm, although the running time can be very long. Also, by definition, when you use an algorithm you follow the instructions to do something, and an algorithm must terminate. That means that in a finite amount of time, an algorithm must finish and come to an end.
UBIQUITY:     Right.
ZHAO:     Also, an algorithm must be deterministic. That means that for any given input, the output must be the same. Let's use an adder as an example: if you give it the inputs one and one, the output must be two. The output cannot be random. An algorithm has several features like that.
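As a minimal illustration of these three properties (finite description, termination, determinism), here is a hypothetical Python sketch; it is an editorial illustration, not something from the interview:

```python
def add(x: int, y: int) -> int:
    # A tiny algorithm in Zhao's sense: a finite list of
    # well-defined instructions that always terminates.
    return x + y

# Deterministic: the same input always yields the same output.
assert add(1, 1) == 2
assert add(1, 1) == 2  # a repeated run cannot differ
```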

In my view, from the late '50s through the entire '60s and even the '70s, the computer scientists who were exploring this concept developed computer science around algorithms (and did a wonderful job!). Remember that you have operating systems that can execute multiple programs in parallel and obtain more efficient execution of algorithms. And you have storage that allows you to store large amounts of data and the programs that are the realizations of algorithms. And so on. But from the early '80s until now, we have seen a lot of change. For example, we have the Internet and the Web. Now we deal with new problems, including cyber security, among others. There are all sorts of new developments. So here's the question: are these new developments still relevant to algorithms? If so, how? More specifically, would you say the Web is an algorithm or not?
UBIQUITY:     No, I wouldn't.
ZHAO:     I would say the answer is yes.
UBIQUITY:     I thought you might. Explain your thinking, please.
ZHAO:     Okay. The answer is yes in the following sense. There is a very rigorous theorem behind this. If you have one algorithm running, that's one algorithm. Now suppose you have another algorithm running as well: say one algorithm doing multiplication and another doing addition, running in parallel. There is a theorem that tells you that the two algorithms combined still meet the definition of an algorithm. So the two algorithms combined are still an algorithm. Then, applying this recursively, any (finite) number of algorithms running in parallel is still an algorithm. So for the Web, you have millions of sites that all run little programs, and they somehow talk to each other, and together the entire Web is equivalent to a realization of an algorithm. So that means we are on track -- these new developments are algorithm related. But let me ask another question. Would you say that the design of the Web, the realization of the Web, the maintenance of the Web, or the modification and upgrading of the Web benefit from our theory or foundations of algorithms?
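A minimal sketch of this composition idea, in hypothetical Python; the pairing below is only illustrative, not the formal construction used in the theorem:

```python
def multiply(x: int, y: int) -> int:
    return x * y  # algorithm A: finite, terminating, deterministic

def add(x: int, y: int) -> int:
    return x + y  # algorithm B: finite, terminating, deterministic

def combined(x: int, y: int) -> tuple[int, int]:
    # Running A and B together (here, folded into one instruction
    # list) is itself finite, terminating, and deterministic,
    # i.e., still an algorithm.
    return multiply(x, y), add(x, y)

assert combined(3, 4) == (12, 7)
```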
UBIQUITY:     Well, I wouldn't think so but maybe in theory it does.
ZHAO:     Now, I agree with you. There is a disconnect here.
UBIQUITY:     Okay.
ZHAO:     People create and maintain the Web -- do they know algorithms? Yes, they do. Do they really use the specific theories or theorems from algorithm design? Maybe not. This is very different from the early days of computer science. For example, you'd have a compiler. You'd have operating systems. You'd have a database, to some degree. Those developments benefited directly and tremendously from either the theory of algorithms or the foundations of algorithms. Now let me elaborate further.
UBIQUITY:     Let me interrupt just for a second. Do you have a particular definition of the Web in mind?
ZHAO:     Well, I am referring to the Web that we currently have, the one we're using every day. I'm talking about the real Web, not an abstract Web -- the real Web operating today.

So let me give you another question, then. The question is: "Can you use algorithms, or the theory of algorithms, or the foundations of algorithms, in dealing with information security?" Yes, you may, as we have developed specific algorithms (e.g., public-key cryptography) to address some of the issues there. But I am asking the question for the system at large. For instance, when you have a virus. When you're being intruded upon. When you receive junk email. Does our foundation, or the methodology from our foundation, help you there? The answer is not a positive one, right?
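As an example of the kind of specific, well-founded algorithm Zhao cites, here is a toy Diffie-Hellman key exchange in Python; the parameters and secret values are deliberately tiny, made up for illustration, and never usable in practice:

```python
# Toy Diffie-Hellman key exchange. Real systems use large
# safe primes; these numbers are illustrative only.
p, g = 23, 5                # public prime modulus and generator

a_secret, b_secret = 6, 15  # private keys (hypothetical values)
A = pow(g, a_secret, p)     # Alice's public value
B = pow(g, b_secret, p)     # Bob's public value

# Each side derives the same shared key from the other's
# public value, without ever transmitting a secret.
assert pow(B, a_secret, p) == pow(A, b_secret, p)
```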

After considering these two examples, we now wonder: could the definition of computer science be wrong or incomplete? Or what's wrong here? Just look. We have our great achievement, the Web. We have an even greater challenge in dealing with cyber security. Now we realize that when we're dealing with these things, our foundation may not help much. So that's a fairly fundamental question, right? And of course not everybody may agree with me, but I really think it is a serious issue.
UBIQUITY:     It would certainly seem so. Let me ask about how many people agree with you. I'm sure this isn't the first time you've talked about these ideas.
ZHAO:     First let me finish. I have the other half of the story to tell.
UBIQUITY:     Okay.
ZHAO:     So now the question is, what's the problem? Basically, are we saying we need to create a Web science, or a science for cyber security, so that they become branches independent of computer science? If we don't do that, what should we do? My thought on this issue is that the definition of computer science is actually correct. Computer science is a science of algorithms, and that's fine as far as it goes. However, we need to take a more long-term and generalized view: first, we need to look at algorithms in a much more general sense.
UBIQUITY:     Would it help us if you gave us an analogy?
ZHAO:     Just like physics. You know, physics is a science of particles. But when you talk about Newtonian physics, you talk about a few particles interacting together. When you talk about dynamics, you talk about millions and millions of particles interacting together. That's true dynamics. So we may take a similar approach. I consider that computer science can be broken into two parts. One is a science of individual algorithms. That's why, if you look at any computer science textbook, you start by looking at the sorting algorithms, the searching algorithms, all kinds of algorithms. Those are individual algorithms. We've done quite a bit, but we still have a lot of challenges ahead, including NP-hard problems, complexity issues, and so on. Thus, computer science first is a science of individual algorithms. You study individual algorithms.
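For concreteness, here is one canonical individual algorithm of the textbook kind Zhao mentions, binary search, sketched in Python as an editorial illustration:

```python
def binary_search(xs: list[int], target: int) -> int:
    # A classic "individual algorithm": studied in isolation,
    # with a provable O(log n) running time on a sorted list.
    lo, hi = 0, len(xs) - 1
    while lo <= hi:            # the interval shrinks each pass,
        mid = (lo + hi) // 2   # so the loop must terminate
        if xs[mid] == target:
            return mid
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1                  # target not present

assert binary_search([1, 3, 5, 8, 13], 8) == 3
```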

However, there is another important but neglected aspect of computer science, which deals with algorithm interaction. You have to study the science of how algorithms interact. The reason our Web design did not benefit from our foundational methodology, the theory of algorithms, is that we have a lot of theory and foundation for individual algorithms, but we do not have a good theory or foundation for algorithms interacting together -- which is exactly what the Web is. We have not done much on algorithm interactions; actually, many of us just didn't recognize that it's a part of computer science. So I think it's very important to formally reevaluate our recent developments and then reformulate them on a much more coherent scientific base.
UBIQUITY:     Could you give an example of what you mean?
ZHAO:     Sure. Consider networks. I'm sure your readers are familiar with the TCP/IP protocols -- the very popular protocols of the Internet. Generally speaking, with the TCP/IP protocols, you really have two algorithms running on two end hosts, and they interact with each other to complete a mission of transmitting messages between one another. That's a situation where two algorithms interact together. The success of the TCP/IP protocols really is that the interactions have been designed to behave very well.
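A minimal sketch of two interacting algorithms in this spirit: a toy stop-and-wait exchange in hypothetical Python. Real TCP is far more elaborate, with timeouts, retransmission, and congestion control; the names and structure here are made up for illustration:

```python
def receiver(state: dict, seq: int, data: str) -> int:
    # Algorithm B: accept an in-order segment, return an ACK.
    if seq == state["expected"]:
        state["delivered"].append(data)
        state["expected"] += 1
    return state["expected"]      # cumulative acknowledgment

def sender(segments: list[str]) -> list[str]:
    # Algorithm A: send one segment at a time, advancing only
    # when the receiver's ACK confirms delivery.
    state = {"expected": 0, "delivered": []}
    seq = 0
    while seq < len(segments):
        ack = receiver(state, seq, segments[seq])
        seq = ack                 # the interaction drives A's progress
    return state["delivered"]

assert sender(["he", "llo"]) == ["he", "llo"]
```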
UBIQUITY:     Okay. And what about security?
ZHAO:     Yes, now let's take a look at cyber security. What does cyber security mean? Let's say you and I are communicating on the Internet, and somebody tries to disturb our communications. What we have here is two parties, you and I, talking. Your computer and my computer are cooperating, trying to send my message from me to you and your message from you to me, so that we can talk to each other. However, let's say there's a third party who is not cooperating but disturbing, trying to destroy the communication we have -- basically trying to destroy our mission. In general, the cyber security problem often means that we still have an interaction among a bunch of algorithms, but some are cooperating and some are not. You combine them and it becomes a cyber security problem.
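That three-party picture can be sketched as well: two cooperating algorithms plus a non-cooperating one, again in hypothetical Python with a made-up checksum scheme chosen purely for illustration:

```python
import random

def intruder(data: str) -> str:
    # Algorithm C: a non-cooperating party that sometimes
    # corrupts messages in transit.
    return "garbage" if random.random() < 0.3 else data

def send_with_checksum(msg: str) -> tuple[str, int]:
    return msg, sum(map(ord, msg))          # cooperating sender

def receive(msg: str, checksum: int) -> bool:
    return sum(map(ord, msg)) == checksum   # cooperating receiver

# Cooperating algorithms A and B interact through a channel that
# algorithm C disturbs; the combined system is the security problem.
msg, chk = send_with_checksum("hello")
delivered = intruder(msg)
print("intact" if receive(delivered, chk) else "tampered")
```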
UBIQUITY:     What do you see as the new issue?
ZHAO:     Actually, it's the same problem. Once again, we have dealt with cyber security problems in a very ad hoc manner. We did not treat them as a science of algorithm interaction, and we cannot get around that lack of a good foundation. From this perspective, consider that people now say computer science is 50 years old. Maybe that's the right number, because during the first 50 years we more or less concentrated on the science of individual algorithms; but I would say that in the next 50 years, we will need to concentrate on the science of algorithm interaction. This is my thought about computer science in general.
UBIQUITY:     Very interesting, but I'm not sure I completely understand what you're saying, because for a minute I was thinking you were going to go in a direction that would almost include content, and you would try to make computer science into a kind of political science or social science to include what the people using the Web, what I and you and Joe Blow are actually doing on the Web -- which is not part of the formal apparatus of the Web. You're not doing that, are you?
ZHAO:     No. What I am trying to focus on is that the study of individual algorithms is insufficient as a foundation for computer science. We need to create a foundation that includes the interaction of algorithms. Actually, there are many different names for it. What I am trying to do is insist that we should not only be looking at individual algorithms, but also at interacting algorithms. Computer science is just a science, like physics or chemistry; every science is limited. Maybe we will also need to deal with the social or management components of computing, which may or may not belong to the science of algorithms.
UBIQUITY:     Have you published these ideas yet?
ZHAO:     I haven't yet. I have given some talks here and there about my views on computer science, and I want to emphasize that I'm not saying that computer science is totally wrong. What I am trying to say is that while individual algorithms are wonderful things, we can't ignore the importance of the interactions they have with one another.
UBIQUITY:     But the interactions you're thinking of are at a fairly high level, aren't they? Well, maybe "high" isn't the right word, but a level of communication that is above the level at which people are using their own algorithms to try to do their own work or their own mischief, as in the case of invaders of network security. Am I misinterpreting you?
ZHAO:     I would put it this way: computer science is not a complete science if it doesn't include the study of how algorithms interact. Computers involve a lot of things - storage, wires, memory, processors, and so forth. But in the early '60s, Knuth and others really crystallized the idea that in addition to all of the physical components, we were fundamentally dealing with algorithms. That realization was what opened the door for computer science and what initiated research in all kinds of dimensions.
UBIQUITY:     And you would like to expand that realization to say we're dealing with algorithms both at the individual level and at the interaction level.
ZHAO:     That's correct.
UBIQUITY:     Is there any analogue to this you could point to?
ZHAO:     Well, once again, you can compare it with physics. You might say that if physicists were only dealing with individual particles and not worrying about their interactions, then physics would ultimately be a very, very small field. The reality, however, is that no particle can move around without interacting with other particles, and that's what physics is all about. So I'm just offering fellow computer scientists the friendly message that if we concentrate only on our individual algorithms, we cannot depend on our foundation to cover such newer developments as the Web, networking, and cyber security.
UBIQUITY:     Would the redesign of computer science you propose be done throughout computer science? In other words, would you start introducing this idea at the very beginning of the study of computer science?
ZHAO:     Yes, absolutely. For example, today we don't have a course called algorithm interaction. Although we do have a course called algorithm design, it's a course about individual algorithm design. We don't have a course called science of algorithm interaction. That is not a trivial omission. It is an extremely serious omission.
UBIQUITY:     Your thoughts are quite provocative, and I'm sure Ubiquity readers will be intrigued by what you're saying. Thank you very much.


END


Source: Ubiquity Volume 9, Issue 24 (June 17 - 23, 2008)
