
Ubiquity, Volume 2000 Issue July, July 1 - July 31, 2000 | BY Bill Hanson 


Internet Virus Protection
By Bill Hanson


Catching viruses at the server level would prevent a lot of headaches, not to mention embarrassment, for IT professionals.


I read an article this morning that says we must start looking at the server platform to help protect against virus attacks. Well, duh! How far overdue is this? How many times do we need to be whacked with a stick to learn something new? How many billions of dollars do we have to spend recovering from virus attacks before we actually take them seriously? But here's the really good part. The proposed solution won't work.

I've developed software systems long enough to know that you have to think about potential pitfalls in your design, and accommodate them. Translation: Make it foolproof. I design systems for limited pools of users: employees of a single company, sometimes a single department. Yet we put a server on the Internet, open it up to the whole world, and take our chances. We subscribe to the security discipline of hope, fear and trust. These days we are in such a hurry to get stuff out there that we don't take time to think through important infrastructure issues. Thus, important facets of our systems are sacrificed: security, ease of use, response time, data integrity, and so on. But the one that gets the big press is security.

As an industry we should be embarrassed and ashamed of what we've created. First, we have to stand naked before the world and say we messed up with the Y2K thing. Then we have to turn around and say, sorry, some second-rate student from the Philippines was able to write a virus that can disable entire networks around the world. (Am I the only one who's embarrassed to admit at a party that I'm an IT professional? I've resorted to lying to people and telling them I'm a plumber. Then when they ask me a plumbing question I pretend to choke on my drink and run out of the room. I know they're all laughing and accusing me of being an IT professional. They saw my Palm Pilot.)

Internet servers are like flood control dams. Everything must flow through them. But they're like flood control dams with the gates wide open. I agree that virus protection must start at the server, but I don't agree with the proposed methods, which take our current methods and move them from individual PCs to servers. Our current methods of virus protection are reactive and unimaginative. They put the onus on individual PC owners, and then poorly equip them with programs that can only identify viruses after they've already been released. On top of this, ownership of such protection costs money and is optional. It's really not much of a dragnet when you think about it.

Now it's being proposed that we take the same reactive approach and move it up one level to the server. Servers will suffer the same virus identification problems that individual users suffer, but on a larger scale. Response times will suffer even more due to server programs interrogating every e-mail that comes through them. Doing just a little bit of math in my head, I see this as a potentially serious problem.
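To make that mental math concrete, here is a rough illustration. Both figures below are assumptions chosen only to show the scale involved, not measurements of any real mail server.

```python
# Purely illustrative estimate of server-side scanning overhead.
# Both inputs are assumed values, not measurements.
messages_per_day = 5_000_000   # assumed daily volume for a busy mail server
scan_seconds = 0.05            # assumed per-message signature scan time

cpu_hours = messages_per_day * scan_seconds / 3600
print(f"{cpu_hours:.0f} CPU-hours of scanning per day")
# Under these assumptions: roughly 69 CPU-hours per day, which either
# demands dedicated scanning hardware or adds noticeable delivery delay.
```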

Now don't get me wrong, I think virus identification is important, but only as one aspect of an overall virus protection scheme. As a more complete safety net, we also need to be able to identify the results of a virus at the server level. Viruses that intend to damage an individual workstation are still going to be the responsibility of the individual PC owner. But viruses that do their damage by intentionally clogging servers with massive amounts of e-mails can be identified at the server, when they start shotgunning their e-mails. In other words, a server might be trying to identify viruses while passing messages, but it should also assume that some will get through undetected, and protect itself accordingly. Reasonable checks can be put in place. If too many e-mails originate from a single source in a given time period (say, more than 20 in one minute), this could be the work of a virus. People or companies broadcasting messages or utilizing push technology that legitimately puts out such volumes would need to register as special users with the server administrator. Individuals could protect themselves by agreeing to give up the right to send more than 20 e-mails a minute, a sufficient number for the vast majority of Internet users.
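As a rough sketch of that check, and only a sketch, a server could track each sender's recent sends in a sliding one-minute window and refuse to relay past the twentieth message. The 20-per-minute threshold comes from the paragraph above; the sliding-window bookkeeping and the registered-sender exemption list are illustrative assumptions, not a description of any existing mail server.

```python
import time
from collections import deque

# Sketch of the per-sender rate check described above. The 20-per-minute
# threshold comes from the article; the sliding-window bookkeeping and the
# registered-sender exemption list are illustrative assumptions.

RATE_LIMIT = 20                                   # max messages per window
WINDOW_SECONDS = 60                               # one-minute window
REGISTERED_SENDERS = {"newsletter@example.com"}   # hypothetical opt-in bulk senders

_recent_sends = {}  # sender address -> deque of recent send timestamps

def allow_message(sender, now=None):
    """Return True if the server should relay this sender's message."""
    if sender in REGISTERED_SENDERS:
        return True                               # registered bulk senders are exempt
    now = time.time() if now is None else now
    window = _recent_sends.setdefault(sender, deque())
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()                          # drop timestamps outside the window
    if len(window) >= RATE_LIMIT:
        return False                              # likely a mass-mailing virus
    window.append(now)
    return True
```

Calling allow_message(sender) for each outbound message before relaying it is all the integration this sketch assumes; what the server does with refused messages (defer, flag, or drop) is a policy choice left open here.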

My proposal not only attempts virus recognition in e-mails, but also offers some level of protection from damage if a virus slips under the radar. One final piece, the most expensive yet most important one, remains. Internet protocols need to be changed to better track e-mail lineage. This will not only help identify who released a virus from the lab, but also dissuade potential perpetrators. If you knew it could be traced back to you, would you release such a virus? I wouldn't. But then, I used four-digit years in my files back in the 1970s.



