
Scarce resources in computing

Ubiquity, Volume 9, Issue 21 (May 2008) | BY Espen Andersen

How we organize computing - and innovate with it - is shaped by whatever is, at any given time, the scarcest resource.

In the early days of computing, processing (and, to a certain extent, storage, which up to a point is a substitute for processing) was the main scarce resource. Computers were expensive and weak, so you had to organize what you did with them to get as much as possible out of the processing capacity. Hence, with the early computers, much time was spent making sure the processor was fully used, by meticulously allocating time for users on the machine - first with scheduled batch processing, then with time-sharing operating systems that rationed processing resources to users based on need and budget.
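To make the rationing idea concrete, here is a minimal sketch - with hypothetical users and budgets, since the article names none - of a time-sharing scheduler that hands out CPU quanta in proportion to each user's budget:

```python
# A toy sketch (all names and budgets are illustrative, not from the
# article) of the rationing described above: a time-sharing system
# handing out CPU quanta to users in proportion to their budget, so
# the scarce processor never sits idle.

# Hypothetical users with time budgets, in quanta per scheduling round.
BUDGETS = {"payroll": 3, "research": 2, "student": 1}

def schedule(rounds: int) -> list[str]:
    """Return the order in which users receive CPU quanta."""
    slots = []
    for _ in range(rounds):
        for user, quanta in BUDGETS.items():
            slots.extend([user] * quanta)  # bigger budget, more quanta
    return slots

if __name__ == "__main__":
    print(schedule(1))
    # ['payroll', 'payroll', 'payroll', 'research', 'research', 'student']
```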

The PC changed all that. Processing power became cheap. If a program wasn't running fast enough, it was often more economical to buy more computer than to spend time making the program faster. Most PC processing power was spent running screen savers or doing nothing in particular.

With the PC, communication became the limiting factor. We had fast processing on the PC and shared information in central databases - and between them, slow communication channels. From the early nineties, the big transition was from centralized to client-server computing, where tasks were done either on the central server or on the local client. It became very important to limit the amount of data that had to be sucked down the thin telecommunications straw between the two platforms. We spent time creating layered architectures with presentation layers (processing-intensive local activity) and persistence layers (where the information resided, capable of delivering the data you needed in response to short commands sent over slow, frequently asymmetrical, communication channels).
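As an illustration of that layered split, here is a minimal sketch (all names and data are invented for the example) in which the persistence layer answers a short command with only the rows requested, while the processing-intensive presentation stays on the local client:

```python
# A minimal sketch of the client-server split described above: the
# "persistence layer" answers short commands with only the data asked
# for, while presentation-heavy formatting happens on the local client.
# Everything here (ORDERS, query_orders, render) is hypothetical.

# --- server side: the persistence layer ---
ORDERS = [  # stands in for the central database
    {"id": 1, "customer": "Acme", "total": 1200.0},
    {"id": 2, "customer": "Globex", "total": 340.5},
    {"id": 3, "customer": "Acme", "total": 88.0},
]

def query_orders(customer: str) -> list[dict]:
    """Short command in, minimal result set out - the only traffic
    that has to cross the slow communication channel."""
    return [row for row in ORDERS if row["customer"] == customer]

# --- client side: the presentation layer ---
def render(rows: list[dict]) -> str:
    """Processing-intensive formatting stays on the cheap local CPU."""
    lines = [f"Order {r['id']:>3}: {r['total']:>8.2f}" for r in rows]
    lines.append(f"Total: {sum(r['total'] for r in rows):.2f}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(render(query_orders("Acme")))
```

Only the command string and the filtered rows would have to travel down the thin straw; everything else runs locally.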

The WWW was a response to this. But as communications capacity has become cheap and ubiquitous (you can now get 100 Mbps in most places in the urbanized world, for a relatively low fee), communication is no longer the limiting factor. So what is?

The most immediate scarce resource may turn out to be energy. Energy is scarce because data centers consume enormous quantities of electricity, primarily for cooling. We can remedy this by taking advantage of the fact that it is much less expensive to move bits than electricity - by locating data centers close to power sources, preferably in cool places. Google, for instance, has built a data center in Oregon close to a hydroelectric power plant, and is looking into data centers in places such as Iceland.

Getting cheap power for hot data centers, however, is conceptually straightforward compared to addressing the long-term scarce resource: knowledge (or, more precisely, the lack of people willing and able to work with information technology). Knowledge is the scarcest resource because the proportion of smart people is pretty much constant in any population, no matter the education level. Demand is increasing, and will continue to increase for the foreseeable future, for people who understand what is under the hood and can do something to evolve it, not just fix it. Hence, we virtualize, centralize, deliver Software-as-a-Service, outsource and offshore, primarily to get around the shortage (and, consequently, high cost) of knowledge.

The corporation of the future will have its data centers - for all but the largest corporations, shared with others - in the cool North, close to a large river or high waterfall. (If you will permit a patriotic hint here, look to Western Norway: old industrial installations (aluminum smelters and the like) in the deep fjords, with abundant hydroelectric power, relatively cheap engineers, a cool and politically stable climate, and excellent telecommunications infrastructure. Go for it. I'll be glad to help.)

Most of the people working to develop and optimize this computing infrastructure, though, will not be in the cool and remote parts of the world. Since bits can be moved almost anywhere, the people working with them will be located in places that offer what they deem to be the good life, be it Boston or Bangalore, Silicon Valley or Silicon Glen. They will be relatively centralized (since face-to-face communication and physical proximity work well when you are developing new things), located more in time zones than in physical space (as Cory Doctorow calls it, members of the Eastern Standard Tribe), and grouped more by knowledge than by application or customer (much as large consumer-oriented companies increasingly move to structures based on global product responsibility).

Aside from knowledge, what will be the scarce resource in the future? Metadata is one candidate, but inventive approaches such as crowdsourcing, games, and increasingly sophisticated audiovisual recognition technologies will remedy that. The end of Moore's law as processors become ever denser, and the forecasting limitations inherent in chaotic systems, are hard problems that will also limit us in the future.

I think scarcity of knowledge will be with us for a very long time. While we move closer to Google's vision of an infrastructure for organizing all the information in the world, there is still a long way to go, and human ingenuity and variety ensure that if we ever reach the point of all working on the same information-processing platform, it will a) look very different from anything we can envisage now, and b) take a lot longer to develop and even longer to be accepted. A good bet is that it will either run in broken English or have automatic translation between languages and client technologies.

I, for one, don't particularly look forward to that day. As Jonathan Zittrain said at a recent conference, "at the edge of chaos lies suburbia" - a clean, well-functioning, and by implication tightly run and controlled computing infrastructure, suitable for the many and irritating to those who want to extend it. Human ingenuity thrives in imperfect conditions, where errors can be exploited or overcome, and information arbitrage gives opportunities to those with the energy and initiative to seek them.

May there always be scarce resources to innovate around...

Espen Andersen is an Associate Professor of strategy at the Norwegian School of Management and the European Research Director of nGenera Corporation, as well as an Associate Editor of ACM Ubiquity. He hangs out, paperlessly, at espen.com and appliedabstractions.com.

Source: Ubiquity Volume 9, Issue 21 (May 27, 2008 - June 2, 2008)
