Reflections on challenges to the goal of invisible computing

Ubiquity, Volume 6, Issue 17 (May 2005) | By Arun Kumar Tripathi


"Technology becomes subordinate to values through economics, government, or the professions. Our biggest problem is learning to recognize that we do have options, albeit often limited ones. Our tendency is to just create more technology rather than ask why." (Carl Mitcham, as he articulates the thesis of Albert Borgmann on the relationship between contemporary technologies and human values)


Introduction

We know that computers are complex beasts in their own right, but for all of their internal complexity they are just as complicated in their embedding in the outside world, even though that embedding remains largely invisible to the people who design computers and to the people who make a living promoting their use. It is also possible that computers[2] have the power to change us even when we engage with them unconsciously, as when we relate to a tool through the performance of a skill such as driving or typing. Some years ago I ran across an article on the challenges to invisible computing in the November 2000 issue of COMPUTER, which inspired me to write this short essay. In this essay I try to explore the challenges of invisible computing and to simplify them, so as to make them visible for research on Ubiquitous Computing. First, I discuss what ubiquitous computing is and what it is not. Then I attempt to refine some paramount issues of invisible computing: What are the impacts of this kind of computing on our society? What are embedded computers, and how are they of consequence to human beings?

Making computers ubiquitous[3] is not enough; we should also strive to make them invisible. In doing so, however, we will face many research challenges. Computers are everywhere. Information is everywhere and nowhere, immaterial, abstract, and ubiquitous[4]. In her forthcoming book, The God in the Machine: Why We Need Computers to Become Human, Janet Murray (a leading information design specialist) discusses the question: how do we design for digital media so that we can turn an increase in information into an advance in human knowledge?

Interaction in a Human-Centered Architecture and Ubiquitous Computing

As computers become smaller, cheaper, and more mobile, people expand the ways in which they interact with information. Information that previously could only be accessed by a trip to the library can now be found on everyone's desktop, and before long will be on their cell phone screens or on the wall next to them. Although there has been much promotion of the wonders of ubiquitous information, there are also many problems. Information on the go no longer has the contextual backing that it did within information-providing institutions. Interfaces that worked well on paper don't transfer easily to the screen, and less so to the mini-screen. Research in human-computer interaction is moving away from desktop window-based interfaces to consider this larger picture.

Human-Computer Interaction (HCI)[5] is concerned with the design, implementation and evaluation of interactive computer-based systems, as well as with the multi-disciplinary study of various issues affecting this interaction. The aim of HCI is to ensure the safety, utility, effectiveness, efficiency, accessibility and usability of such systems. In recent years, HCI has attracted considerable attention in the academic and research communities, as well as in the Information Society Technologies industry. The on-going paradigm shift towards a knowledge-intensive Information Society has brought about radical changes in the way people work and interact with each other and with information. Computer-mediated human activities undergo fundamental changes and new ones appear continuously, as new, intelligent, distributed, and highly interactive technological environments emerge, making available concurrent access to heterogeneous information sources and interpersonal communication. The progressive fusion of existing and emerging technologies is transforming the computer from a specialist's device into an information appliance. This dynamic evolution is characterized by several dimensions of diversity that are intrinsic to the Information Society. These become evident when considering the broad range of user characteristics, the changing nature of human activities, the variety of contexts of use, the increasing availability and diversification of information, knowledge sources and services, the proliferation of diverse technological platforms, etc.

HCI plays a critical role in the context of the emerging Information Society, as citizens experience technology through their contact with the user interfaces of interactive products, applications and services. Therefore, it is important to ensure that user interfaces provide access and quality in use to all potential users, in all possible contexts of use, and through a variety of technological platforms. The field of HCI is now experiencing new challenges. New perspectives, trends and insights enrich the design, implementation and evaluation of interactive software, necessitating new multidisciplinary and collaborative efforts. The issue of interaction in a human-centered architecture in the context of UC[6] is to find a pragmatic answer to the question: how do we interact effectively with information on a multiplicity of devices in a variety of places? Even more importantly, how can these interactions be made understandable and usable for a wide spectrum of users, ranging from information specialists to novices? We must find a way to establish effective interaction with information across this multiplicity of devices and places.
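To make that question concrete, here is a minimal Python sketch of context-aware presentation: the same piece of information is rendered differently depending on the device, the place, and the user. The names (DeviceContext, choose_presentation) and the adaptation rules are my own illustrative assumptions, not an API from the toolkits cited in the references.

from dataclasses import dataclass

@dataclass
class DeviceContext:
    """The context in which information is being accessed (hypothetical model)."""
    screen_width_px: int   # e.g. 320 for a phone, 1920 for a desktop
    input_mode: str        # "touch", "keyboard", or "voice"
    user_expertise: str    # "novice" or "specialist"
    location: str          # "office", "mobile", or "wall-display"

def choose_presentation(ctx: DeviceContext) -> dict:
    """Pick a layout and help level from the current context, so the same
    information stays usable on a desktop, a phone, or an ambient display."""
    if ctx.screen_width_px < 480:
        layout = "single-column summary"   # mini-screen: strip all chrome
    elif ctx.location == "wall-display":
        layout = "glanceable dashboard"    # calm, peripheral presentation
    else:
        layout = "full document view"
    help_level = "guided" if ctx.user_expertise == "novice" else "minimal"
    return {"layout": layout, "input": ctx.input_mode, "help": help_level}

if __name__ == "__main__":
    phone = DeviceContext(320, "touch", "novice", "mobile")
    print(choose_presentation(phone))
    # -> {'layout': 'single-column summary', 'input': 'touch', 'help': 'guided'}

The only point of the sketch is that context (device, place, user) becomes an explicit input to the interface rather than an afterthought.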

Ubiquitous computing is, in the words of the late Mark Weiser (the father of Ubiquitous Computing), "the third wave in computing," which is "just now beginning." First came shared mainframe computers; now, in the personal computing era, persons and machines stare uneasily at each other across the desktop. Next comes the era of ubiquitous computing, or the age of calm technology (better said, it has already arrived). Calm technology, meant to work in the background of human beings and support their lives, is also called Invisible Computing, or the Invisible Computer at work. Alan C. Kay describes this as Third Paradigm computing. The first wave of computing, from 1940 to about 1980, was dominated by many people serving one computer. The second wave, still peaking, has one person and one computer in an uneasy symbiosis, staring at each other across the desktop without really inhabiting each other's worlds. The third wave, just beginning, has many computers serving each person everywhere in the world. This is what is called Invisible Computing (cf. the research of Professor Don Norman). Norman argues in his book The Invisible Computer that the PC was aimed at the "early adopters" (a term popularized by Geoffrey Moore). The first step is to design information appliances for the mass market. Norman advocates a "user-centered, human-centered, humane technology of appliances where the technology of the computer disappears behind the scenes into task-specific devices that maintain all the power without the difficulties" [see the preface in Norman, 1998]. Norman's vision therefore seems to me justified: it makes the problems of the Invisible Computer visible.

Some main points related to UC:
Not just laptops
24-hour access to Computer and Internet Infrastructure
Mobility: "any place/any time"
Personal Student/Computer relationship
Comprehensive e-services
Access to quality support

How can physical work, play, and living spaces be enhanced through digital information systems? How can the long biological experience of humans in manipulating physical objects be exploited as an interface to information systems? Researchers propose that contemporary models that focus on the computer as a separate appliance will seem like an anachronism in the digitally enhanced future. Sensing, computing, and communication functions will become invisible and integrated into the manufacture of many objects and the architectural arrangements of spaces.
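As a small illustration of this kind of embedding, the following Python sketch (my own simplified assumption, not a description of any real system) lets an instrumented desk publish what it senses onto a shared room bus while a background service reacts; the user never addresses a computer directly.

from collections import defaultdict
from typing import Callable, Dict, List

class RoomBus:
    """An in-memory publish/subscribe bus standing in for a digitally
    enhanced space; services react in the background."""
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, reading: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(reading)

bus = RoomBus()

# An "invisible" lighting service: it adjusts itself when the desk reports occupancy.
def lighting_service(reading: dict) -> None:
    print("lights ->", "reading mode" if reading["occupied"] else "off")

bus.subscribe("desk/occupancy", lighting_service)

# The desk only reports what it senses; the coordination stays in the background.
bus.publish("desk/occupancy", {"occupied": True, "ambient_lux": 140})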

UC can also be defined as an art of technology transparency, or tangible computing. In the words of Bill Buxton (1998): "Rather than turning inward to an artificial world, ubiquitous media encourages us to look outward. It expands our perception and interaction in the physical world."



What Ubiquitous Computing Isn't
Ubiquitous Computing is roughly the opposite of virtual reality (VR). Where virtual reality puts people inside a computer-generated three-dimensional world, ubiquitous computing forces the computer to live out here in the world and make connections with people. VR is primarily a horsepower problem, whereas ubiquitous computing is a difficult integration of human factors, computer science, engineering, and the social sciences.

Early work in UC
The initial incarnation of ubiquitous computing took the form of tabs, pads, and boards built at Xerox PARC between 1988 and 1994. Ubiquitous computing has since kicked off the boom in mobile computing research, although it is not the same thing as mobile computing: it is neither a superset nor a subset of it. UC has roots in many aspects of computing. This mode of computing was first articulated by Mark Weiser in 1988 at the Computer Science Lab at Xerox PARC.

Mark Weiser described UC in two forms:
  1. The social scientists, philosophers, and anthropologists at PARC envisioned a form of computing and networking that would "activate the world."

  2. For nearly thirty years, interface design and computer design have gone through dramatic developments. The highest ideal of this development has been to make the computer so interesting and so wonderful that we never want to be without it.
In contrast to the second form, Weiser saw a "less-traveled" path he called the invisible, and he named computing embedded in the background of human life Ubiquitous Computing. He was thus the first computer scientist to envision the era of UC.

Gradually, the people at PARC focused on Ubiquitous Computing research. Weiser also realized that they were working on technologies that evoke the suspicion of people, and he therefore defined the following Principles of Inventing Socially Dangerous Technology:

Build it as safe as you can, and build into it all the safeguards to personal values that you can imagine. Tell the world at large that you are doing something dangerous.


Telecooperation Research

Telecooperation is the new wave of technology: an application of information and communication technologies that individuals and organizations use to enhance communication and access to ubiquitous information. Mastering the several components of telecooperation demands new computing skills, because there is a shortage of computing talent in systemic thinking, problem-solving, communicating, and teaming, as well as in assessing schedule, cost, risk, and potential impediments. Telecooperation enhances work in organizations, universities, and corporate firms. It uses the skills and techniques of third-wave computing, known as Ubiquitous Computing (UC); the other face of UC could be called Invisible Computing, which brings several challenges of its own. Individuals who learn and apply the skills of telecooperation gain new leverage, both by having a wider network of "useful connections" and by having better access to timely information. Organizations that successfully apply telecooperation methodologies can enhance customer and supplier communications, dramatically reduce costs, and increase their standing in the community and their influence with policy makers. Communication in hard copy, on the other hand, has been an essential ingredient of human progress. It empowers people to decide whether they want to expand their horizons, capture new opportunities, and exercise greater degrees of freedom and choice. Today, the Internet offers the promise of distance education and learning at speeds unheard of in the hard-copy world.

Telecooperation is an outstanding example of the power of enabling technologies. It stands for the fusion of computer science, telecommunication, and multimedia to carry out cooperative processes among organizations and individuals, giving them better access to timely information across two or more locations. This can be achieved by means of information, communication, and new-media technologies. Telecooperation comprises both procedural and collaborative modes of work, and its focus lies on cooperation in the broader sense. It is concerned with a range of issues, from particular application domains such as the global office, innovative services, telework, telemedicine, telelearning, and education, to the tools for communication and cooperation. In recent times, electronic commerce has become the main beneficiary of telecooperation methodologies, with exponential progress prompted by the fast spread of the Web.

Telecooperation[7] and collaborative work are key enablers of change in different areas, such as tasks, structures, cooperation and coordination processes, workplaces, and the level of employment in an organization. As part of the global knowledge society, public administration is on its way to becoming a virtual organization, networked via telecooperation. Research in this field is often restricted to laboratory studies, or to studies in which either the observation period is short or the observation intensity is low. The use of telecooperation is a promising means of meeting the rising demands and complexity of work processes in public administration. New forms of Computer Supported Cooperative Work (CSCW) have led to better efficiency in time, communication, coordination, and costs. New forms of work (more teamwork, higher autonomy, flatter hierarchies, pro-active initiatives) will emerge, and the limitations imposed on collaboration by distance and time will fade. Researchers are currently facing the requirements and changes that occur on the way toward these challenging goals. As is often the case in the early phase of development and adoption of new technologies, the window of opportunity is open to shape technology by integrating organizational and user needs.

As the amount of information and communication increases dramatically, new working environments must provide efficient mechanisms to maximize the benefits of these developments. In one study, researchers have proposed a telecollaboration environment based on agent technology, which could serve as an information infrastructure for cooperative buildings or virtual enterprises. Communication and information services are growing rapidly in both number and complexity; the researchers therefore argue that mediating components between users and services are required. In the environment they describe, a personalized agent cluster is deployed for each user, along with network-wide directory, broker, and trading services. The agent cluster acts as a surrogate for the user in the system. In each cluster, a variable set of personalized agents is aggregated according to the user's requirements. As an example, they describe the architecture and functionality of a communication agent as one part of an agent cluster.
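The following Python sketch illustrates that architecture under my own naming assumptions (it is not the cited researchers' implementation): each user is represented by an agent cluster acting as a surrogate, a communication agent inside the cluster filters messages according to the user's preferences, and a network-wide directory lets other participants locate the cluster.

from typing import Optional

class DirectoryService:
    """Network-wide directory/broker: maps user names to their agent clusters."""
    def __init__(self) -> None:
        self._entries = {}

    def register(self, name: str, cluster: "AgentCluster") -> None:
        self._entries[name] = cluster

    def lookup(self, name: str) -> "AgentCluster":
        return self._entries[name]

class CommunicationAgent:
    """One personalized agent in a cluster: admits only topics the user cares about."""
    def __init__(self, user: str, accepted_topics: set) -> None:
        self.user = user
        self.accepted_topics = accepted_topics

    def deliver(self, topic: str, message: str) -> Optional[str]:
        if topic in self.accepted_topics:
            return f"[{self.user}] {topic}: {message}"
        return None  # dropped silently, so the user is not interrupted

class AgentCluster:
    """Surrogate for a single user; aggregates a variable set of personalized agents."""
    def __init__(self, user: str) -> None:
        self.user = user
        self.agents = []

directory = DirectoryService()
alice = AgentCluster("alice")
alice.agents.append(CommunicationAgent("alice", {"project-x"}))
directory.register("alice", alice)

# Another participant routes a message through the directory, not to Alice directly.
for agent in directory.lookup("alice").agents:
    print(agent.deliver("project-x", "Draft specification is ready for review"))

The design choice worth noticing is the indirection: services and people talk to the surrogate rather than to the user, which is what makes personalization and mediation possible.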


Visions of Artificial Intelligence

The paper "Reflections on the Limits of Artificial Intelligence (AI)" (published in Ubiquity, Volume 5, Issue 38, December 1-7, 2004) is interesting and stimulating, and at the same time it raises concerns about natural intelligence. The phenomenologist philosopher Hubert Dreyfus argues that symbolic AI has failed. In 1950, Alan Turing predicted that there would be a machine that would behave intelligently enough to be indistinguishable from a human by the year 2010. Given the state of things today, it is highly unlikely that the prediction will be met, Dreyfus argues. Starting from this hypothesis, I see the problem of common-sense knowledge in AI research. Alexandru Tugui's argument that "artificial intelligence is based very much on symbolic logic, and has not succeeded in involving so-called affective logic" is a plausible statement of the limitations of AI, and I would like to extend this critique. Work on artificial intelligence in the '50s and '60s suggested that the promise was not unrealistic, as Dreyfus maintains. Newell and Simon (working at RAND) showed with concrete programs that computers could do more than calculate with numbers: they could represent things with symbols, run programs that operate on those symbols, and display aspects of intelligence. This symbolic information-processing model of the mind produced programs that solved puzzles, played games, and proved theorems in logic and mathematics. But problems began to surface. Marvin Minsky had thought that what we needed to program in common sense was about 100,000 facts; a generation later he is of the opinion that the AI problem is the hardest science has ever undertaken. The reaction of the AI community to problems of this kind has been to investigate micro-worlds, i.e., areas where the solution does not depend on common-sense knowledge. The goal of artificial intelligence is the simulation of human intelligence. How do we determine whether artificial intelligence is approaching or has reached its goal? One might reply that a machine should be considered intelligent if it does things that would require intelligence if done by humans. The key to understanding human intelligence is the social competence of human beings. The future construction of intelligent robots will be inspired by our knowledge of the human brain and of human social intelligence [Borgmann, 1994 (271-283)].

On the other hand, Stuart Dreyfus (the younger brother of Hubert Dreyfus) tells us what human intelligence is by arguing that expertise is pattern discrimination and association based on experience. It is intuitive. There is no evidence that you can reduce it to rules and theory. Hence, artificial intelligence probably can't be produced using rules and principles; that's not what intelligence is, says Dreyfus.

We know from the literature that the discussion of artificial forms of humans has advanced both in AI, with its intellectualism, and in human genetics, within the physical-corporeal (embodied) range. The dream of artificial beings is an old one, but only in the past century have these fantasies turned into technical reality: robots. Now that we have entered the 21st century, I think we have to work out these questions: What have we learned from the developments so far? What will be possible in the near future? And how will the relationship between robots and humans change? Answers to all such questions are worthless and in vain if we cannot explore what it means to understand human intelligence, which is the social competence of human beings. In the future, the construction of intelligent robots will be inspired by our knowledge of the human brain and of human social intelligence.

Technology[8] today allows us to record, analyze, and evaluate the physical world to an unprecedented degree. From supernovae to subatomic phenomena, the large and the small have been observed with increasingly greater accuracy, and we can expect organizations in the new millennium to depend increasingly on precise measurements to ensure they meet their mission requirements. Our Information Age owes a great debt to the pioneers of the 18th and 19th centuries who developed the mathematical tools for scientific measurement. We also owe a debt to those who courageously defended their convictions that the objects of their study - whether light, germs, or chemicals - are subject to meaningful analysis. In the 20th century, most definitive techniques for measurement were to be found in the physical sciences, with psychology and education achieving more modest and less definitive gains in the measurement of human mental achievement and potential. As the first decade of the 21st century sees a rapidly-increasing and global need for learning which far exceeds the current delivery capacity of educational and training institutions, the measurement of human capabilities and achievement will require significant progress in the near future.

Dees Stallings presents different perspectives concerning the future of human intellectual capabilities. Keith Devlin, Dean of the School of Science at Saint Mary's College, California, Senior Researcher at Stanford's Center for the Study of Language and Information, and the author of a number of books on human reasoning, is an enthusiastic promoter of the development of human capabilities. Ray Kurzweil, MIT visionary and author of The Age of Spiritual Machines and numerous other works on artificial intelligence, has argued for the dominance of computer-based instruction over human teaching. Kurzweil develops a convincing argument that computers will surpass humans in all areas of intelligence by 2020, with "significant new knowledge, created by machines with little or no human intervention." In education, he predicts, "Learning at a distance ... is commonplace." He continues, "To the extent that teaching is done by human teachers, the human teachers are not in the local vicinity of the student." Devlin's position is that our quest for artificial intelligence has already run its course in the last century, and that it is time to focus on a new goal: the development of greater human capabilities. In his words, we "have come to realize that the truly difficult problems of the information age are not technological; rather, they concern ourselves - what is it to think, to reason, and to engage in conversation." Devlin is a proponent of the argument that our assimilation of the most significant developments of contemporary mathematicians will usher in a new "Golden Age" of human capabilities, not unlike that in which the Greeks produced Euclidean geometry and Socratic dialectic.

At this point, let me propose some classical futuristic ideas about scenarios of the technological future (the "technology of the future" can be understood in two ways: how technology could be in the future, and how the future is anticipated through technology and how our lifeworld is shaped by it). The worldwide classroom is preparing for a future in which new platforms and service architectures enable virtual highways to bring information to end users via wireless, mobile, fixed lines, and fiber cable, and finally to the Internet, converging seamlessly with boundless energy. The Internet gives diversified learning styles an opportunity not previously provided by other means of communication, allowing people to think critically and communicate freely and boundlessly, independent of time or place, from their home, business, or vehicle. The Internet has grown in recent years from a fringe cultural phenomenon to a significant site of cultural production and transformation. The Net offers us the chance to be a true global community. It also challenges us to think critically about our own lives, and it has given us the responsibility to govern ourselves. The Net has some unique advantages: it takes away many logistical difficulties of space and time and allows information to flow faster and more efficiently. It seems, then, that many of the goals of learning and education can be accomplished via the Net.

These perspectives will be increasingly important to decision making about education and training in the coming decades. The roles that humans and technology will play in education and training, especially in distance education, will shape the future of human activities more than we perhaps realize. Because the powerful influence of technology on the workplace and on education will only increase, it is likely that we must revise our concepts of teaching and learning. For educational institutions this will require new mission statements, revised catalogs and other materials, different learning environments and methods of instruction, and, perhaps most significantly, new standards for measuring success. Computers are becoming more powerful at an ever-increasing rate, but will they ever become conscious? Artificial intelligence guru Ray Kurzweil thinks so, and explains how we will "download" our software (our minds) and "upgrade" our hardware (our bodies) to become immortal before the dawn of the 22nd century. In a debate with his critics, including several Discovery Institute Fellows, Kurzweil defends his views and sets the stage for the central question: "What does it mean to be human?"


Ethics of Information and Technology

Albert Borgmann puts it to us precisely: "When computer-aided design came on the scene, information could become so massive and complex that a human being was no longer able to command it directly but came to depend on a computer that was able to store and process the information and make it available to human comprehension. Technological information had arrived." In conversation, the inhabitants of a house are engaged in its focal nearness. In reading, Borgmann argues, "We fall silent and become temporarily solitary though we still have to draw on our immediate experience to bring the austerity of print to life, and we are able to pause, to read a passage to spouse or partner, and to invite comment or conversation." The news on the radio is still rather spare in presentation compared with television, but less so than print and, importantly, implacable in its pace and progress: a newscast we want to listen to carefully dictates silence to everyone present. Information technology has added outlets (as well as funnels) of information through computer screens, and these are proliferating: a computer in the den, another in one's pocket or purse, a third through the television set, a small one on the kitchen counter, one for each of the children in their rooms, and so on. Culturally considered, the home is no longer an enclosure but a multiple opening onto cyberspace. Similarly, Neil Postman has argued that technology seems to be developing faster than our ability to understand what we are using it for. What are the positive effects of new and emerging technologies? Are there ways to maximize the benefits of the Internet and e-mail while minimizing their possible negative effects on society? Television is a medium in which a producer or reporter has complete control over the programming content. Cyberspace decentralizes the information distribution process; isn't that a good thing? People with fast modems and powerful machines can participate in the creative and thought-provoking experiments on the Internet, but those without the right equipment cannot. The ubiquitous command of cyberspace is possible only in a world without distance. The actual world is, strictly speaking, metric: distances matter. In geometry there are no intermediate spaces between the metric and the topological ones. But informally, and with regard to the experience of contemporary culture, we can say that in the actual world distances are losing their rigidity and extension. Every year improvements in automotive technology shrink and soften the distances we travel, cushioning us from the rigors of the road and dispelling boredom through more varied and refined entertainment and communication. The human subject that matches the levity of cyberspace is the unencumbered self that can take up any role it pleases and can defect from any position without penalty.

Borgmann, in "The Depth of Design"[9], argues that the province of design is the world of engagement, "the symmetry that links humanity and reality." In his opinion, engagement is declining in the aesthetics of contemporary life, partly as a result of the growing rift between design and engineering. Information in its core sense is the tissue that connects humans with the wider world, wider in space, time, and imagination; and, as Aristotle has it, there is in principle no limit to the scope of information[10]. Information that is conveyed by natural signs and comes alive in human intelligence we may call natural information. Such information is about reality, and yet it also shades over into information for the construction of reality. All this has come about through the rise of a kind of information we can call technological, and the ascendancy of technological information has come to imperil, if not eviscerate, the craft of design, or so it seems to the lay observer. The information that goes into building first detached itself from embodiment in practices when writing and drawing became common skills. Carl Mitcham (the author of Thinking Through Technology: The Path Between Engineering and Philosophy), who like his intellectual mentors Ivan Illich and Jacques Ellul has devoted his whole career to thinking deeply, critically, and constructively about technology and its place in life and work, has proposed a thesis on the interface between human action, ethics, and technology, inviting us all to approach technology more cautiously and to consider that there may be more to life than the promotion of technoscientific progress. In principle, the philosophy of technology is concerned with questions of the fundamental understanding of technology and its various reciprocal effects on human existence and experience. Technology can no longer be taken for granted; its impact on and implications for the social, ethical, political, and cultural dimensions of our world must be considered and addressed (Ihde, Don. Philosophy of Technology: An Introduction, 1993). Thus the philosophy of technology asks: "Do we have access to the techniques or technologies that we need? Do we need the technology that we have?" These questions concern connections that in the long run affect everyone in everyday life. The questions that concern me in the context of the philosophy and phenomenology of technology are: Are the new technologies in fact helping to create a more informed and communicative society, as well as more cohesive communities? Or are they more of a diversion, in education or in other fields? Are they inhibiting genuine human interaction and understanding as much as, or more than, they are helping? How can we think more precisely about this issue? In the literature we read that recent philosophy of technology has taken an empirical turn away from the transcendental orientation of early philosophy of technology toward a more practical, contextual interpretation. Technology is no longer seen as independent of society but as interdependent with it. Technology and society form an inseparable pair; neither is intelligible without reference to the other. This new generation of philosophers[11] reinterprets the relationship between technology and society to explore all the different ways in which our devices and systems mediate our lives.
My main thesis about the interface of the philosophy of technology and computer science is this: techniques and technology are always constructing their own norms, traditions, and values in the technical and scientific civilization, and building their own worldviews. The problem is not with techniques and technology as such; rather, we should develop a new mode of action for dealing with technological development. A proper "Umgangswissen"[12] with technological knowledge (Technisches Wissen) is needed. Therefore, in the end, we can make a plea for a new ethics, which can be defined as "technological ethics," and for the development of a suitable "Technikethik"[13] together with "Umgangswissen."

Cited and Non-cited References (including materials on HCI and Ubiquitous Computing):

Research and Projects in Telecooperation at the University of Karlsruhe, at http://www.teco.edu/index2.html

Ubiquitous Computing Publications at http://www.teco.edu/index2.html

Ubiquitous Computing publications of the FCE Group at http://www.cc.gatech.edu/fce/publications.html

Classroom 2000 publications at http://www.cc.gatech.edu/fce/c2000/pubs/

Winograd, T., Architectures for Context, Computer Science Department, Stanford University, 2001, at http://hci.stanford.edu/~winograd/papers/context/context.pdf
On the aim of HCI (to ensure the safety, utility, effectiveness, efficiency, accessibility and usability of interactive systems), see http://www.ercim.org/publication/Ercim_News/enw46/intro.html
In Nomadic Computing, mobile users are supported by contextualised information presentation and interaction. At the GMD Institute for Applied Information Technology, prototypes and services are currently being developed in the framework of two projects: 'Crumpet', a European project with five partners, focuses on localisation of the user and personalisation of information; 'SAiMotion', a co-operation between GMD and Fraunhofer Gesellschaft, concentrates on context modelling and Human-Computer Interaction, at http://www.ercim.org/publication/Ercim_News/enw46/oppermann.html

Roomware(r) consists of computer-augmented room elements with integrated information and communication technology facilitating new forms of human-computer interaction. http://www.ercim.org/publication/Ercim_News/enw46/streitz.html
As part of the European IST programme, 'Future and Emerging Technology' launched the proactive initiative 'The Disappearing Computer' (DC). An overview of the projects can be found at http://www.disappearing-computer.net/projects.html
FAIRWIS (Trade FAIR Web-based Information Services) is an ongoing project at the University of Bari, funded by the European Union. http://www.ercim.org/publication/Ercim_News/enw46/costabile.html
Design v. Computing: Debating the Future of Human-Computer Interaction http://www.acm.org/sigchi/chi97/proceedings/panel/ts.htm
Collection of Web sites on Human Computer Interaction http://www.hal.t.u-tokyo.ac.jp/~pasqual/hci.html

Albert Borgmann, Artificial Intelligence and Human Personality in Research in Philosophy and Technology, Volume 14, Technology and Everyday Life, 1994, pp. 271-283, JAI Press Inc.

Dey, A. K., Salber, D., Abowd, G. D. (2001). A conceptual framework and a toolkit for supporting the rapid prototyping of context-aware applications at http://www.cc.gatech.edu/fce/ctk/pubs/HCIJ16.pdf

Denning, P.J., Metcalfe, R.M., Beyond Calculation: The Next Fifty Years of Computing, Springer-Verlag, New York, 1997.

Julie V. Iovine, "The Last Gasp of the American Living Room," The New York Times on the Web, available on 29 January 1999 at http://www.nytimes.com/library/style/012899design-notebook.html

Moore, G.A., Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers, HarperCollins, New York, 1991.
Norman, D.A., The Psychology of Everyday Things, New York: Basic Books, 1988.
D. A. Norman, 1998. The Invisible Computer: Why Good Products Can Fail, the Personal Computer Is So Complex, and Information Appliances Are the Solution. Cambridge, Mass.: MIT Press.
Odlyzko, A.M., "Smart and stupid networks: Why the Internet is like Microsoft," ACM netWorker (December 1998), pp. 38-46; available at http://www.research.att.com/~amo

Reichwald, R.; Möslein, K.; Sachenbacher, H.; Englberger, H.; Oldenburg, S., Telekooperation: Verteilte Arbeits- und Organisationsformen, ISBN 3-540-62013-3, Springer-Verlag, Berlin et al., 1998.
E. M. Rogers, 1983. Diffusion of Innovations. 3rd ed. N.Y.: Free Press.
Weiser, M., The Computer for the 21st Century, Scientific American Ubicomp paper, September 1991, http://www.ubiq.com/hypertext/weiser/SciAmDraft3.html
Weiser, M., Seely Brown, J., Designing Calm Technology, Xerox PARC, December 21, 1995, http://www.ubiq.com/hypertext/weiser/calmtech/calmtech.htm
Weiser, M., Seely Brown, J., "The Coming Age of Calm Technology," Xerox PARC, October 5, 1996, http://www.ubiq.com/hypertext/weiser/acmfuture2endnote.htm


Footnotes

[1] Computing is defined as a performance art in the context of ubiquitous computing and telecooperation research; the question is not what we are, but rather what we might become.

[2] Peter Lyman, Computing As Performance Art, Educom Review: Volume 30, No.4.

[3] Gaetano Borriello, "The Challenges to Invisible Computing" in COMPUTER, Nov. 2000.

[4] In other words, it is also defined as Ubiquitous Computing (UC).

[5] Cf. The research of Prof. Terry Winograd. Architectures for Context Terry Winograd Computer Science Department, Stanford University 2001 at http://hci.stanford.edu/~winograd/papers/context/context.pdf

[6] UC is Ubiquitous Computing.

[7] German researchers have argued that telecooperation denotes, in the authors' words, "the media-supported, division-of-labor performance of work among distributed task owners, organizational units and/or organizations" ("die mediengestützte arbeitsteilige Leistungserstellung zwischen verteilten Aufgabenträgern, Organisationseinheiten und/oder Organisationen"). Another significant approach to telecooperation is maintained by the Telecooperation Research Group at the Darmstadt University of Technology, where the group is structured into two major areas called uBIZ and uLEARN: uBIZ stands for ubiquitous business, information, and zest, and uLEARN stands for ubiquitous learning. See for details http://www.tk.informatik.tu-darmstadt.de/Forschung/Schwerpunkte/ and http://www.tk.informatik.tu-darmstadt.de

[8] See Dees Stallings in Jan 2002 JAL, contrasting futurist Ray Kurzweil and mathematician Keith Devlin.

[9] See Richard Buchanan and Victor Margolin (eds.) Discovering Design: Explorations in Design Studies, University of Chicago Press, 1995.

[10] See Albert Borgmann; Holding On to Reality: The Nature of Information at the Turn of the Millennium (Chicago: University of Chicago Press, 1999).

[11] Albert Borgmann, Philip Brey, Hubert Dreyfus, Paul Durbin, Andrew Feenberg, Larry Hickman, Don Ihde, Carl Mitcham, Peter-Paul Verbeek, and Langdon Winner

[12] Umgangswissen is defined as the knowledge of how to act and deal with things in the technologically and culturally mediated lifeworld (in other words, the art of dealing with knowledge in that lifeworld).

[13] Technikethik is defined as the relationship of ethical theory to technology.

Source: Ubiquity Volume 6, Issue 17 (May 17 - May 24, 2005) http://www.acm.org/ubiquity


