Ubiquity - An ACM Publication

2010 - November

  • Edging Toward the Semantic Web: Protocols, Curation, and Seeds

    The evolution from an interactive Internet (often called Web 2.0) toward a more intelligent, semantic web will not happen as a result of dramatic new inventions or jointly agreed standards, but through a gradual evolution and recombination of existing technologies. To get to a Web 3.0, we will need to first create (and maybe be satisfied with) a Web 2.5, and that will happen through the gradual evolution of effective, user-based interaction protocols (based on user dialogues) and the use of queries as information passing mechanisms.

  • Ubiquity symposium 'What is computation?': Computation is process

Various authors define forms of computation as specialized types of processes. As the scope of computation widens, the range of such specialties increases. Dennis J. Frailey posits that the essence of computation can be found in any form of process, hence the title and thesis of this paper in the Ubiquity symposium discussion "What is computation?" --Editor

  • Ubiquity symposium 'What is computation?': Opening statement

    Most people understand a computation as a process evoked when a computational agent acts on its inputs under the control of an algorithm. The classical Turing machine model has long served as the fundamental reference model because an appropriate Turing machine can simulate every other computational model known. The Turing model is a good abstraction for most digital computers because the number of steps to execute a Turing machine algorithm is predictive of the running time of the computation on a digital computer. However, the Turing model is not as well matched for the natural, interactive, and continuous information processes frequently encountered today. Other models whose structures more closely match the information processes involved give better predictions of running time and space. Models based on transforming representations may be useful.
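    As a concrete illustration of the classical model the opening statement describes, the following minimal sketch (written for this listing, not drawn from the symposium article) simulates a tiny Turing machine in Python: a computational agent acts on its input tape under the control of a transition table, and the step count it returns is the quantity that running-time arguments about the Turing model refer to. The particular machine, state names, and tape alphabet are illustrative assumptions.

    ```python
    # Minimal Turing machine simulator: the head (agent) acts on the tape (input)
    # under the control of the transition table (algorithm). Illustrative sketch only.

    def run_turing_machine(tape, transitions, state="start", blank="_"):
        """Run until the halting state is reached; return the tape and the step count."""
        tape = dict(enumerate(tape))          # sparse tape: position -> symbol
        pos, steps = 0, 0
        while state != "halt":
            symbol = tape.get(pos, blank)
            state, write, move = transitions[(state, symbol)]
            tape[pos] = write
            pos += 1 if move == "R" else -1
            steps += 1
        cells = [tape.get(i, blank) for i in range(min(tape), max(tape) + 1)]
        return "".join(cells).strip(blank), steps

    # Transition table for a machine that adds 1 to a binary number:
    # scan right to the end of the input, then propagate the carry leftward.
    INCREMENT = {
        ("start", "0"): ("start", "0", "R"),
        ("start", "1"): ("start", "1", "R"),
        ("start", "_"): ("carry", "_", "L"),
        ("carry", "1"): ("carry", "0", "L"),
        ("carry", "0"): ("halt",  "1", "R"),
        ("carry", "_"): ("halt",  "1", "R"),
    }

    result, steps = run_turing_machine("1011", INCREMENT)
    print(result, steps)   # prints "1100 8": binary 11 + 1 = 12, computed in 8 steps
    ```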

  • Ubiquity symposium 'What is computation?': Computation is symbol manipulation

    In the second article in the series for the Ubiquity Symposium "What is Computation?", Prof. John S. Conery of the University of Oregon explains why he believes computation can be seen as symbol manipulation. For more articles in this series, see the table of contents in the Editor's Introduction to the symposium (http://ubiquity.acm.org/article.cfm?id=1870596). --Editor

  • An Interview with Prof. Andreas Zeller: Mining your way to software reliability
    In 1976, Les Belady and Manny Lehman published the first empirical growth study of a large software system, IBM's OS/360. At the time, the operating system was twelve years old and the authors were able to study 21 successive releases of the software. By looking at variables such as the number of modules, the time for preparing releases, and the modules handled between releases (a defect indicator), they were able to formulate three laws: the law of continuing change, the law of increasing entropy, and the law of statistically smooth growth. These laws are valid to this day. Belady and Lehman were ahead of their time. They understood that empirical studies such as theirs might lead to a deeper understanding of software development processes, which might in turn lead to better control of software cost and quality. However, studying large software systems proved difficult, because complete records were rare and companies were reluctant to open their books to outsiders. Three important developments changed this situation for the better. The first was the widespread adoption of configuration management tools, starting in the mid-1980s. Tools such as RCS and CVS recorded complete development histories of software. These tools stored significantly more information, in greater detail, than Belady and Lehman had available. The history allowed the reconstruction of virtually any configuration ever compiled in the life of the system. I worked on the first analysis of such a history to assess the cost of several different choices for smart recompilation (ACM TOSEM, Jan. 1994). The second important development was the inclusion of bug reports in the histories and their linkage to the offending software modules. This information proved extremely valuable, as we shall see in this interview. The third important development was the emergence of open source, through which numerous large development histories became available for study. Soon, researchers began to analyze these repositories. Workshops on mining software repositories have been taking place annually since 2004. I spoke with Prof. Andreas Zeller about the nuggets of wisdom unearthed by the analysis of software repositories. Andreas works at Saarland University in Saarbrücken, Germany. His research addresses the analysis of large, complex software systems, especially the analysis of why these systems fail to work as they should. He is a leading authority on analyzing software repositories and on testing and debugging. -- Walter Tichy, Editor
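    To give a taste of what such repository mining looks like in practice, here is a small Python sketch (written for this listing, not taken from the interview) that approximates one of the simplest defect indicators: how often each file is touched by commits whose messages look like bug fixes. It assumes a local Git checkout and the informal, noisy convention that fix commits mention words like "fix" or "bug"; the studies discussed above link changes to bug-tracker entries far more carefully.

    ```python
    # Rough defect-indicator sketch: count, per file, the commits whose messages
    # look like bug fixes. Assumes a local Git repository and the (noisy) heuristic
    # that fix commits mention "fix", "bug", or "defect"; real mining links commits
    # to bug-tracker entries instead of relying on message keywords.
    import re
    import subprocess
    from collections import Counter

    def fix_counts(repo_path):
        # One record per commit: "@@" + subject line, followed by the files it touched.
        log = subprocess.run(
            ["git", "-C", repo_path, "log", "--name-only", "--pretty=format:@@%s"],
            capture_output=True, text=True, check=True,
        ).stdout
        counts = Counter()
        for record in log.split("@@")[1:]:
            lines = [line for line in record.splitlines() if line.strip()]
            if not lines:
                continue
            subject, files = lines[0], lines[1:]
            if re.search(r"\b(fix|bug|defect)\b", subject, re.IGNORECASE):
                counts.update(files)
        return counts

    if __name__ == "__main__":
        # Print the ten files most often touched by fix-looking commits.
        for path, n in fix_counts(".").most_common(10):
            print(f"{n:5d}  {path}")
    ```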

2018 Symposia

A Ubiquity symposium is an organized debate around a proposition or point of view. It is a means to explore a complex issue from multiple perspectives. An early example of a symposium, on teaching computer science, appeared in Communications of the ACM (December 1989).

To organize a symposium, please read our guidelines.


Ubiquity Symposium: Big Data

Table of Contents

  1. Big Data, Digitization, and Social Change (Opening Statement) by Jeffrey Johnson, Peter Denning, David Sousa-Rodrigues, Kemal A. Delic
  2. Big Data and the Attention Economy by Bernardo A. Huberman
  3. Big Data for Social Science Research by Mark Birkin
  4. Technology and Business Challenges of Big Data in the Digital Economy by Dave Penkler
  5. High Performance Synthetic Information Environments: An integrating architecture in the age of pervasive data and computing by Christopher L. Barrett, Jeffrey Johnson, and Madhav Marathe
  6. Developing an Open Source "Big Data" Cognitive Computing Platform by Michael Kowolenko and Mladen Vouk
  7. When Good Machine Learning Leads to Bad Cyber Security by Tegjyot Singh Sethi and Mehmed Kantardzic
  8. Corporate Security is a Big Data Problem by Louisa Saunier and Kemal Delic
  9. Big Data: Business, technology, education, and science by Jeffrey Johnson, Luca Tesei, Marco Piangerelli, Emanuela Merelli, Riccardo Paci, Nenad Stojanovic, Paulo Leitão, José Barbosa, and Marco Amador
  10. Big Data or Big Brother? That is the question now (Closing Statement) by Jeffrey Johnson, Peter Denning, David Sousa-Rodrigues, Kemal A. Delic