
Jeffrey H. Johnson Collection

  • COVID-19 and computation for policy

    Governments across the world are formulating and implementing medical, social, economic, and other policies to manage the COVID-19 pandemic and protect their citizens. Many governments claim that their policies follow the best available scientific advice, and much of that advice comes from computational modeling. Two of the main types of model are presented: the SIR (Susceptible, Infected, Recovered) model developed by Kermack and McKendrick in the 1920s, and the more recent agent-based models. The SIR model gives a good intuition of how epidemics spread, including how mass vaccination can contain them. It is less useful than agent-based modeling for investigating the effects of policies such as social distancing, self-isolation, wearing face masks, and test-trace-isolate. (A minimal SIR sketch follows this summary.)

    Politicians and the public have been perplexed to observe the lack of consensus in the scientific community and the absence of any single 'best science' to follow. The outcomes of computational models depend on the assumptions made and the data used. Different assumptions will lead to different computational outcomes, especially when the available data are so poor. This leads some commentators to argue that the models are wrong and dangerous. Some may be, but computational modeling is one of the few ways available to explore and try to understand the space of possible futures. This lack of certainty means that computational modeling must be seen as just one of many inputs into the political decision-making process. Politicians must balance all the competing inputs and make timely decisions based on their conclusions, be they right or wrong. In the same way that democracy is the least worst form of government, computational modeling may be the least worst way of trying to understand the future for policy making.

    ...
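
    The SIR model mentioned above reduces to three coupled ordinary differential equations. The sketch below is a minimal illustration, not code from the article; the parameter values (beta, gamma, population size) are hypothetical and chosen only to show the shape of an epidemic curve and the vaccination threshold.

      # Minimal Kermack-McKendrick SIR model; all parameter values are illustrative, not from the article.
      import numpy as np
      from scipy.integrate import odeint

      def sir(y, t, beta, gamma):
          """Right-hand side of the SIR equations: dS/dt, dI/dt, dR/dt."""
          S, I, R = y
          N = S + I + R
          dS = -beta * S * I / N
          dI = beta * S * I / N - gamma * I
          dR = gamma * I
          return dS, dI, dR

      beta, gamma = 0.3, 0.1            # hypothetical contact and recovery rates (per day); R0 = beta/gamma = 3
      N = 1_000_000                     # hypothetical population size
      y0 = (N - 1, 1, 0)                # one initial infection, everyone else susceptible
      t = np.linspace(0, 300, 301)      # simulate 300 days

      S, I, R = odeint(sir, y0, t, args=(beta, gamma)).T
      print(f"Peak infections: {I.max():,.0f} on day {t[I.argmax()]:.0f}")

      # Mass vaccination can be explored by moving a fraction of the population from S to R at t = 0:
      # the epidemic cannot take off once S/N falls below gamma/beta (i.e. 1/R0).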
  • Big data: big data or big brother? That is the question now.

    This ACM Ubiquity Symposium presented some of the current thinking about big data developments across four topical dimensions: social, technological, application, and educational. While 10 articles can hardly touch the expanse of the field, we have sought to cover the most important issues and provide useful insights for the curious reader. More than two dozen authors from academia and industry shared their points of view, their current focus of interest, and their outlines of future research. Big digital data has changed, and will continue to change, the world in many ways. It will bring some big benefits in the future but, combined with big AI and big IoT devices, it creates several big challenges. These must be carefully addressed and properly resolved for the future benefit of humanity.

    ...
  • Big Data: Business, Technology, Education, and Science: Big Data (Ubiquity symposium)

    Transforming the latent value of big data into real value requires the intelligence and dedicated application of human data scientists. Data scientists are expected to have a wide range of technical skills alongside being passionate, self-directed people who are able to work easily with others and deliver high-quality outputs under pressure. There are hundreds of university, commercial, and online courses in data science and related topics. Apart from people with breadth and depth of knowledge and experience in data science, we identify a new educational path to train "bridge persons" who combine knowledge of an organization's business with sufficient knowledge and understanding of data science to "bridge" between non-technical people in the business and the highly skilled data scientists who add value to it. The increasing proliferation of big data and the great advances made in data science do not usher in an era where all problems can be solved by deep learning and artificial intelligence. Although data science opens up many commercial and social opportunities, it must complement other sciences in the search for new theory and methods to understand and manage our complex world.

    ...
  • High Performance Synthetic Information Environments: An integrating architecture in the age of pervasive data and computing: Big Data (Ubiquity symposium)

    The complexities of social and technological policy domains, such as the economy, the environment, and public health, present challenges that require a new approach to modeling and decision-making. The information required for effective policy and decision-making in these complex domains is massive in scale, fine-grained in resolution, and distributed over many data sources. Thus, one of the key challenges in building systems to support policy informatics is information integration. Synthetic information environments (SIEs) present a methodological and technological solution that goes beyond the traditional approaches of systems theory, agent-based simulation, and model federation. An SIE is a multi-theory, multi-actor, multi-perspective system that supports continual data uptake, state assessment, decision analysis, and action assignment based on large-scale high-performance computing infrastructures. An SIE allows rapid course-of-action analysis to bound the variance in outcomes of policy interventions, which in turn enables the short time-scale planning required in response to emergencies such as epidemic outbreaks. (A schematic sketch of this loop follows this summary.)

    ...
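
    The SIE loop described above (continual data uptake, state assessment, decision analysis, action assignment) can be pictured with a small schematic. The Python sketch below is purely illustrative: the class and method names are hypothetical and the scoring logic is a toy placeholder, not part of any actual SIE implementation.

      # Hypothetical sketch of the uptake -> assess -> analyze -> assign loop of an SIE.
      from dataclasses import dataclass, field
      from typing import Callable, Dict, List

      @dataclass
      class SyntheticInformationEnvironment:
          sources: List[Callable[[], Dict]] = field(default_factory=list)  # registered data feeds
          state: Dict = field(default_factory=dict)                        # synthetic population state

          def uptake(self) -> None:
              """Pull the latest observations from every registered data source."""
              for source in self.sources:
                  self.state.update(source())

          def assess(self) -> float:
              """Reduce the current state to a single severity score (toy placeholder)."""
              return float(self.state.get("new_cases", 0)) / max(self.state.get("tests", 1), 1)

          def analyze(self, interventions: List[str]) -> Dict[str, float]:
              """Score each candidate course of action (toy heuristic, for illustration only)."""
              severity = self.assess()
              return {name: severity * (0.5 if "isolate" in name else 1.0) for name in interventions}

          def assign(self, scores: Dict[str, float]) -> str:
              """Select the intervention with the lowest projected severity."""
              return min(scores, key=scores.get)

      # One pass of the loop with a fake data feed.
      sie = SyntheticInformationEnvironment(sources=[lambda: {"new_cases": 120, "tests": 4000}])
      sie.uptake()
      print(sie.assign(sie.analyze(["do_nothing", "test_trace_isolate"])))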