Volume 2013, Number March (2013), Pages 1-16
Phil Yaffe has provided numerous commentaries on various aspects of professional communication, which have helped readers more effectively articulate their own ideas about the future of computing. Here he tells us how scientists see the world (the "scientific approach," he calls it) because he thinks many non-scientists see the world in a similar way. This realization can lower barriers of communication with scientists.
Peter J. Denning
Many people believe understanding science requires a special kind of thinking, i.e. that your brain must be wired differently from other people's. According to Albert Einstein, who knew a thing or two about it, "The whole of science is nothing more than a refinement of everyday thinking."
In other words, scientific thinking is just an extension of the way you already think. This is excellent news, because it means that people who say they are incapable of understanding science are probably wrong. They do understand science, but were never aware of it.
Realizing this is extremely important, because most people who think they don't understand science are constantly being called upon to vote on issues about science. To cite just a few examples:
- Should we be so concerned about global warming as to spend billions and billions of dollars to fight it?
- Should we ban nuclear energy as being too dangerous and too untrustworthy?
- Are genetically modified foods potentially so damaging that they should be prohibited even if they may be the best hope for feeding the world's burgeoning populations?
- Should homeopathy and other alternative medical treatments be recognized as legitimate, and paid for by government and private insurance plans?
What distinguishes scientific thinking from ordinary thinking is only that scientists do it in a more rigorous way. Scientists are practiced in constructing packages of evidence to support their claims. Other people admire that.
Almost everyone has heard of the scientific method and understands it has something to do with setting up hypotheses and then running experiments to test these hypotheses. This is essentially correct, and not difficult to understand. What seems to cause most of the problems is lack of understanding of the "scientific approach," without which the scientific method would be useless.
The scientific approach is a set of fundamental principles to which all scientists should adhere. These fundamental principles are the intellectual raw materials from which the scientific method was fashioned.
There are 16 of these principles or "character traits" of the scientific approach. Don't be put off by this apparently unwieldy number. Many of the traits are closely interrelated, and all of them are just common sense. So remembering and using them should present little difficulty.
1. Science is Based on Faith
Don't be shocked by this statement; it has no religious content. It simply means that science relies on assumptions. If you remember your high school geometry, you will recall everything depended on a handful of axioms, i.e. accepted but unproved assumptions on which geometers rely to build their proofs. But geometry is not the only branch of mathematics that depends on unproved assumptions. All mathematics does, as does all science.
The fact that science is based on faith (axioms) seems to be a mystery to people who should know better. A few years ago, a leading international newspaper ran an editorial article by a "thinker" denouncing science for not admitting that, like religion, it is based on faith, which evoked the following response.
Re: Having faith in science
Every scientist worthy of the name knows that science is based on faith. Even mathematics, the purest of the sciences, is based on axioms, i.e. unproved but necessary laws. The difference between science and religion is that scientific faith is constantly being tested and, when found wanting, constantly being adjusted (sometimes acrimoniously) to fit the facts. Non-Euclidean geometries, quantum mechanics, matter-antimatter, and relativity are telling examples.
Paul Davies asserts: "(Science's) claim to be free of faith is manifestly bogus." Hogwash! It is the claim that science makes such a claim that is bogus. Mr. Davies, of all people, should know this.
2. Science Thrives on Lack of Faith
Again don't be shocked by this statement; it has no religious content. It simply means that whenever one of its axioms (assumptions) comes into doubt, science is ready to investigate it and, if found wanting, to throw it out.
For example, one of the axioms you learned in high school geometry was that if you have a straight line and a point located off the line, only one new line can be drawn through that point parallel to the first line. In other words, given a situation like this ( / . ), only one line can be drawn through the point to give a second line parallel to the first (/ /).
In the mid-19th century, some mathematicians began seriously questioning whether this apparently "self-evident truth" was really correct. So they did some investigating and concluded that it wasn't. These investigations were the source of what became known as "non-Euclidean geometry."
What are the fundamental assumptions of science?
This is not an easy question to answer because different people come up with different lists. Here are four assumptions on which most people seem to agree.
- The world is real. In other words, we are not lying in a coma, imagining that things are happening. They really are.
- The real (physical) world is knowable and comprehensible. If we take the time and trouble to do so, we can progressively come to understand the world.
- There are laws that govern the real world. The world operates as a system with internal logic.
- The laws that govern the real world are knowable and comprehensible. We can discover these laws and learn to use them to understand what has already happened, what is currently happening and, most importantly, what is going to happen. This is called "prediction."
For many people, these four assumptions are enough. However, in this age of space exploration, a fifth assumption is becoming increasingly important.
- The laws that govern the real world are essentially the same everywhere in the universe. In other words, the laws of gravity we know on Earth are the same as those that operate on other planets around other stars; the chemical reactions we know on Earth are the same as those that operate on other planets around other stars, etc.
Therefore, when we finally travel to other planets and other stars, everything we have learned about how things work here on Earth will still be valid way out there. However, if we discover things work differently out there, we shouldn't be surprised. After all, it is an assumption.
3. Science is Surprising
It is common sense to believe the Sun moves while the Earth is stationary; we see this happening every day. But we know it isn't true. It is also common sense not to believe a small lump of uranium could release as much energy as 20,000 tons of dynamite, but we know this is true.
Scientists sometimes make discoveries so surprising that they totally transform our perception of the world and fundamentally challenge our most deep-seated ideas about how the world works.
Imagine the shock when Antonie van Leeuwenhoek (1632–1723), a pioneering developer of the microscope, turned his crude instrument on what he thought was just a bowl of pond water and found it teeming with thousands of tiny, previously invisible plants and animals. The microscope had revealed a whole new plane of life no one had ever suspected existed. His discoveries eventually led to major advances in physiology and Pasteur's germ theory of disease, on which much of modern medicine is based.
In short, the raison d'être of science is to go beyond the obvious to determine what is real (in the physical, not the metaphysical, sense), often with very surprising, counter-intuitive results. This counter-intuitiveness seems to be why many people are suspicious of science. While they accept the Earth orbits the Sun, they do not accept there were ever astronauts on the Moon because intuitively they believe it to be impossible.

Common sense (intuition) and science should never be put into opposition, for the very good reason that they have little or nothing to do with each other.
As noted by biochemist (and nonpareil science and science-fiction writer) Isaac Asimov: "The most exciting phrase to hear in science, the one that heralds new discoveries, is not 'Eureka!' (I found it!) but 'That's funny...'"
4. Science is Open-minded
In discussing eminent astronomer and science popularizer Carl Sagan, Joshua Gough wrote the following:
The dogmatist knows all the answers. He or she accepts no criticism and opens no ears. Merely questioning the dogmatist amounts to overt and intolerable criticism ipso facto in his or her mind. The dogmatist listens to no questions and throws all criticism onto the scrap heap.
The true scientist has more questions than answers. He or she explicitly seeks criticism and opens ears to others. Strong questions about the scientist's ideas afford opportunities to confirm or deny their validity. The scientist must accept questions and readily understands that he or she may be forced to throw cherished ideas onto the scrap heap.
Even the most distinguished of scientists sometimes tend to put up bulwarks against new ideas when they differ too radically from those they already hold. Albert Einstein, whose thoughts on time, space, matter, and energy so surprised his professional colleagues (let alone the general public) in the early 20th century, was himself so astounded by some aspects of the then-emerging science of quantum physics that he simply refused to believe them. He was particularly vexed by quantum physics' contention that at the subatomic level, cause and effect break down, i.e. you can never tell with certainty which of several possible effects will be produced by a given cause. To which he retorted, "God does not play dice with the universe."
Practically and psychologically, it is never easy to jettison what one already knows and has confidence in, in order to replace it with something radically new whose bona fides have not yet been fully established. The continuing battle about global warming, between the majority of scientists who after years of discussion now accept the idea and the minority who are still skeptical, is only the latest case in point. However, the number of recruits to the cause has now become so overwhelming that the naysayers are seen less and less as legitimate critics and more and more as irrational holdouts.
This is frequently how science proceeds:
- First, someone advances a hypothesis.
- Second, many others subject it to severe scrutiny.
- Third, only when it accumulates enough allies does it become scientific "truth."
It is of course always possible for a hypothesis later to be discredited by new evidence. However, until that time (if ever), the hypothesis takes its place as another sturdy plank in science's ever more solid foundation of reliable ideas.
5. Science is Skeptical
No scientific "truth" is ever fully safe because it is never possible to prove a positive, only a negative. For example, if you do something 1,000 times in a row and it always works, you cannot be absolutely certain that it won't fail on the 1,001st try. If it again succeeds on try 1,001, you have the same problem with try 1,002, 1,003, 1,004, etc.
To put it into less abstract terms, suppose a racehorse wins 10 times in a row. You cannot be absolutely certain that it will win the 11th time; however, this would probably be the way to bet.
Here is another example of this idea, which exposes a common scientific fallacy.
Suppose you flip a coin and it comes up heads nine times in a row. Someone bets you $50 against $5 that on the next flip it will come up tails. Should you take the bet? Absolutely! The coin has no memory. It doesn't know the flip has already come up heads nine times in a row, so the chances that it will come up heads again are just the same as if the previous nine flips had never happened, i.e. 50-50 (one out of two).
However, if someone bets you $50 against $5 that you couldn't flip and get 10 heads in a row before you start, you would be very wise to refuse. The odds against flipping 10 heads in a row from the first flip are more than a thousand to one; you would almost certainly lose.
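The two bets can be checked with a few lines of arithmetic. This Python sketch simply transcribes the numbers from the example above; the variable names are my own:

```python
from fractions import Fraction

# A fair coin: probability of heads on any single flip.
p_heads = Fraction(1, 2)

# The coin has no memory: after nine heads in a row, the tenth flip
# is still a 50-50 proposition.
p_next_flip = p_heads
print(p_next_flip)  # 1/2

# But the chance of ten heads in a row, computed BEFORE the first flip,
# is (1/2)^10: odds of 1,023 to 1 against.
p_ten_in_a_row = p_heads ** 10
print(p_ten_in_a_row)  # 1/1024

# Expected value of the first bet: win $50 with probability 1/2,
# lose $5 with probability 1/2. Well worth taking.
ev_first_bet = Fraction(1, 2) * 50 - Fraction(1, 2) * 5
print(float(ev_first_bet))  # 22.5
```

The asymmetry between the two bets is entirely in *when* the probability is computed: per flip, or over the whole sequence in advance.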
On the other hand, a theory that consistently predicts the results of an experiment or application (theories are supposed to be able to predict things) is not a 100 percent guarantee that the theory is correct. It is always possible that you have not yet tried an experiment or application that would show up a flaw. As Einstein said about relativity, "No amount of experimentation can ever prove me right; a single experiment can prove me wrong."
6. Science is Arbitrary
Many scientific terms are arbitrary; someone or some committee has decided that this is how we should talk about these things, so that is what we do. This is not unreasonable. Unless we agree on what we are talking about, it is unlikely that we will have a useful discussion.
For example, you may have heard of the controversy about whether Pluto, discovered in 1930, should still be considered a planet or classified as something else.
Closer to home, you may have heard that the tomato is really a fruit, not a vegetable. But why? Ask yourself what you mean by the words "fruit" and "vegetable" and you will probably discover that you really don't know. It's just that as you were growing up, you always heard people referring to certain plants as fruits and others as vegetables, so you learned to do the same.
Scientists can't be so casual. Botanists have very carefully defined what they mean by "fruit" and "vegetable." Study of the tomato has shown that it meets the definition of a fruit. You might be surprised to learn that eggplants, cucumbers, and squashes such as zucchini and pumpkins also meet the definition of a fruit. In common conversation, if you prefer you may still refer to these plants as vegetables, but you should be aware that this is custom, not science.
On a more serious note, fairly recently in medicine there has been a change in the definition of "death." At one time death was considered to have occurred when someone's heart stopped beating; however, modern medical equipment can keep the heart beating long after the body would be able to do so itself. This led to revising the definition of death to be when the brain is no longer capable of functioning, so-called "brain death."
7. Science is Cumulative
Some people harbor the idea that many scientific laws and theories are eventually overthrown, so nothing in science can ever be trusted.
This is an overstatement. People who say Einstein proved Newton was wrong, and that someday someone will prove Einstein was wrong, are making a serious error. Einstein extended Newton by considering cases Newton did not think of.
By definition, science seeks to provide the best possible description or explanation of what is currently known. It is normal that as new things are discovered, descriptions and explanations need to be modified. This does not mean that the previous descriptions and explanations were wrong, only inadequate. Since we never know everything about anything, this is only normal.
Or to quote Newton, who had reason to know what he was talking about, "If I have seen far, it is because I have stood on the shoulders of giants." Newton is still very much a giant, as is Einstein.
8. Science is Simplicity
A fundamental quest among scientists is to uncover commonality among things that on the surface seem to be dramatically different. For example, it is not at all evident that the reason objects fall to the ground is the same reason that the Earth and other planets orbit around the Sun. Isaac Newton's elaboration of this idea greatly simplified things by showing that one and the same force, gravity, is responsible for both phenomena.
Another way science seeks simplicity is in how it defines units of measure. Whenever possible, units are defined in such a way that they describe the mathematics needed to use them. For example, a calorie is defined as the amount of heat energy required to raise the temperature of 1 gram (1 milliliter) of water by 1° Celsius.
Suppose, as an example, that we want to calculate how much energy is needed to raise the temperature of 5 milliliters of water by 8°C. The definition talks about 1 ml, but we have 5, so we must multiply by 5. It also talks about 1 degree, but we have 8 degrees, so we must also multiply by 8. 5 x 8 = 40. The answer is: it requires 40 calories to raise the temperature of 5 ml of water by 8°C.
If you are not used to the calorie, which is a metric unit, try the same thing in British Thermal Units. A BTU is defined as the amount of heat energy required to raise the temperature of 1 pound of water by 1° Fahrenheit. The same logic applies: raising the temperature of 5 pounds of water by 8°F requires 5 x 8 = 40 BTUs.
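Because both units are defined as "1 unit of water x 1 degree," the same multiplication works for either. A minimal Python sketch (the function name is my own, not a standard one):

```python
def heat_energy(amount_of_water, temperature_rise):
    """Heat energy in a unit defined as '1 unit of water x 1 degree.'

    For calories: amount in milliliters of water, rise in degrees Celsius.
    For BTUs: amount in pounds of water, rise in degrees Fahrenheit.
    """
    return amount_of_water * temperature_rise

print(heat_energy(5, 8))  # 40 calories: 5 ml of water raised by 8 deg C
print(heat_energy(5, 8))  # 40 BTUs: 5 lb of water raised by 8 deg F
```

The point is that a well-chosen unit makes the formula invisible: the definition itself tells you to multiply.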
Scientists frequently "follow the units" to calculate answers. Suppose, for example, that a scientist runs field experiments from a generator. At the end of the month, the lab records show 2,000 kilowatt-hours of electricity usage. If the price for the generator to produce 1 kilowatt-hour is $2, the monthly cost of the generator is 2,000 kilowatt-hours x $2/kilowatt-hour = $4,000. The average power output of the generator that month is 2,000 kilowatt-hours / (24 hours per day x 30 days per month) = 2000/720 ≈ 2.8 kilowatts.
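"Following the units" can be transcribed directly into code, with each unit conversion recorded as a comment. The figures are the ones from the generator example:

```python
usage_kwh = 2_000         # kilowatt-hours recorded for the month
price_per_kwh = 2.0       # dollars per kilowatt-hour
hours_in_month = 24 * 30  # hours/day x days/month (a 30-day month)

# kilowatt-hours x dollars/kilowatt-hour -> dollars
monthly_cost = usage_kwh * price_per_kwh
print(f"${monthly_cost:,.0f}")  # $4,000

# kilowatt-hours / hours -> kilowatts (average power output)
average_power_kw = usage_kwh / hours_in_month
print(f"{average_power_kw:.1f} kW")  # 2.8 kW
```

Notice that the units on each line dictate whether to multiply or divide; if the units don't cancel to give the unit you want, the calculation is wrong.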
9. Science is Precision
Scientists love to measure things. Mainly to see if their measurements confirm their theories, but also to see if they reveal unexpected patterns that can be converted into new theories.
The problem is, no matter how accurately things are measured, we can never be certain that they are accurate down to the last decimal point. In the vast majority of cases, the level of accuracy we can reach is sufficient, but occasionally it isn't. Discovering an extremely small variation in an extremely accurate measurement sometimes reveals new and unexpected scientific knowledge.
Another way scientists are precise is in the way they speak.
Suppose you enter a room where there are two other people and say, "It's very hot today." One of those people comes from Alaska; in his mind "hot" means 25°C (77°F). The other one comes from Texas; to him "hot" means 45°C (113°F).
You are off to a rather bad start because each one has a totally different idea of what you want them to know. But suppose you say, "It's very hot today; the temperature is 30°C (86°F)." Now there is no room for confusion. They both know quite clearly that it is 30°C outside and that you consider this to be very hot.
When scientists write or speak, they need for the audience's mind to go only where they direct it and nowhere else. This is also what the audience wants. After all, they are paying attention to find out what the scientist thinks; they already know what they think.
Because they can be interpreted in unknown ways, ambiguous terms (so-called "weasel words") such as "hot," "cold," "big," "small," "good," "bad," etc. allow the audience's mind to wander off into totally unpredictable directions, which defeats the purpose of the communication. This is why scientists are usually very precise in what they say.
Avoiding weasel words is good practice not only for scientists, but also for almost everyone, whatever their walk of life.
10. Science is Predictability
An absolute, fundamental, indispensable sine qua non of science is predictability, i.e. the ability of science to predict what is going to happen long before it does.
For our early ancestors, the most obvious things to be predicted were the changes of the seasons, the rising and setting of the Sun, the phases of the Moon, and the movement of heavenly bodies across the sky. The ability to predict lunar eclipses, and especially solar eclipses, was close to wizardry. Even today, we are still obsessed by predicting the weather.
Scientific predictions don't always have to go forward in time. Based on what we have already found out and analyzed, sciences such as anthropology, geology, and paleontology predict what still may be hidden, then set about finding it. Physics does the same thing. The Large Hadron Collider, buried in a 27-kilometer (17-mile) tunnel beneath the Franco-Swiss border near Geneva, was built in part to look for certain subatomic particles predicted by theory but not yet actually observed.
In short, any theory that cannot predict what is still undiscovered is not a scientific theory.
This does not mean that predictions must be 100 percent accurate, but they must be sufficiently close to reality to inspire confidence. Any disparities between prediction and observation should be treated as invitations to investigate further in order to refine the theory or eventually replace it with a better one.
11. Science is Probability
No scientific theory can be proved, only disproved; yet scientists must live in the day-to-day world. So what do they do? They rely on probability. Scientists select the theories where the overwhelming preponderance of evidence suggests they are correct and assume they are correct until such time as they discover a negative example.
F=ma, Newton's second law of motion, is a pertinent example. We assume it is correct because we have yet to discover a case where it fails; however, as noted in character trait 2, we assume physical laws are the same everywhere in the universe. Newton's second law may be deterministically true in our little sliver of the universe, but it could fail somewhere else. Based on long experience, the assumption that the law is universal has a probability of being correct, but it is still an assumption, not a certainty.
Medicine provides a more down-to-earth example. Before a new pharmaceutical product can be released for use, approval must first be obtained from national health authorities. In general, this means submitting a registration file containing massive amounts of data, most importantly with regard to efficacy and safety. Clinical trials (tests on volunteers) are organized under very strict rules to obtain this information.
Not all patients will respond to the drug, or respond in the same way. So there are two crucial questions:
- Does the drug actually do anything, i.e. does it produce results significantly better than a placebo (sugar pill)?
- Will the new drug do enough good for enough patients that it deserves to be put into the hands of doctors? Clinical trials are designed to answer this and other questions.
The results are usually expressed in terms of probability, i.e. among all persons suffering from the malady the drug is intended to treat, what percentage are likely to benefit substantially from it? If the probability of benefit is high enough and the drug meets other criteria, it is released to the medical community. Otherwise, it must undergo additional trials or additional development, or be abandoned.
12. Science is Reproducibility
Many of us have engraved on our minds the picture of Archimedes sitting in a bath, contemplating. Suddenly he jumps up, shouts "Eureka!" (I have found it), then runs out into the street wearing a great big smile on his face, and nothing else. He had just discovered the principle of water displacement, which provided a means of measuring the volume of irregular objects. This represented a major gain in knowledge, and was probably accepted almost immediately.
Today, scientific breakthroughs almost never take place overnight, or at least they do not become accepted overnight.
Science is all about trying to discover things that are universally true, that do not depend on place and time. Therefore, it is not enough for one scientist to announce he has done something in his laboratory and it worked. He must give a detailed description of what he did so other scientists can try the same thing in their laboratories to see if it still works. This is why most scientific research papers include a lengthy section describing exactly what the author did, with what equipment, and following what procedure.
This is known as "reproducibility." Only when a number of scientists, independent of each other, have reproduced the experiment and come to the same conclusion can a discovery be integrated into the general body of science. Reproducibility is a cornerstone of modern scientific knowledge because it is essential to acquiring a sufficient number of allies for a hypothesis for it to become scientific "truth."
The recent claim from CERN that the Higgs boson exists shows this principle at work. The initial press release said they were "sure" they had found the boson but admitted some slight uncertainty. Another team performed its own analysis of the data and compared it with the first team's. The second team said they were "really, really sure" and put the possibility of error at one chance in 550 million. Reproducibility does not require that trials be run in separate locations, only that they be run totally independently of each other. Additionally, the raw data will probably be submitted to and analyzed by different experts around the world.
13. Science is Parsimonious
In searching for new ideas to explain how the world works, scientists follow the guiding principle of seeking simplicity to reduce complexity.
This principle is generally known as Occam's razor. It is attributed to the 14th-century English logician and Franciscan friar William of Occam. Although the idea was known and accepted long before, Occam's formulation is the one most widely quoted: "Entities must not be multiplied beyond necessity." Another popular formulation is: "Plurality should not be posited without necessity." In more up-to-date language, Einstein said, "Everything should be as simple as possible, but not simpler."
Adhering to Occam's razor does not mean among competing scientific theories the simplest one should always be preferred, because the simplest theory may be wrong. What scientists are looking for is the simplest possible theory that actually works, i.e. can be tested and shown to be accurate.
14. Science is History
The way scientists talk about things often depends on what was said about them when they were invented or discovered.
For example, why is the power of engines generally rated in horsepower and the power of light bulbs in watts? Quite simply, because when engines were first developed, their power was compared to that of horses, which were doing much of the work engines were designed to replace. The power of light bulbs is expressed in watts in honor of James Watt (1736–1819), the steam-engine pioneer after whom the standard unit of power was named.
You may have noticed that history is about to change. The introduction of new low-energy light bulbs means the watt is no longer useful for expressing their lighting performance; in fact, it never was. The watt is a unit of electrical power, i.e. how fast a device is using electrical energy. For light bulbs, it is therefore now being replaced by the "lumen," which is a measure of lighting intensity. Given their exceptionally long life, meaning very infrequent changes, it is likely that within a generation hardly anyone buying a new light bulb will even remember that watts and lighting ever had anything to do with each other.
15. Science is Human
Preconceived notions are ideas that are believed to be true without any firm reason for the belief. Certain groups reject the results of science because they do not conform to their preconceived notions of what they believe must be true rather than what can be demonstrated to be actually true.
Even scientists fall into this trap. For example, from the time of the ancient Greeks up until about the 16th century, astronomers believed the circle was geometrically "perfect" while other curves were somehow less noble. Therefore, the orbits of the Sun and planets around the Earth (when it was still believed that the Earth was the center of the universe) had to be circles.
This caused numerous problems because the observed movement of the Sun and planets in the sky did not seem to be circular. Astronomers therefore had to invent complex combinations of circles within circles in order to make the "anomalies" disappear.
The idea of the perfection of the circle persisted even when it had been generally agreed that the Sun is the center of the solar system and that the Earth and other planets orbit around it. Again, astronomers had to invent complex designs of circles within circles to explain the anomalies.
Finally, when Johannes Kepler (1571–1630) suggested these orbits might be ellipses, everything fell into place. Although "imperfect" compared to the circle, considering orbits to be elliptical gave accurate calculations and predictions. Suddenly everything became clear.
A similar thing happened in the history of chemistry. Organic chemistry (carbon-based molecules) was making great progress in the early and mid-1800s. However, there was a fundamental problem. Through experimentation, chemists were learning more and more about what reactions would take place by putting specific molecules together under specific conditions, but they didn't really know why.
At the time they believed that organic molecules were strings of atoms forming chains of different lengths. However, in many cases this idea did not allow them to look at the structure of different chains and predict how they would react with each other.
Then in 1865 Friedrich Kekulé proposed that benzene, which did not behave as if it were a chain, might in fact be a ring. In other words, instead of the ends moving about freely, they joined up to form a kind of circle.
This idea revolutionized organic chemistry.
16. Science is Humble
True scientists are humble about their achievements because they know that whatever they have accomplished, there is still so much more to be done. Each new level of understanding opens new mysteries to be explored. This aspect of science is best expressed in two quotations from Isaac Newton, one of which has already been cited.
- "I was like a boy playing on the seashore, and diverting myself now and then finding a smoother pebble or a prettier shell than ordinary, while the great ocean of truth lay all undiscovered before me."
- "If I have seen far, it is because I have stood on the shoulders of giants."
Science, the Eternal Frontier
The opening lines of "Star Trek," the iconic science-fiction television series, are:
Space: The Final Frontier
These are the voyages of the Starship Enterprise.
Its five-year mission,
To explore strange new worlds,
To seek out new life and new civilizations,
To boldly go where no man has gone before.
These few words dramatically set the scene for "Star Trek," one of the most successful TV shows of all time. However, it contains a serious misstatement. If you are talking about physical exploration, space may very well be the final frontier. If you are talking about intellectual exploration, space is only a small part of science, whose explorations are never-ending.
It probably wouldn't have been so dramatically effective, but a more accurate rendition of the Star Trek introduction would have been:
Science: The Eternal Frontier
These are the voyages of the human spirit.
Their never-ending mission,
To explore strange new phenomena,
To seek out new ideas and new insights,
To boldly go where no human mind has gone before.
And to keep going forever and ever. The universe may have had a beginning (Big Bang) and may have an end (Big Crunch). But between these two endpoints, humankind will always seek to discover and understand what is going on around it.
Science is a great gameand the universe is its playground.
This article is excerpted from the author's book Science for the Concerned Citizen: What You Don't Know CAN Hurt You.
Philip Yaffe was born in Boston, Massachusetts, in 1942 and grew up in Los Angeles, where he graduated from the University of California with a degree in mathematics and physics. In his senior year, he was also editor-in-chief of the Daily Bruin, UCLA's daily student newspaper.
He has more than 40 years of experience in journalism and international marketing communication. At various points in his career, he has been a teacher of journalism, a reporter/feature writer with The Wall Street Journal, an account executive with a major international press relations agency, European marketing communication director with two major international companies, and a founding partner of a specialized marketing communication agency in Brussels, Belgium, where he has lived since 1974. He is the author of 14 books, which can easily be found on Amazon Kindle.
©2013 ACM $15.00
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2013 ACM, Inc.