
Potions, Plot, Personalities: Understand How Science Progresses and Why Scientists Sometimes Disagree


In the sixth Harry Potter book, Harry Potter and the Half-Blood Prince, Harry developed a flair for making potions by following instructions handwritten in the margins of his potions textbook by the book’s previous owner. To make a Draught of Living Death, for instance, the handwritten notes in Harry’s book advised him to stir his potion clockwise after seven stirs in the opposite direction. The tiny tweak in the procedure helped Harry achieve potion perfection. Meanwhile, Harry’s brilliant friend, Hermione, who carefully followed the original textbook instructions line by line, became frustrated when she could not get her potions to turn out properly.

Of course, at Hogwarts School of Witchcraft and Wizardry, potion making relies on magic. Surely, in a university laboratory outside J. K. Rowling’s magical world, the synthesis of chemicals would not be affected by something as insignificant as how the chemicals are stirred? Surprisingly, when a published chemical reaction—the cleaving of bonds between carbon atoms—inexplicably stopped working, a frustrating eight-month investigation did indeed trace the problem to how the solution was stirred. Iron was leaching out of the well-used magnetic stir bar of the chemist who developed and published the chemical reaction. It turned out that the metal was important for catalyzing the reaction. Researchers attempting to replicate the reaction had unwittingly removed the catalyst because they were using a new stir bar with its metal core well sealed in its plastic casing.

There was no need to invoke the supernatural to explain the mystery of the failed reaction—the findings were published in the sedate chemistry journal Organometallics—but this example shows that science, like Harry Potter, has a plot with unexpected twists and turns. Because the science that comes to us in our daily lives is usually science-in-the-making, to make sense of it, it is essential to understand how science really progresses.

Brewing chemicals in a laboratory is a stereotype that comes to mind when we hear the word “scientist,” but scientists actually engage in a wide range of activities. Many scientists—for example, ecologists, archeologists, climatologists, and geologists—spend much of their time doing field research. This may involve documenting the behavior of animals in the wild to understand population declines, collecting ice cores in Antarctica and using gas bubbles trapped within them to gain information about changes in the earth’s atmosphere over time, or recording seismic activity near volcanoes or fault lines.

Of course, scientists often do spend considerable time in a laboratory, but the work they do there differs depending on several factors. Some of these include: whether the laboratories are affiliated with universities, hospitals, companies, zoos, or the government; how many scientists work there; how much funding they have; what kinds of research questions they focus on; what kind of equipment is used; and even where the labs are located. For example, physicists who study neutrinos—one of the fundamental particles that make up the universe—use special laboratory facilities a mile or more beneath the earth’s surface.

It should come as no surprise, then, that despite what most science textbooks may lead you to believe, there is no single method of doing science. This is one of three aspects of science frequently misrepresented by precollege and even college science courses. The second problem with these courses is that they leave the learner with the impression that science is merely an accretion of new ideas. However, in reality, controversy and revolutions in scientific thought are common features of science. Third, despite stereotypes of scientists as loners, interactions between scientists play many important roles in the progress of science. This chapter dispels these myths about scientific progress and shows how letting go of each one can make you a more critical consumer of the claims about science that come through the media and other sources.

“The scientific method”—not as easy as pi

Introductory science textbooks often lay out a neat set of steps they refer to as “the scientific method” and leave readers with the impression that this is all they need to know about how science is done. The steps most texts describe can be summarized more or less as follows:

  1. Develop a hypothesis.
  2. Design an experiment to test the hypothesis.
  3. Perform the experiment and collect data.
  4. Analyze the data collected.
  5. Decide if the data support or refute the hypothesis.
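Taken at face value, the five steps read like a program one could run. Purely as a caricature, here is a minimal Python sketch of that idealized loop, using a made-up coin-fairness experiment; the data, the decision rule, and all the numbers are invented for illustration:

```python
import random

# A deliberately tidy caricature of the five textbook steps, using a
# made-up coin-flipping experiment. Real research is far messier.

random.seed(42)  # fixed seed so the "experiment" is reproducible

# 1. Develop a hypothesis.
hypothesis = "the coin is fair (heads probability = 0.5)"

# 2. Design an experiment to test it: flip the coin 1000 times.
n_flips = 1000

# 3. Perform the experiment and collect data (here, a simulated fair coin).
flips = [random.random() < 0.5 for _ in range(n_flips)]

# 4. Analyze the data collected.
heads_fraction = sum(flips) / n_flips

# 5. Decide if the data support or refute the hypothesis.
#    Crude decision rule: within 3 standard errors of 0.5.
standard_error = (0.25 / n_flips) ** 0.5
supported = abs(heads_fraction - 0.5) < 3 * standard_error

print(f"Hypothesis: {hypothesis}")
print(f"Heads fraction: {heads_fraction:.3f}, supported: {supported}")
```

As the chapter goes on to argue, real investigations almost never reduce to such a tidy loop: choosing the hypothesis, the experiment, and the decision rule are each contested steps in themselves.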

This view of science is oversimplified and incomplete, and it sets people up for failure when they try to make sense of science in the real world. While it might be reasonable to give children a simplified view of science to begin with, the problem is that many people, even college students who major in science, never get to see what authentic science is like. With some notable exceptions, undergraduate science laboratories are cookbook exercises, and undergraduate lecture courses are just that—lectures, usually more about presenting facts to be memorized than discussing how those facts came to be. For those who go on to graduate school in the sciences, it is often a shock when it takes months to figure out why experiments are not working, that what initially seemed to be an exciting result is an error, or (for the lucky ones) that what seemed to be an error turns out to be an exciting result.

The process of testing hypotheses is not nearly as cut-and-dried as the textbook scientific method would lead one to believe. First, multiple hypotheses are possible, but the one that ultimately stands up to the test may not be apparent from the start. It may only be proposed after several other hypotheses have been eliminated. Second, there may be more than one type of experiment that can be done to test a hypothesis, and each possible experimental test will have its own set of pros and cons. These include time and cost required, expected accuracy of the results, feasibility of applying the results to other situations, ease of acquiring the necessary equipment, and amount of training needed to use that equipment. In some cases, the tools or techniques required to rigorously test the hypothesis may not exist. For example, geologists cannot physically probe the center of the earth. Instead they must make inferences about it based on seismic data. Third, data analysis is rarely simple and straightforward. Decisions must be made about whether to include data that appear spurious, what to do if experimental subjects dropped out of an experiment before it was over, and, as discussed in the next section, how to interpret data that were collected using new technologies. Finally, it may be possible to draw more than one conclusion from the same data. For example, if multiple factors can each play a role in causing something, it will likely take more than one experiment to tease them apart. A discussion of these caveats of designing experiments and interpreting data is usually absent from media reports about science.

With new tools, researchers can answer new questions—but only after the bugs are worked out

Over time, as new technologies develop, scientists can begin to test hypotheses they could not have tested in the past. But for the conclusions drawn from experiments using new procedures or new technologies to be accepted by the scientific community, other scientists must agree that the new technique does measure the effect of interest, and that what is being “observed” is real.

For example, chemists often want to know the structure of particular molecules. This information is used in many ways, including drug design. One way to determine a molecule’s structure is nuclear magnetic resonance (NMR) spectroscopy. NMR relies on the fact that when a molecule is placed in a magnetic field and probed using radio waves, the behavior of the nucleus of each atom depends on the identity of its neighboring atoms. A chemist can load a vial containing a sample of the molecules of interest into an NMR machine and get a graph that consists of a series of peaks. The structure of the molecule is inferred from this graph. The key word is “inferred.” The chemist operates on the assumption that the peaks correspond to atoms, and are not some artifact of the procedure like electrical surges or vibrations in the room.

NMR is a well-accepted experimental technique used every day by scientists all over the world. For a technique like NMR to become accepted, it must withstand a series of tests. For instance, if an older technique measures the same thing (presumably less efficiently), then the output of the new technique can be compared to that of the old. Alternatively, researchers can study the output of the new technique when it is used to analyze a set of known standards. For a new NMR technique, scientists could take chemicals that have a known molecular structure, run NMRs, and have other scientists, who did not know what the original samples were, interpret the graphs. If this can be done accurately and consistently over a wide range of samples, the technique can be used to identify unknown samples.
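The known-standards check described above can be sketched in code. Everything here is hypothetical (the sample names, the "true" concentrations, and the simulated instrument errors are all invented), but the logic mirrors the validation step: run the new technique on samples whose values are already known and ask whether it reproduces them within an agreed tolerance.

```python
# Hypothetical sketch of validating a new technique against known standards.
# Sample names, true values, and error values are invented for illustration.

# Known standards: each sample's true concentration (arbitrary units).
standards = {"standard_A": 10.0, "standard_B": 25.0, "standard_C": 50.0}

# Invented measurement errors standing in for the new instrument's behavior.
simulated_error = {"standard_A": 0.3, "standard_B": -0.4, "standard_C": 0.6}

def new_technique(sample):
    """Stand-in for the new instrument: true value plus measurement error."""
    return standards[sample] + simulated_error[sample]

# Blind check: does the technique reproduce the known values within tolerance?
tolerance = 2.0
readings = {s: new_technique(s) for s in standards}
validated = all(abs(readings[s] - standards[s]) <= tolerance for s in standards)

for s in standards:
    print(f"{s}: true {standards[s]:.1f}, measured {readings[s]:.1f}")
print("technique validated:", validated)  # → technique validated: True
```

Only after a technique passes checks of this kind, over a wide range of standards, does the community trust its readings on unknown samples.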

Even when the procedure or technology has been used for a time in one context, or to collect one type of data, applying it to collect another type of data, or to collect data under different conditions, may lead to disputes about what is really being observed. For example, a test that measures the concentration of a specific chemical may work well when the solution being tested is simple. On the other hand, when many other chemicals are present, they may participate in side reactions that interfere with the analysis. So the test may give accurate readings for well water or lake water, but may give false readings when applied to the analysis of blood samples or industrial waste. For this reason, new applications of procedures require careful consideration and verification.

Furthermore, although scientists may agree with each other on what they are observing with a given procedure, they may not agree on what the observations mean. For example, some brain scans allow scientists to measure blood flow to different regions of the brain. By studying changes in blood flow when people engage in different tasks—such as solving jigsaw puzzles, listening to music, or memorizing a list of words—scientists infer what regions of the brain are necessary for those tasks. But an increase in blood flow does not necessarily mean that region of the brain is “thinking.” Other scientists could accept that the scan is indeed measuring blood flow, while arguing that the increase in blood flow means that more messages are being sent through that region of the brain, rather than being processed there, or that the blood flow is due to an increase in cell maintenance and repair that occurs after a region of the brain has finished thinking. They might suggest further tests of the technique to address their concerns.

Uncertainty about what tool or procedure to use, and the risk that results are not what they appear to be, are problems common to all the scientific disciplines. The development of new tools allows scientists to answer questions they could not answer in the past, and the answers to those questions will lead to new questions, and so on. Therefore, new technologies and procedures are crucial to the progress of science. At the same time, other scientists unfamiliar with a new tool may express skepticism and call for others to replicate the experiments. Because this skepticism often comes to us in the form of sound bites, and because uncertainty about experimental tools is an aspect of science that is not familiar to most people, even people with a bachelor’s degree in science, the skepticism may seem like waffling. Waffling is annoying when you are trying to make decisions on the basis of the scientific information that comes your way. However, if a new technique is the source of the uncertainty, time and future experiments will confirm or disconfirm its usefulness and clear up the uncertainty.

Models play a critical role in the progress of science

Volcanoes are a real hit with kids. Build a hollow, cone-shaped structure from some simple household items, throw in some vinegar, red food dye, and baking soda, and whoosh—the eruption makes a big, foaming mess. Of course, while these science fair model volcanoes bear a superficial resemblance to real volcanoes, they function in a completely different way. Obviously, scientists looking for a system on which to conduct laboratory tests to better understand volcanic eruptions would not turn to the popular science fair volcano. This highlights a critical feature that distinguishes the kinds of models used to teach us science from the kinds of models that scientists use to understand the world. On the one hand, teachers and parents use model volcanoes to create excitement and give young students a physical object to which they can tie the earth science concepts they are learning. Likewise, a teacher may use ping pong balls to show how molecules of a gas bounce off each other and the sides of a container. For the purpose of helping students understand difficult scientific concepts, it does not matter that real magma behaves very differently than baking soda and vinegar, or that ping pong balls do not really mimic the behavior of gas molecules. These models make science more visual and are practical teaching tools.

On the other hand, if the goal is to use a model to test hypotheses about how things work in the real world, the features of an ideal model are very different. In that case, the model does not have to look like its real-world counterpart; it just has to act like it. For example, to understand what is happening in a cell when it switches between different types of fuel (carbohydrate, fat, protein), a plastic model of the cell showing all of the cell’s organelles is completely inadequate. Considerably more useful is a computer program that simulates all of the major processes and chemical reactions in the cell.

Scientists use many different types of models, but in recent decades as computers have become increasingly powerful, computer simulations have become essential tools for scientists studying all kinds of complex systems. For example, computational models are used to understand the biological processes occurring within organisms, the functioning of ecosystems of organisms, the evolution of the universe, and changes in climate. One kind of computer simulation is like the simulations used to make special effects in movies and computer games in that it aims to create a visual representation of reality (or unreality, in the case of some games and movies). Scientists use these kinds of simulations, for example, to determine the three-dimensional structure of proteins that play a role in different diseases. Knowing the structure of a protein makes it feasible to design a drug that can bind to the protein and modify its function. The second type of computer simulation is considerably more abstract and mathematical. Its output may not visually represent reality at all. Instead, it is used to determine what may occur given a specific set of initial conditions. Will the death of a star of a certain size give rise to a black hole? Given certain patterns of use of a new antibiotic, how long will it take before bacteria that are resistant to that antibiotic become widespread? How many degrees will global temperatures rise if we continue to emit greenhouse gases at the current rate?
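The antibiotic-resistance question above is the kind a simulation of the second, more abstract type might address. The following toy model is not a real epidemiological model; the growth rule and every number in it are invented, but it shows the shape of such a calculation: feed in an initial condition (how strongly antibiotic use favors resistant bacteria) and project how long until resistance is widespread.

```python
# Toy illustration of an abstract "what happens given these initial
# conditions?" simulation. The logistic growth rule and all numbers
# are invented for the sketch; this is not a real epidemiological model.

def years_until_widespread(selection_rate, initial_fraction=0.001,
                           threshold=0.5):
    """Years until the resistant fraction of bacteria passes `threshold`,
    assuming simple logistic growth with one time step per year."""
    fraction = initial_fraction
    years = 0
    while fraction < threshold and years < 500:  # cap to guarantee an exit
        fraction += selection_rate * fraction * (1 - fraction)
        years += 1
    return years

# Heavier antibiotic use -> stronger selection -> faster spread.
for rate in (0.1, 0.3, 0.6):
    print(f"selection rate {rate}: widespread in "
          f"~{years_until_widespread(rate)} years")
```

The point is not the specific outputs but the structure: change the assumed initial conditions and the simulation projects a different future, which is exactly why the assumed parameter values attract so much scrutiny.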

Discussions in the media about global climate change frequently mention climate models, and “model-bashing” is a favorite pastime of climate change skeptics. The term “climate model” may bring to mind the familiar television weather map with its movements of air masses, clouds, and precipitation, but climate models are more mathematical and complex than weather forecasts. Rather than predicting the movements of air masses a few days in advance (which is a challenge in itself—no matter what the Weather Channel says, pack an umbrella just in case), climate models deal with larger regions over longer time scales. A considerable number of factors (in scientific lingo—parameters) must be included in climate models. What are the patterns of greenhouse gas emissions, and what quantity of greenhouse gases can be expected to accumulate during the time period under consideration? How much will each greenhouse gas (carbon dioxide, methane, water vapor, and so on) contribute to warming? How will the increase in concentration of water vapor in the atmosphere affect cloud formation? How will the clouds influence temperature? What will be the concentration of atmospheric particles like soot that can act as seeds to trigger cloud formation? What other effects will the atmospheric particles have? How significantly will the warming reduce ice and snow cover, and how much will the resulting decrease in reflectivity further enhance the heating at the earth’s surface? How will the uptake of carbon dioxide by plants and the ocean be affected by warming? How could the warming predictions be affected by other natural sources of climate variation, such as cyclic variations in the sun’s output or volcanic activity on Earth? Whew!

The need to take all of these different parameters into account means that climate models require tremendous computational power. Supercomputers are often used to do the number crunching. In addition, developing the climate model is not simply a matter of devising mathematical equations to account for each parameter. None of the values of the parameters is known for certain, and each is the focus of ongoing research. As new data become available, models are updated accordingly. Models must also be tested. The models are used to make predictions about the world, and then refined based on their ability to mimic reality. As a result, models improve with time and further research. Current climate models are better than past models, but because so many factors are still uncertain, predictions of future temperature increases vary widely. The range of these predictions will likely narrow as each of the parameters becomes better understood.
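To make the idea of parameters concrete, here is a deliberately crude "climate model": a zero-dimensional energy balance in which incoming sunlight is weighed against outgoing infrared radiation, with reflectivity (albedo) and a single lumped greenhouse parameter. The parameter values are chosen only for illustration; real climate models couple many interacting parameters across a three-dimensional grid. The sketch shows just one thing the text describes: the predicted temperature shifts whenever any parameter's assumed value shifts.

```python
# A crude zero-dimensional energy-balance "climate model" (one-layer
# greenhouse approximation). Parameter values are illustrative only.

SOLAR_CONSTANT = 1361.0  # incoming sunlight, W/m^2
SIGMA = 5.670e-8         # Stefan-Boltzmann constant, W/m^2/K^4

def surface_temperature(albedo=0.30, greenhouse=0.77):
    """Equilibrium surface temperature in kelvin.

    `albedo` is the fraction of sunlight reflected (ice, clouds, ...).
    `greenhouse` lumps all greenhouse gases into one atmospheric
    absorptivity (a stand-in for CO2, methane, water vapor, ...).
    """
    absorbed = SOLAR_CONSTANT * (1 - albedo) / 4  # averaged over the sphere
    # One-layer balance: (1 - greenhouse/2) * SIGMA * T^4 = absorbed
    return (absorbed / (SIGMA * (1 - greenhouse / 2))) ** 0.25

baseline = surface_temperature()
warmer = surface_temperature(greenhouse=0.79)                # more greenhouse gas
less_ice = surface_temperature(albedo=0.28, greenhouse=0.79) # melting lowers albedo

print(f"baseline:            {baseline:.1f} K")
print(f"more greenhouse gas: {warmer:.1f} K (+{warmer - baseline:.1f})")
print(f"plus lower albedo:   {less_ice:.1f} K (+{less_ice - baseline:.1f})")
```

Even in this cartoon, nudging the greenhouse parameter warms the planet, and letting that warming shrink the ice cover (lower albedo) amplifies the effect, which is the feedback loop the paragraph above describes with its questions about reflectivity.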
