5 Things Scientists Wish More Non-Scientists Understood


A passion for science is increasingly fashionable.


Read a popular novel written in the early 20th century and you’ll see scientists treated with a certain degree of suspicion. They’re likely to be a bit weird, possibly obsessive, and almost certainly smelly (whether that’s body odour or the various noxious chemicals they’re assumed to be using). Your hero (or less often, heroine) is likely to be more brave than intelligent, certainly when it comes to ‘book smarts’, and will see that as nothing to be ashamed of.
In the early 21st century, by comparison, we expect our heroes to have a little more flair for matters academic – and especially science. Take the Avengers. Thor and Captain America follow the early 20th century pattern of more brawn than brain (plus a whole lot of heart) but Bruce Banner is a brilliant scientist and Tony Stark a genius engineer, while Black Widow uses her intellect in other ways, but clearly has a sharp, analytical mind.
Our cultural enthusiasm for science becomes all the more obvious when you compare this to any other academic field – it’s hard to think of a fictional hero who could analyse a poem, give you an accurate potted history of the French Wars of Religion or tell you whether the Tropic of Cancer is north of the Tropic of Capricorn. But a hero who can program a computer, analyse a chemical or explain how genome editing works? However implausibly the science might be written, we have lots of those.
Unfortunately, a cultural enthusiasm for science doesn’t always translate into a good cultural understanding of science. If you love science and you want to avoid annoying scientists, here’s what you’ll need to know.

1. ‘Theory’ doesn’t mean the same thing to us as it does to you

If it hasn’t been confirmed through experiment, it’s not a theory – yet.

Probably the single most annoying thing you can say to any scientist is, “but it’s only a theory…” It’s not surprising that this causes problems, given that scientists use the word ‘theory’ quite differently from the general population.
The term ‘theory’ in general usage means an idea or supposition; something that may be based on solid evidence but which is by no means certain. It makes sense that we use it in this way, given the origins of the word; it comes from the Greek ‘theōria’, meaning contemplation or speculation. A sense of doubt or uncertainty is assumed. If the well-meaning but dim sidekick in a detective novel says that he or she has a theory about who the murderer is, you can be sure they’ll be proven wrong in a chapter or two’s time.
Understanding what a scientific theory is depends on your understanding of the scientific method: roughly speaking, that you observe something, form a hypothesis, predict the outcomes of experiments if that hypothesis were correct, perform the experiments, and amend the hypothesis if the experiments don’t come out the way that you predicted. Your hypothesis gets upgraded to a scientific theory only at the point when it has repeatedly been confirmed through observation and experiment. The element of doubt only exists insofar as all science is founded on the principle of doubt – that if eventually you come up with an experiment that disproves the theory, you’ll accept that it was mistaken (rather than denying the facts) and either amend it to take the new evidence into account or scrap it altogether. But by the time anyone’s prepared to call it a theory, the chances of anything coming to light that would lead to it being scrapped altogether are remote.
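To make that loop concrete, here’s a minimal toy sketch in Python (everything in it – the coin, the hidden bias, the tolerance – is invented for illustration; no real experiment is this tidy). The ‘scientist’ starts from the supposition that a coin is fair, tests the prediction that supposition implies, and amends it when the prediction fails:

```python
# Toy sketch of the observe -> hypothesise -> predict -> test -> amend loop.
# All numbers here are invented for illustration.
import random

def flip_coin(n, true_heads_prob=0.6):
    """The 'world' under study; true_heads_prob is hidden from the scientist."""
    return sum(random.random() < true_heads_prob for _ in range(n))

hypothesis = 0.5  # initial supposition: the coin is fair
for trial in range(4):
    predicted = hypothesis                 # the prediction the hypothesis implies
    observed = flip_coin(10_000) / 10_000  # perform the experiment
    if abs(observed - predicted) <= 0.02:  # does observation match the prediction?
        print(f"trial {trial}: prediction held (heads fraction {observed:.2f})")
    else:
        print(f"trial {trial}: prediction failed ({observed:.2f}); amending hypothesis")
        hypothesis = observed              # revise to fit the evidence, then retest
```

Only once the amended hypothesis keeps surviving repeated, independent tests like these would it begin to earn the status of a theory.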
It’s no wonder that people get confused; even dictionaries don’t necessarily help. Oxford Living Dictionaries, for instance, is quite happy to conflate the two, pairing the definition “A supposition or a system of ideas intended to explain something, especially one based on general principles independent of the thing to be explained” (a good description of a non-scientific theory) with the example “Darwin’s theory of evolution” (a scientific theory that is a good deal more than “a supposition”). So restrain yourself from ever telling a scientist that something is “only a theory”; there’s nothing ‘only’ about it.


2. You can’t base too many conclusions on a single study

Less interesting or successful studies never see the light of day.

In particular, this is something that scientists would like tabloid journalists to learn. You’ll undoubtedly have seen headlines saying things like, “Drink three cups of coffee per day to beat cancer” or “People with blue eyes more likely to get diabetes”. Health-related stories are reliably the most popular for tabloid headlines, but the same principle holds for almost any unlikely claim.
There are some problems with these kinds of stories that you might already be able to spot. For instance, it might turn out that the study on coffee and cancer involved feeding caffeine to mice rather than anything taking place in humans, in which case the finding is only preliminary, and the same result might not occur if the experiment were repeated in humans. Or it might turn out that the sample of blue-eyed and non-blue-eyed people was absurdly small, or riddled with confounding factors, or that the study blatantly confused correlation with causation. For example, the blue-eyed people might mostly turn out to be Americans following their typical diet (rate of diabetes in the US population: 10.8%) and the non-blue-eyed people mostly Japanese people following their typical diet (rate of diabetes in the Japanese population: 5.7%).
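To see how the mixing alone manufactures a spurious link, here’s a quick back-of-the-envelope calculation in Python (the 80/20 group compositions are invented for illustration; only the two national diabetes rates come from the example above):

```python
# Hypothetical sketch: suppose eye colour merely tracks nationality in the sample.
us_rate, japan_rate = 0.108, 0.057  # national diabetes rates from the example

blue_eyed     = 0.8 * us_rate + 0.2 * japan_rate  # group assumed 80% American
non_blue_eyed = 0.2 * us_rate + 0.8 * japan_rate  # group assumed 80% Japanese

print(f"blue-eyed diabetes rate:     {blue_eyed:.1%}")      # about 9.8%
print(f"non-blue-eyed diabetes rate: {non_blue_eyed:.1%}")  # about 6.7%
```

The blue-eyed group really would show a markedly higher rate of diabetes – an ‘effect’ produced entirely by nationality and diet, and one that would vanish the moment you compared like with like.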
But you can’t draw too many conclusions from one study even if it lacks these obvious flaws.
One reason is publication bias: studies that show positive results are more likely to be published, so there might be one study that seems to show something interesting, and ten more that don’t, but only the first one makes the headlines. A particular outcome needs to be repeated across different studies before scientists will base conclusions on it, for the simple reason that so many studies fail to replicate. Until they’ve been repeated, their results could have come about because of variables that weren’t considered, because of hidden flaws in the methodology, or even because of freak coincidence.
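A toy simulation makes the point (a hedged sketch, with all the numbers invented): if twenty research groups each study an effect that genuinely doesn’t exist, at the conventional 5% significance threshold you should expect roughly one of them to stumble on a ‘positive’ result by chance – and if negative results go unpublished, that one fluke is the study you’ll read about.

```python
# Hedged sketch: 20 independent studies of an effect that does not exist.
# Each 'study' asks whether a fair coin is biased, declaring a positive
# finding when the heads count falls outside a rough 95% range.
import random

def study(n_flips=100):
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    return heads < 41 or heads > 59  # outside ~95% range: a spurious 'positive'

random.seed(1)  # fixed seed so the illustration is reproducible
positives = sum(study() for _ in range(20))
print(f"{positives} of 20 null studies came out 'positive' by luck alone")
```

Run it a few times without the fixed seed and the count will bounce around – which is exactly why replication, rather than a single striking headline, is what earns a result scientists’ trust.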

3. Nature and nurture are both more complicated than that

So many different factors affect who you are.

A recurring debate in popular science is the question of nature versus nurture. Where do your personality traits, your intelligence, your morality come from – is it the way you were raised, or is it all in your genes? It’s a particularly tricky question when it gets applied to problems such as the root of criminality. In such cases, politics comes into play, as a ‘nature’ explanation suggests that the chances of a criminal reforming are much lower than a ‘nurture’ explanation does.
But the question of whether something – from a personality trait to a disease – is caused by our environment or our genes is much more complex than this simple binary makes it seem. There are a few famous examples where a trait corresponds to a single gene, such as the genes behind cystic fibrosis and Huntington’s disease, but most of the time a combination of different factors is at work, usually both environmental and genetic. Cancer is a good example; you’re more likely to get certain cancers if there’s a family history, but environmental factors from age to diet all play their role as well.
That’s before you get into the world of epigenetics, where a gene’s expression is changed by environmental factors in a way that can be inherited despite there being no change to the DNA sequence. For example, stress can change the way that genes are expressed, and the offspring of people who have undergone these changes can inherit them without ever experiencing the same stress; nature and nurture combining to make them who they are.

4. We don’t know as much as you think we do

Cures for many diseases were discovered by accident.

A lot of people think that science is cool because it’s the closest thing we have to magic. And at times, modern science can seem pretty magical; from putting a man on the Moon to changing people’s lives through medical science, the developments of the past hundred years – and those likely to come in the next fifty – are incredible. It’s easy to imagine that scientists have pretty much got things worked out; that what’s left to do is mostly tidying up around the edges of our complete explanation of life, the universe and everything.
That’s not the case. In a way, the reality is even more awe-inspiring: we’ve managed to achieve these things despite the vast amount that we don’t know. For instance, in medicine, there are many examples of treatments that were discovered long before the disease they cured was fully understood. But this gap can also fuel upsetting conspiracy theories, such as the unfortunately widespread belief that cancer has already been cured and the cure somehow suppressed. The argument conspiracy theorists usually make is that medical science is so advanced and cancer research so well funded that a cure for cancer must surely have been found by now. Given that no cure has been made available to the public, they argue, it must be because it’s being suppressed so that pharmaceutical companies can make more money from cancer patients than they would from cancer survivors.
It’s a theory that casts the world in a deeply depressing light – pharmaceutical companies allowing millions to die for the sake of their income, and millions of medical professionals willingly collaborating with them – but it’s also based on a false premise. There’s a surprising amount that scientists simply haven’t worked out yet, such as how plate tectonics work, how the placebo effect works, why there is more matter than antimatter and why we sleep. So while it’s exciting to admire all the progress that scientists have made, remember that there’s still so much more to be discovered, and don’t expect magic.

5. Negative results are just as valuable as positive results

Disproving a hypothesis can be depressing, but you’re one step closer to the truth.

An experiment or trial that produces a negative result is by no means a failure. Of course, in some circumstances it will feel like one. If you’re trialling a new drug for a fatal disease and the trial demonstrates that the drug doesn’t do what you’d hoped it would, it’s not going to be an occasion for champagne. Similarly, if the experiment is the basis of your PhD and you’d been counting on proving something really exciting that you could get several publications and a post-doctoral place out of, you won’t be happy if your hypothesis is disproven. On a personal level, a ‘failed experiment’ can feel like a disaster. But scientifically, it’s just as important and valuable as a success.
That’s because of two of the points made above – because you can’t base too many conclusions on a single study, and because a theory is something that takes a lot of proving. Successful experiments work towards confirming a hypothesis. Failed experiments help to disprove a hypothesis. In both cases, you’ve learned something about the world.
If your experiment is an attempt to replicate someone else’s results, and it doesn’t work, then you’ve contributed something even more useful: you’ve shown that their hypothesis needs to be changed in light of your results. Perhaps their method was flawed, or perhaps you’ve simply come across a circumstance for which their experiment didn’t allow. Either way, you’ve contributed something, because science doesn’t take place in a vacuum. The discoveries you make will inform real-world decisions, whether it’s someone’s decision to eat more vegetables to reduce their chances of getting a nasty disease, or the choice of materials to clad a spacecraft, or the fertiliser used by farmers trying to grow crops in a drought. Knowing that the fertiliser won’t work, or that the material doesn’t hold under extremes of heat and pressure, or that the disease isn’t affected by vitamin C consumption is just as valuable as knowing the inverse. One of the great joys of working in science is knowing that even these personally and professionally depressing results are ultimately adding to the sum total of human knowledge, to the benefit of everyone.