

Does science communication work? Show me the evidence.

October 6, 2017

 

It is a bit controversial to stand up in front of a room full of science communication and outreach professionals and say that there is little evidence that science communication and outreach is effective. Yet that's what I did last week at the International Astronautical Congress in Adelaide.

 

I wasn't trying to be controversial. I just wanted to shine a light on an issue: that we as science communication professionals are judging the effectiveness of our work on the basis of weak evidence and assumptions.

 

There’s a general lack of scientific rigour applied to the evaluation of science communication and outreach. This issue was highlighted back in 1998 by a group of experts at NASA’s Space Science Laboratory who reviewed the research on science communication and were shocked by the overall lack of rigour, “especially as contrasted with the very rigorous scientific environment in which this communication arises” [1].

 

Even now in 2017, not much has changed. We seem to be satisfied with counting bums on seats or asking people their opinions and feelings about a science event they attended. But what evidence does that provide about the learning, attitudinal and behavioural impacts of science communication and outreach? Why aren’t we more critical and analytical about our own field?

 

What are we doing wrong?

 

When we look at what science communication and outreach is trying to achieve, the goals range from changing public knowledge and understanding, attitudes and perceptions of science, to simply getting people to notice science and become more aware of its value to their lives and society.

 

Depending on the objectives, we need to get better at measuring the actual impacts of science communication and outreach: measuring learning independently of what people say they think they know or learnt; measuring changes in interest or attitudes; and measuring how those changes translate into action, whether that is young people taking more science courses or pursuing careers in science, or everyday people taking a more active interest in scientific developments and news.

 

Yet this isn’t happening. Most evaluation research is done in-house and informally by the people and organisations delivering the programs and activities, and usually as an afterthought (and yes, I’m guilty as charged!). I myself have on a few occasions hastily thrown together an evaluation form a few hours before an event.

 

There are also few tools available that effectively measure the impacts of science communication and outreach, and the tools that are available are weak and unreliable. For example, major funding bodies for science engagement in Australia recommend evaluating science events using methods such as bean polls, observations of attendees’ actions and facial expressions, and survey questions like the following:

 

  1. Did you learn something new today?

    • No, nothing | I learned a little | I learned a lot

  2. Were you inspired today?

    • Not at all | A little | A lot

  3. Did this event affect the way you think about science?

    • Not at all | A little | A lot

  4. Are you going to change your behaviour based on what you learned today?

    • Yes | Maybe | No

 

These methods provide very little evidence about the actual impacts of a science program or activity on the learning, attitudes and behaviours of those who attended. Why? First, these methods are based on self-reports — what people think or say about their own feelings, attitudes, behaviours and learning — and are susceptible to various kinds of bias. Surveys also suffer from low response rates, and those who complete them are usually self-selected. Furthermore, the majority of surveys are not tested for validity and reliability to make sure that they measure what they claim to measure. Overall, these tools deliver weak and unconvincing evidence of the actual impacts of science communication and outreach. Yet these are the kinds of tools typically used to report on science communication initiatives, many of which are funded by government.

 

Informing investment in science engagement

 

In Australia, the National Innovation and Science Agenda is investing $1.1 billion over four years to drive science and innovation ‘to deliver the next age of economic prosperity in Australia’. Of this, $48 million has been allocated to supporting school and community education, communication and engagement in science, technology, engineering and mathematics (STEM) to inspire Australians to engage with, appreciate and study STEM.

 

As a science communicator and educator myself, I whole-heartedly support and welcome this investment in science communication and outreach. But if this much hard-earned taxpayer money is going to fund science engagement, shouldn’t these decisions be based on a strong foundation of compelling evidence that directs and supports the investment?

 

Raising the bar

 

Science is based on empirical and measurable evidence and uses precise, sensitive and accurate processes and tools. The field of science communication needs to hold itself to the same standard of rigour as the field that it is communicating.

 

I think we need to change our ways. First, we need to treat evaluation research as an integral part of science communication and outreach rather than as an appendix. This might mean allocating larger chunks of science engagement resources and funding to the planning, design and implementation of evaluation research. There’s also a need for more sensitive and accurate evaluation tools that can more effectively measure changes in learning, attitudes and behaviours — tools that are tested and validated to ensure higher reliability and accuracy of data.  We also need standardised tools that can be used across programs and activities so that the measures can be compared. And finally, we need to become a more critical community of practitioners that will hold the field accountable to a higher standard. 
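
To make that last point concrete, here is a minimal sketch (my own illustration, not a tool recommended by any funding body) of what analysing a standardised, validated pre/post knowledge measure might look like in Python. The quiz, the scores and the choice of a paired t-test are all hypothetical; the point is simply that a before-and-after measure gives evidence of learning that does not depend on what attendees say they learnt.

    # Hypothetical example only: scores on an imaginary validated 10-item
    # knowledge quiz, taken before and after an outreach event
    # (one pre/post pair per attendee).
    from statistics import mean
    from scipy import stats

    pre_scores = [4, 5, 3, 6, 5, 4, 7, 5, 6, 4]    # before the event
    post_scores = [6, 7, 5, 8, 6, 5, 9, 6, 8, 6]   # after the event

    # Paired-samples t-test: did measured knowledge change, independently
    # of what attendees *say* they learned?
    result = stats.ttest_rel(post_scores, pre_scores)

    print(f"Mean gain: {mean(post_scores) - mean(pre_scores):.1f} quiz items")
    print(f"Paired t-test: t = {result.statistic:.2f}, p = {result.pvalue:.3f}")

The same kind of instrument, used across different programs and events, would also make results comparable in the way argued for above.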

 

But if nothing else, we need to be conscious of this evidence gap in the effectiveness of science communication and outreach. As Douglas Adams said, “assumptions are what we don’t know we are making.” But once we find out we are making them, we need to stop and change our ways. So let’s start being more rigorous with our evaluation of the impacts of science communication and outreach and start building a strong evidence base that even science would stand by.


[1] Borchelt, R. E. (2001). Communicating the Future: Report of the Research Roadmap Panel for Public Communication of Science and Technology in the Twenty-First Century. Science Communication, 23, 194–211.

 
