Policy-making is an area laden with risks - some of them with tremendous consequences. The Prime Minister's Chief Science Adviser, Professor Sir Peter Gluckman, is writing a trilogy of discussion papers looking at how science and evidence can help. He talks to science reporter Jamie Morton.

Q. You've just released the first of three discussion papers you've written around decision-making and risk assessment. What has warranted this series and what would you like it to achieve?


A. Most decisions that individuals and governments make on our behalf involve some level of uncertainty, even if we do not recognise it.

Concepts such as precaution, risk and tradeoffs are at the core of complex decision-making.


Science in many cases helps reduce the level of uncertainty, but many other factors play into decision-making: perceptions of risk vary, as do concepts of precaution.

The series is aimed at increasing the understanding of how we make decisions as individuals and as a country.

My hope is that this may be of value to policy-makers, politicians and indeed all of us, so that we will be better placed to discuss the many issues we confront as a society.

We need to understand how science can assist decision-making but personal biases, values and beliefs also affect decision-making.

The issues are complex and nuanced and hence I have chosen to write a series of hopefully accessible papers.

Q. In New Zealand, how often would you expect that policy-makers are confronted with making significant decisions in the face of scientific uncertainty? Is this a universal challenge for all of the sciences?
A. The simple answer is: constantly, in virtually every area of policy-making. The reality, as explained in my paper, is that virtually all decisions we make involve some level of uncertainty and therefore risk.

But science can assist enormously in informing the policy process.

The question for all of us is how we perceive that level of risk; we all perceive it differently, and that affects our decision-making.

In resolving these differences, part of the process can be based on science, but part is beyond science; that is the realm of policy-making and politics.

The longer answer would be that the role of many scientists includes analysing and interpreting evidence that can help inform individual or societal decisions as objectively as possible.

But it is rarely, if ever, possible for science to provide all the answers, and it is certainly not the role of scientists to dictate to society what decision to make. Rather, our role is to develop the evidence, define what we know and do not know, and set out the implications of this knowledge and the options that emerge.

In that way we can better inform decision-making at an individual, societal and government level.

The recent Royal Society of New Zealand report on the mitigation of climate change is an excellent example of doing just that.

The choice of strategies that New Zealand should adopt extends well beyond the role of science into both the policy and political domains but clearly involves the assessment of tradeoffs, costs, benefits and risk associated with each.

All scientific inquiry is based on some level of uncertainty - and there are different types of uncertainty, as is explained in the first paper.

There is uncertainty related to things that we don't yet know - for example, the very long-term effects of a new drug - scientific study can gather more information, through experimentation, observation, modelling, etc. But we will never be able to know everything. The nature of scientific knowledge is always provisional and evolves as new evidence is acquired.

And the reality is that most decisions cannot wait and must be made on the basis of what we know now; the science we start today can only inform decisions in the future.

There is also statistical uncertainty - that is, uncertainty with a probability that can be measured.

Betting on the throw of dice is a risk based on an uncertain outcome, but one that is measurable and calculable.

But there are also many areas of scientific inquiry where probabilities can't be assigned with certainty.

For example, scientists are asked to make predictions about the frequency or magnitude of uncertain events like earthquakes - where the frequency can only be estimated from past events and cannot be given mathematical precision as in the case of throwing dice.
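The contrast can be made concrete with a short sketch (the earthquake figures here are hypothetical, purely for illustration): a fair die has an exactly known probability, while a rate estimated from a handful of historical events carries a large statistical error that no calculation can remove.

```python
from fractions import Fraction
import math

# Aleatory risk with a known model: a fair six-sided die.
# The probability of any face is exact -- no observation needed.
p_six = Fraction(1, 6)
print(p_six)  # 1/6, known with mathematical precision

# Uncertainty estimated from sparse history (hypothetical figures):
# say 4 large earthquakes observed over 150 years.
events, years = 4, 150
rate = events / years                  # point estimate, events per year
std_error = math.sqrt(events) / years  # rough Poisson standard error
print(f"{rate:.4f} +/- {std_error:.4f}")  # the estimate is roughly +/-50%
```

With only four observed events, the relative error on the estimated rate is about 50 per cent; the dice probability, by contrast, is exact before a single throw is made.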

The limited knowledge and the constraints on the predictive ability of science are not signs of poor science.

Rather, decision-making must acknowledge what is not known and take it into account in planning for the future.

We don't need to agree on all the findings, but we need to agree on the process of acquiring relevant knowledge, and to continue adapting our decisions according to new knowledge.

Q. Can you cite some high-profile examples or instances where this has been a particularly problematic issue for scientists? You point to some interesting examples with coastal hazards, the Fukushima incident and diagnostic error.
A. Science can and should help choose a course of action, but virtually all decisions will be based on incomplete data or observations.

A very obvious example is the case of earthquake prediction and its relationship to both planning and crisis responses.

Despite a clear history of earthquakes in the Christchurch region over the past 150 years, the potential implications were not clear enough - in part because of limited geological information, and perhaps because what was known was not understood well enough to restrict building in some areas that were later to liquefy.

Climate change is another example, where the complexity of the science and the many unknowns have created much frustration in the science community about getting real action from the global policy community.

In medicine, the unexpected and rare side-effects of medicines are a constant concern.

In the area of technological innovation, especially in the life sciences, there is a constant tension between innovation and issues of social licence and precautionary concerns, based on different perceptions of what is known and of risk.

The report discusses this tension in some detail, because it is important to understand that all innovation involves some steps into the unknown. The issue is what level of uncertainty exists and what the appetite for risk is - which, in turn, depends on our individual and collective assessment of risk and our perceptions of the upsides and downsides of any decision.

Q. You've included a handy terminology guide, covering likelihood (ranging from "rare" to "almost certain"), consequence ("insignificant" to "extreme") and confidence (A/very high to E/very low). How helpful do you think this might be for scientists in communicating science or evidence to policy-makers and the public?
A. Terminology has been recognised by risk communicators as an important issue.

Standardising the terminology used to convey information to the public and policy-makers may help to reduce potential confusion and allow for easier interpretation of scientific conclusions - both to inform policy and to let individuals decide what level of risk they find acceptable.

It can also help when action is required to mitigate the risk.

It allows us to translate statistical probabilities into terms that most people commonly understand.
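As a sketch of how such a standard scale works in practice, the function below maps a numeric probability to a calibrated likelihood phrase. The bands are modelled loosely on the IPCC's calibrated language; the exact terms and cut-offs in the discussion paper may differ, so treat these thresholds as illustrative.

```python
def likelihood_term(p: float) -> str:
    """Map a probability to a calibrated likelihood phrase.

    Bands are illustrative, loosely following IPCC-style calibrated
    language; a real scale would fix them by convention.
    """
    if not 0.0 <= p <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    if p > 0.99:
        return "virtually certain"
    if p > 0.90:
        return "very likely"
    if p > 0.66:
        return "likely"
    if p >= 0.33:
        return "about as likely as not"
    if p >= 0.10:
        return "unlikely"
    if p >= 0.01:
        return "very unlikely"
    return "exceptionally unlikely"

print(likelihood_term(0.95))  # very likely
print(likelihood_term(0.50))  # about as likely as not
```

The value of such a mapping is that two experts quoting "very likely" are, by convention, committing to the same numeric band, rather than leaving the phrase open to each listener's private interpretation.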

Q. What do you think the public often misunderstands about the concept of uncertainty in science? Take, for example, how "uncertainty" is used by the UN Intergovernmental Panel on Climate Change (IPCC) when projecting future climate change scenarios, and how that differs from "uncertainty" as we use the term generally.
A. It is not widely understood that there is always uncertainty in science: science can disprove something, but absolute proof is generally not possible (except in mathematics and some of the physical sciences).

Hence concepts of precaution can be misinterpreted or exaggerated.

Rather, science is an iterative process that eventually produces a consensus of scientific opinion.

This is not the same as personal opinion, but rather the professional and collective understanding of the relevant and engaged scientific community.

Scientific knowledge is constantly evolving, and the consensus can change over time. But the innate uncertainty in science is sometimes exploited in attempts to discredit the scientific consensus based on the evidence in order to support a particular viewpoint.

We have seen this particularly in the case of issues like climate change.

The IPCC uses sets of terms to describe the likelihood of events or degrees of change and the level of confidence that the experts have in the information being presented, based on the consistency and the strength of the evidence.

This essentially conveys how sure they can be about a particular conclusion - or conversely, how much uncertainty there is in the data.

Despite uncertainties that have not yet been reduced, and may never be because they arise from the inherent variability and sheer complexity of the climate system, scientists have come to a strong consensus on the data trends and their implications for all of us.

Q. What is the underlying purpose of this series: what would you like the public to take away from it?
A. My hope is that this series will assist the general public and policy-makers in better understanding what factors go into making complex and necessary decisions, and both the value and the limits of what science can and cannot tell us in different situations.

I hope it will assist those engaged in scientific communication - both scientists and journalists.

The issues are complex but important; short sound-bites often hide these essential nuances.

The series explains how different values and worldviews affect the way different people might interpret the same information - and often come to different conclusions because decision-making and risk assessment are not solely scientific processes: they involve our beliefs, worldviews, experiences and biases.