Jamie Morton

Jamie Morton is science reporter at the NZ Herald.

Quake experts reach towards holy grail

On this day in 2011, a 6.3-magnitude quake delivered our second-deadliest disaster and Christchurch's darkest day. How close are we to being able to predict the next one? Science reporter Jamie Morton investigates.

Fire Service personnel look for survivors in the collapsed PGC (Pyne Gould Corporation) building on Cambridge Terrace. Photo / Geoff Sloan

Two years ago today, with a force equal to 15,000 tonnes of TNT, came the devastating aftershock that scientists hoped they would never see.

It struck suddenly and violently at 12.51pm: a magnitude-6.3 quake from a shallow hypocentre just 5km deep, about 10km southeast of the Christchurch city centre - an earthquake that would kill 185 people.

Even after the 7.1 quake had struck at Darfield six months earlier, there was no way seismologists could have predicted the time and location of the aftershock - one among thousands - that would change a city forever.

Two years on, pinpointing the next big one still remains beyond the capability of even the world's most advanced science.

Why?

"Here's a nice analogy," explained Professor Euan Smith, of Victoria University's School of Geography, Environment and Earth Sciences.

Imagine you're a weather forecaster, standing in a room without windows. You are given data about the change in windspeed, rainfall and sunshine at a number of sites in New Zealand - but not the absolute amounts. Now, go ahead and forecast the weather.

The point is, Professor Smith said, seismologists can measure changes in stress, and perhaps other useful indicators, but not the absolute levels.

The Darfield and Christchurch faults were unknown to them before the September 4, 2010, quake.

"Try forecasting the weather if there is a front coming for which you have no information."

Professor Smith doubted science could ever create a reliable, black-and-white prediction system, but it could build a forecasting method that dealt in probabilities and used statistical methods. The big stumbling block was that, unlike with weather prediction, our database of previous seismic events remained very small.

"We don't have enough of a historical record to see any predictable pattern - and we still don't know enough about the physics of the earthquake mechanism," said Victoria University's Dr Tim Stern.

"For example, weather occurs on a scale of days, weeks or maybe months. Earthquakes occur on a scale of years, hundreds or thousands of years."

In attempting to forecast quakes, scientists face two situations: predicting aftershocks following a main event, such as the September 4 quake, and the far more difficult feat of predicting earthquakes from a cold start. In New Zealand, the latter would be done based on known active faults and their return times of movement, estimated from geological data.

But the problem was that not all faults were visible - and geological processes created an ever-changing landscape. Christchurch, for example, is built on a rapidly growing flood plain fed by rocks eroding from the Southern Alps.

"And indeed, of the 19 magnitude-six earthquakes that have occurred in New Zealand in historical times, only about three to four have occurred on known active faults," Dr Stern said.

Yet some steps had been taken towards the holy grail of earthquake science. The past few years have seen the establishment of the Collaboratory for the Study of Earthquake Predictability (CSEP), an international collaboration started in California. This aimed to develop a virtual, distributed laboratory that could support scientific prediction experiments in multiple regional or global natural laboratories. Its approach sought to answer two questions: how should scientific prediction experiments be conducted and evaluated; and what was the intrinsic predictability of the earthquake rupture process?

CSEP has now set up forecast testing centres in California, Japan, Europe, and in New Zealand under GNS Science. These allow researchers to submit their models and have them rigorously and transparently tested.

A model is essentially a computer code that estimates the future rate of earthquakes - over the next day, three months or year - in tens of thousands of bins of space and magnitude within a defined testing region. Models are also updated with new earthquake data at the end of each time interval.
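In code, such a forecast boils down to a table of expected earthquake counts, one per space-magnitude bin. The sketch below is purely illustrative - the grid spacing and the uniform placeholder rate are invented, not real GNS model output:

```python
import numpy as np

# Hypothetical bin edges spanning roughly New Zealand's extent.
lon_edges = np.arange(166.0, 179.0, 0.5)   # longitude bins
lat_edges = np.arange(-47.5, -34.0, 0.5)   # latitude bins
mag_edges = np.linspace(4.0, 8.0, 41)      # magnitude bins, 0.1 wide

n_bins = (len(lon_edges) - 1) * (len(lat_edges) - 1) * (len(mag_edges) - 1)

# Expected number of quakes per bin over the forecast interval.
# A real model computes these from catalogue data; a uniform
# placeholder rate is used here just to show the forecast's shape.
forecast = np.full(
    (len(lon_edges) - 1, len(lat_edges) - 1, len(mag_edges) - 1),
    0.001,
)

print(f"{n_bins} space-magnitude bins")
print(f"total expected quakes: {forecast.sum():.1f}")
```

Even at this coarse half-degree spacing, the grid already holds tens of thousands of bins, which is why the testing centres run the codes rather than evaluating them by hand.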

Dr David Rhoades, GNS principal scientist, said there were about 20 such models being tested in the New Zealand centre, and many more in California and Japan.

The testing centres run these codes and apply standard statistical tests that measure how well the model is performing, compare the models with one another, and measure the probability gain over reference models. The testing experiments are intended to run for many years.
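One common way to score such forecasts - assumed here for illustration, with all numbers invented - is to treat each bin's count as Poisson-distributed, sum log-likelihoods over bins, and express the result as a probability gain per earthquake over a reference model:

```python
import math

def poisson_log_likelihood(expected, observed):
    """Log-likelihood of observed bin counts under a Poisson forecast."""
    ll = 0.0
    for mu, n in zip(expected, observed):
        ll += n * math.log(mu) - mu - math.lgamma(n + 1)
    return ll

# Toy example: two models forecasting the same four bins.
observed  = [0, 2, 1, 0]
model_a   = [0.1, 1.5, 0.8, 0.2]        # hypothetical candidate model
reference = [0.65, 0.65, 0.65, 0.65]    # uniform reference model

ll_a   = poisson_log_likelihood(model_a, observed)
ll_ref = poisson_log_likelihood(reference, observed)
n_quakes = sum(observed)

# Probability gain per earthquake over the reference model.
gain = math.exp((ll_a - ll_ref) / n_quakes)
print(f"probability gain per earthquake: {gain:.2f}")
```

A gain above 1 means the candidate model assigned more probability to where the quakes actually happened than the reference did.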

"It will be possible to refine and improve the models as the experiments proceed, and there is also the possibility of forming ensemble models which are more informative than any of their components," Dr Rhoades said.

As we better understood the processes going on beneath our feet, he said, so our ability to predict quakes would improve. The earthquake catalogue was the best asset the world had for forecasting future events. Catalogues began over a century ago, and their quality had gradually improved over time.

"As the seismograph network becomes more dense, the magnitude threshold of completeness gets lower, and we can learn more about the relations between small and large earthquakes," Dr Rhoades said. "Most large earthquakes are preceded by small earthquakes in the medium-to-long term - years or decades - and it is very well known that large earthquakes are followed by smaller ones - so-called aftershocks - according to a power law, the Omori-Utsu law."
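The Omori-Utsu law Dr Rhoades mentions says the aftershock rate decays as a power law of time since the mainshock: rate(t) = K / (c + t)^p. The parameters below are invented for illustration - in practice K, c and p are fitted to each aftershock sequence:

```python
# Omori-Utsu aftershock decay, with t in days since the mainshock.
# K, c and p here are placeholder values, not a fitted sequence.
def omori_utsu_rate(t_days, K=200.0, c=0.5, p=1.1):
    """Expected aftershocks per day, t_days after the mainshock."""
    return K / (c + t_days) ** p

for t in (1, 10, 100, 365):
    print(f"day {t:>3}: {omori_utsu_rate(t):6.1f} aftershocks/day")
```

The power-law fall-off is slow: unlike exponential decay, a sequence can still produce damaging aftershocks months or years after the mainshock - as Christchurch found on February 22.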

These features were already being exploited by existing forecasting models, he said.

In a five-year forecasting experiment recently completed in California, one model used the locations of the smallest earthquakes in the catalogue to estimate the spatial distribution of quakes over the test period. It performed much better than all the other models, which were based on a variety of data and physical modelling ideas.
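The idea behind that winning approach - often called smoothed seismicity, and sketched here under that assumption with invented epicentres and an invented smoothing length - is to blur the locations of many small past quakes into a density map of where future quakes are most likely:

```python
import math

# Hypothetical small-quake epicentres (longitude, latitude).
small_quakes = [(172.6, -43.5), (172.7, -43.5), (172.5, -43.6), (170.0, -45.0)]

def smoothed_rate(lon, lat, sigma_deg=0.25):
    """Relative earthquake density at (lon, lat): a sum of Gaussian
    kernels centred on each past small earthquake."""
    total = 0.0
    for qlon, qlat in small_quakes:
        d2 = (lon - qlon) ** 2 + (lat - qlat) ** 2
        total += math.exp(-d2 / (2 * sigma_deg ** 2))
    return total

# Density near the cluster far exceeds density in a quiet area.
near = smoothed_rate(172.6, -43.5)
far = smoothed_rate(176.0, -38.0)
print(near > far)  # True
```

The bet such models make is simply that future earthquakes tend to happen where small earthquakes already cluster - no physics required, just the catalogue.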

"So I think there is still much room to improve our forecasts, just by learning from existing earthquake catalogues."

Other newer data streams, like GPS observations of crustal deformation, and existing data like the locations of active faults and their slip-rates, could be used to improve earthquake forecasts, he said.

"Some faults, like the Alpine Fault in the South Island, are known to have ruptured in large earthquakes many times in the past, so we can be pretty confident that they will continue to do so in the future.

"Unfortunately, the time interval between successive events is not regular enough to allow us to accurately predict when the next big earthquake will be."

The fault data was already a major component of standard long-term earthquake hazard models and, ironically, these standard models were the most difficult to test because of the long timeframe involved.

Physics-based models, which calculated changes to the stress field after an earthquake and stress interactions between faults, also held some promise for short-to-medium term forecasting, Dr Rhoades said.

There was a large research community working in the area of electromagnetic phenomena, which had produced anecdotal evidence of electromagnetic precursors before earthquakes. In this field, one of the most persistent suggestions was of ionospheric disturbances as short-term earthquake precursors, usually about three days before a quake.

But Dr Rhoades said this was a controversial area, partly because there was no agreed physical mechanism for such precursors.

In New Zealand, fresh studies into what is termed tremor held particularly exciting possibilities in understanding what was happening within our most potentially destructive faultline.

This year, a report will be released on years of research into this recently discovered seismic process: a collection of tiny earthquakes, usually not detected on seismic networks, thought to represent long-lasting slip on faultlines - a creaking and groaning kilometres below the earth's surface. The first case of tremor in New Zealand was found on the Alpine Fault only a few months ago.

Tremor may eventually be found either to indicate a build-up of stress before a big earthquake - an invaluable asset to quake forecasting - or merely to act as a safety valve, releasing stress and putting off the likelihood of a major shake.

"Studies on tremor are helping us get a better understanding of the fundamentals of the earthquake source," said Dr Stern, who is leading New Zealand studies. "An increase in tremor may one day be shown to portend an approaching large earthquake - that is still to be seen."

By the numbers:

7.1 The magnitude of the September 4, 2010, earthquake near Darfield, Canterbury, that preceded thousands of aftershocks.

185 The number of people killed in the most violent aftershock, a shallow 6.3 jolt near Christchurch on February 22, 2011.

19 The number of quakes higher in magnitude than six in New Zealand over historical times, of which just a few had occurred on known active faults.

20 The number of statistical forecast models being tested in New Zealand.

308 The number of people believed killed when an earthquake struck near L'Aquila, Italy. Six scientists and a government official were later convicted of manslaughter, accused of downplaying the threat days before.

1000 The years of rupture history on the South Island's Alpine Fault hoped to be gained by scientists in a new deep drilling project that could shed light on the timing, size and style of big quakes on the fault.

- NZ Herald

© Copyright 2014, APN New Zealand Limited
