This is #2 in a series. (Click here for #1 on modeling hurricanes)
Earthquake modeling seems to generate a wide range of emotions, vitriol, successes, and failures - so wide as to need a logarithmic scale, a la Richter.
The Parkfield Earthquake Experiment, now running for over 22 years, has been the development test bed and experimental "lab" for US Geological Survey/State of California efforts to develop physical models of earthquakes that will lead to viable predictions. The USGS site contains a wealth of information on the experiment, and good background on the history of earthquake prediction, which is still very much hit-or-miss. An interesting excerpt from the site neatly illuminates the need for prediction based on an understanding of physical causes as opposed to one based on statistical correlation only:
Early scientific efforts toward earthquake prediction in the U.S. were directed primarily toward the measurement of physical parameters in areas where earthquakes occur, including seismicity, crustal structure, heat flow, geomagnetism, electrical potential and conductivity, and gas chemistry. Central to these efforts was the concept that a precursor might be observed in one or more of these measurements. However, the connection between a commonly accepted precursor and the earthquake was often speculative and uncertain. A coherent physical model was lacking. A model on which a scientific prediction could be based began to be developed in the late 1970s and early 1980s, and is described in three seminal papers. In 1978, Allan Lindh of the USGS proposed a multi-year, integrated observation program at Parkfield, combining seismic, geodetic, creep, strain, tilt, and magnetic measurements with theoretical models of fault mechanics.
This site will be an essential resource for the next version of the chaos course. Two other sites will serve as complementary resources.
The first is a debate in Nature titled "Is the reliable prediction of individual earthquakes a realistic scientific goal?" Moderated by Ian Main, the site is a fascinating give-and-take on the topic among a number of scientists. For the debate, earthquake prediction is considered under one of four scenarios (a toy sketch contrasting the first two follows the list):
- Time-independent hazard, in which past occurrences of earthquakes are associated with specific land areas
- Time-dependent hazard, in which prediction is based on simple correlative models, including clustering in space and time
- Earthquake forecasting, in which forecasts are made based on a precursory signal - say, some unexpected plate movement or a low-level foreshock
- Deterministic prediction, in which earthquakes are assumed to be inherently predictable.
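To make the distinction between the first two scenarios concrete, here is a minimal sketch of my own (nothing from the debate itself). The catalog counts, the 30-year horizon, and the lognormal recurrence model are all assumptions, chosen only to show how a time-dependent estimate folds in the time since the last event, where a Poisson (time-independent) estimate does not.

```python
import math

# Hypothetical numbers for illustration only: 10 magnitude-6+ events
# observed in a region over 200 years of catalog.
events_observed = 10
years_observed = 200
rate = events_observed / years_observed          # long-run events per year

# Time-independent hazard: a Poisson model, where the probability of at
# least one event in the next `horizon` years depends only on the rate.
horizon = 30                                     # years
p_time_independent = 1.0 - math.exp(-rate * horizon)

# Time-dependent hazard (one simple flavor): a lognormal renewal model,
# where the probability also depends on time elapsed since the last event.
mean_recurrence = years_observed / events_observed   # ~20 years
sigma = 0.5                                          # assumed variability
mu = math.log(mean_recurrence) - 0.5 * sigma ** 2

def recurrence_cdf(t: float) -> float:
    """P(recurrence interval <= t) under the assumed lognormal model."""
    return 0.5 * (1.0 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2.0))))

elapsed = 15  # years since the last event (also made up)
# Probability of an event in the next `horizon` years, conditioned on
# having already gone `elapsed` years without one.
p_time_dependent = (
    recurrence_cdf(elapsed + horizon) - recurrence_cdf(elapsed)
) / (1.0 - recurrence_cdf(elapsed))

print(f"Time-independent (Poisson) 30-year probability: {p_time_independent:.2f}")
print(f"Time-dependent (renewal)   30-year probability: {p_time_dependent:.2f}")
```

The renewal model here simply stands in for the "simple correlative models" of the second scenario; in practice, clustering models play the same role, but the point is only that the second estimate changes as the clock runs while the first does not.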
Main provides a provocative call to arms at the beginning of the debate:
Time-independent hazard has now been standard practice for three decades, although new information from geological and satellite data is increasingly being used as a constraint. In contrast, few seismologists would argue that deterministic prediction as defined above is a reasonable goal in the medium term, if not for ever. In the USA, the emphasis has long since shifted to a better fundamental understanding of the earthquake process, and to an improved calculation of the seismic hazard, apart from an unsuccessful attempt to monitor precursors to an earthquake near Parkfield, California, which failed to materialize on time. In Japan, particularly in the aftermath of the Kobe earthquake in 1995, there is a growing realization that successful earthquake prediction might not be realistic. In China, thirty false alarms have brought power lines and business operations to a standstill in the past three years, leading to recent government plans to clamp down on unofficial 'predictions'. So, if we cannot predict individual earthquakes reliably and accurately with current knowledge, how far should we go in investigating the degree of predictability that might exist?
One person who believes that quakes can be predicted but that currently accepted models are hopelessly wrong is geologist Jim Berkland, who claims that his Seismic Window Theory - based on tidal forces associated with Sun/Moon alignment and "abnormal animal behavior" - is a much better predictor. Berkland’s web site - Syzygy Job - is fascinating reading because it not only presents up-to-date earthquake news, but also contains Berkland’s description of his ostracization by mainstream earthquake scientists. In his words:
Despite my successes in earthquake prediction (using tides and abnormal animal behavior), I found it almost impossible to publish on the subject in scientific journals... Mainstream scientists generally try to debunk various aspects of my earthquake predictions or to ridicule me personally, with epithets such as crackpot or clown. My response is to question their own records in earthquake prediction, and to point out that the main action of a stream is not near the center, but closer to the edge. Near the fringes, with eddies and cross-currents, erosion and deposition are more effective, sometimes leading to changes in the course of the stream... The experts of High Science state that earthquake prediction is currently a scientific impossibility. I maintain that the topic is too important to leave to the experts and I continue to do the impossible with a better than 75% battering average, which is more than 300% greater than chance.
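Taking that last claim at face value (my own arithmetic, not Berkland's), "more than 300% greater than chance" at least pins down what the implied baseline must be:

$$0.75 \ge (1 + 3.00)\,p_{\text{chance}} \quad\Longrightarrow\quad p_{\text{chance}} \le \frac{0.75}{4} \approx 0.19$$

That is, the comparison assumes a "chance" hit rate of roughly one in five - a figure that depends entirely on how wide a prediction window counts as a hit.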
"Battering average"? A chance typo, or revealing slip of the tongue, in a field where slippage of tectonic plates is a sign that it is too late to predict - the quake is already here.