Earthquake Hazard Mitigation Science and Policy



Challenge: How to Make Sensible Policy Given Uncertainty

Together with colleagues and students, including my late father Jerome Stein of Brown University, I am investigating issues involved in mitigating earthquake hazards. The challenge is to develop policies that make sense for society, given that we have limited resources, limited knowledge of earthquake processes, and thus limited ability to estimate possible future damage.

The issue can be posed as finding the level of hazard mitigation that minimizes the total cost to society. That cost is the sum of the cost of mitigation, such as earthquake-resistant construction or tsunami defences, and the expected loss from future earthquakes given that level of mitigation. The expected loss is the sum of the losses in the various expected events, each weighted by its assumed probability. The optimal level of mitigation is the minimum of the total cost curve: less mitigation lowers construction costs but raises the expected loss and thus the total cost, while more mitigation lowers the expected loss but raises the total cost.
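
In symbols, a minimal sketch of this tradeoff (the notation is illustrative, not taken from the papers cited below): let n be the level of mitigation, C(n) its cost, and p_i and L_i(n) the probability of and loss in event i.

```latex
% Total cost to society at mitigation level n (illustrative notation)
T(n) = C(n) + \sum_i p_i \, L_i(n),
\qquad
T'(n^*) = 0
\;\Longleftrightarrow\;
C'(n^*) = -\sum_i p_i \, L_i'(n^*)
```

At the optimum n*, the marginal cost of additional mitigation just balances the marginal reduction in expected loss; moving in either direction raises the total cost.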

Similar situations arise for other natural hazards, including river flooding and hurricanes, and in dealing with the effects of global warming. The optimal level of mitigation minimizes the total of losses and mitigation costs, and depends on the assumed hazard model. Our ability to find this optimal level, which balances resources used for hazard mitigation against other societal needs (schools, hospitals, etc.), thus depends on our ability to estimate the probabilities of future events and their effects, and the uncertainties in those estimates. One of our key lines of research is exploring how to make policy given these uncertainties.

For an overview of the work with Jerome Stein, click here for an article excerpted from the Brown Daily Herald, or here. This work is also the subject of a new textbook "Playing Against Nature" (Wiley-Blackwell).

About the book and ordering information







Why Do Earthquake Hazard Maps Often Fail?

[Image: Japanese national seismic hazard map, from Geller (2011)]

The uncertainties in estimating hazards are illustrated by the fact that it is becoming increasingly clear that standard earthquake hazard mapping methods often fail to predict what happens in large destructive earthquakes. We are exploring why this is the case. In general, the problem is that large earthquakes are much more variable in space, time, and magnitude than the limited historical record suggests.
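
A simple simulation makes the point. The sketch below is my own illustration, with assumed parameter values rather than ones from the papers: because magnitudes follow a Gutenberg-Richter distribution, the largest earthquake in a short historical record usually understates the largest that will eventually occur.

```python
# A sketch, not code from the papers: magnitudes follow a Gutenberg-Richter
# law, so the largest event in a short record understates what can happen.
import math
import random

def gr_magnitude(m_min=5.0, b=1.0):
    """Draw one magnitude from a Gutenberg-Richter (exponential) distribution."""
    u = random.random()
    return m_min - math.log10(1.0 - u) / b

def largest_in_record(years, rate_per_year=2.0):
    """Largest magnitude among the events occurring in a record of given length."""
    n_events = int(rate_per_year * years)
    return max(gr_magnitude() for _ in range(n_events))

random.seed(0)
trials = 200
short_avg = sum(largest_in_record(100) for _ in range(trials)) / trials
long_avg = sum(largest_in_record(2000) for _ in range(trials)) / trials
print(f"average largest event: M{short_avg:.1f} (100-yr record), "
      f"M{long_avg:.1f} (2000-yr record)")
```

Because the number of sampled events grows with record length, the 2000-year record typically contains a largest event about 1.3 magnitude units bigger than the 100-year record (log10 of the 20-fold difference in event count, for b = 1); a map calibrated only on the short record would correspondingly underestimate the maximum magnitude.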

A striking example is the March 2011 M 9.1 earthquake off Tohoku, Japan, which occurred in an area shown by the Japanese national earthquake hazard map to have relatively low hazard.

The figure shown, from Geller ("Shake-up time for Japanese seismology": Nature, 472, 407-409, 28 April 2011) (download pdf of article here) illustrates his point that

"The regions assessed as most dangerous are the zones of three hypothetical 'scenario earthquakes' (Tokai, Tonankai and Nankai; see map). However, since 1979, earthquakes that caused 10 or more fatalities in Japan actually occurred in places assigned a relatively low probability. This discrepancy -- the latest in a string of negative results for the characteristic earthquake model and its cousin, the seismic-gap model -- strongly suggests that the hazard map and the methods used to produce it are flawed and should be discarded."

The hazard map reflected the assumption that magnitude 9 earthquakes would not occur off the Tohoku coast, which led to tsunami defences being built only for a much smaller earthquake than actually occurred, as described by the New York Times (download pdf of article here).

An M 9 earthquake involves more slip on a larger fault area, producing a larger tsunami, because the maximum tsunami run-up height is typically about twice the fault slip (Okal and Synolakis, 2004). Thus the March 2011 earthquake generated a huge tsunami that overtopped even 10-meter sea walls, causing enormous damage, including crippling nuclear power plants. (For a video showing the tsunami overtopping the seawall, click here.)
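
A rough back-of-envelope sketch illustrates the scale involved; the rigidity and rupture dimensions below are assumed typical values of my choosing, not measurements from the event:

```python
# Estimate average fault slip from the seismic moment (M0 = rigidity * area * slip),
# then apply the run-up rule of thumb quoted above (run-up ~ 2 * slip).
RIGIDITY = 4.0e10  # Pa; an assumed typical value for subduction-zone rock

def average_slip(mw, length_km, width_km):
    """Average slip (m) implied by moment magnitude mw on the given fault area."""
    m0 = 10 ** (1.5 * mw + 9.1)          # seismic moment in N*m (standard definition)
    area_m2 = length_km * width_km * 1.0e6
    return m0 / (RIGIDITY * area_m2)

# Assumed rupture dimensions for illustrative M 8 and M 9 subduction earthquakes
for mw, length_km, width_km in [(8.0, 150, 75), (9.0, 500, 200)]:
    slip = average_slip(mw, length_km, width_km)
    print(f"M{mw:.0f}: average slip ~{slip:.1f} m, run-up ~{2 * slip:.0f} m")
```

On these assumed numbers, an M 8 event yields run-up of order 6 m, which a 10-meter wall withstands, while an M 9 event yields run-up of order 20 m, which overtops it.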

[Image]

For our paper (Stein and Okal, 2011) on why the hazard mapping failed to anticipate a magnitude 9 earthquake, click here. Among the problems was the perception that such large earthquakes would not occur at subduction zones like this one, where the subducting plate is geologically old, as discussed here.

Such problems occur around the world in different tectonic environments, as recently summarized by R. Kerr (Seismic crystal ball proving mostly cloudy around the world, Science, 332, 912-913, 2011).

A striking example, noted by Mian Liu (University of Missouri), is that the 2008 Wenchuan (Sichuan province) M 7.9 earthquake occurred on a fault predicted by the hazard map to have low seismic hazard.

Another example is the convergent boundary between Africa and Eurasia in North Africa (Swafford and Stein, 2007). The 1999 Global Seismic Hazard Map, showing the peak ground acceleration expected at 10% probability in 50 years, predicts a prominent hazard "bull's-eye" at the site of the 1980 M 7.3 El Asnam earthquake. The largest subsequent earthquakes to date, the May 2003 M 6.8 Algeria and February 2004 M 6.4 Morocco events, did not occur in the predicted high-hazard regions.
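
A note on the map's units: under the usual Poisson assumption, shaking with a 10% chance of exceedance in 50 years corresponds to a return period of roughly 475 years:

```latex
p = 1 - e^{-t/T}
\quad\Longrightarrow\quad
T = \frac{-t}{\ln(1 - p)} = \frac{-50}{\ln(0.9)} \approx 475 \text{ years}
```

This is why such maps are often loosely described as 500-year maps.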

The January 2010 M 7.0 Haiti earthquake similarly occurred on a fault mapped in 2001 as having low hazard, producing ground motion far greater than the map predicted would occur in 500 years.

As a result, new hazard maps are sometimes made after a big earthquake that the earlier maps missed. The question is whether, and by how much, the new map predicts better than the old one, which may take a long time, sometimes hundreds of years, to assess.

A major problem is that although such hazard maps are widely used in many countries, their results have never been objectively tested. Such testing would show how well the maps work, give a much better assessment of their true uncertainties, and indicate whether changes in methodology over time have improved performance. Most fields of science have had the embarrassment of learning from objective testing that some commonly used methods do not actually work. For example, although more than 650,000 arthroscopic knee surgeries, at a unit cost of roughly $5,000, were being performed each year, a controlled experiment published in 2002 showed that "the outcomes were no better than a placebo procedure" (download pdf of article here).
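
One natural test is sketched below. This is my own minimal illustration (the preprint by Stein, Spencer, and Brooks listed under the publications develops such metrics properly): a map giving shaking levels with probability p of being exceeded in t years should, after t years, have been exceeded at roughly a fraction p of its sites.

```python
# A minimal sketch of one such test (my illustration; the Stein, Spencer, and
# Brooks preprint listed below develops performance metrics properly).
# A map giving shaking with probability p of exceedance in t years should,
# after t years, have been exceeded at roughly a fraction p of its sites.

def fractional_exceedance_score(mapped, observed, p=0.10):
    """Return |f - p|, where f is the fraction of sites whose observed maximum
    shaking exceeded the mapped value; 0 is a perfect score."""
    exceeded = sum(1 for m, o in zip(mapped, observed) if o > m)
    return abs(exceeded / len(mapped) - p)

# Hypothetical peak ground accelerations (in g) at ten sites over 50 years
mapped_pga   = [0.2, 0.3, 0.1, 0.4, 0.2, 0.3, 0.5, 0.1, 0.2, 0.3]
observed_pga = [0.1, 0.2, 0.3, 0.1, 0.1, 0.2, 0.2, 0.05, 0.1, 0.2]
print(fractional_exceedance_score(mapped_pga, observed_pga))  # 0.0: 1 of 10 exceeded
```

A score near zero means the map's stated probabilities matched experience; a large score means the map was systematically too low (many exceedances) or too high (almost none), each of which carries its own costs.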

These failures illustrate that hazard maps often have very large uncertainties, due to our limited knowledge and data, that are often not recognized by their users. Hence we are exploring two related questions, discussed on subsequent pages:

How can these uncertainties be assessed and communicated?

What causes these uncertainties?

For overviews of this work to date, see

2013 overview lecture "Why earthquake hazard maps often fail and what to do about it." Click here for powerpoint

A 5-minute video summary presented at the 2012 UNAVCO science workshop: "Bad assumptions or bad luck: Tohoku's embarrassing lessons for earthquake hazard mapping". Click here to watch

Keynote lecture from the 2011 ESF conference on Extreme Geohazards: "Why natural hazard maps (forecasts, warnings, etc) often fail and what to do about it." (Stein, S., R. Geller, and M. Liu). Click here to watch lecture or click here for powerpoint

Lecture from the 2012 conference on mathematics in the geosciences: "Playing against nature: formulating cost-effective natural hazard policy given uncertainty" (Stein, S. and J. Stein). Click here for powerpoint

Listen to Voice of America radio story here or read the story here

Listen to BBC radio story here or (mp3) here

AGU Blogosphere story "Simple Math Gives Readers X-Ray Vision"

Innovation Blog story "Deep Uncertainty"

Recent publications for this project:

Review paper "Why earthquake hazard maps often fail and what to do about it" by S. Stein, R.J. Geller, and M. Liu, Tectonophysics, 562-563, 1-25, 2012. Click here for pdf

Paper "Communicating uncertainties in natural hazard forecasts" by S. Stein and R.J. Geller, EOS, 93(38), 361-362, 2012. Click here for pdf

Paper "Bad assumptions or bad luck: why earthquake hazard maps need objective testing" by S. Stein, R. Geller, and M. Liu, Seis. Res. Lett., 82, 623-626, 2011). Click here for pdf

Paper "Rebuilding Tohoku: a joint geophysical and economic framework for hazard mitigation" by J. L. Stein and S. Stein, GSA Today, 22(9), 42-44, October 2012. Click here for pdf

Paper "Gray Swans: Comparison of natural and financial hazard assessment and mitigation" by J. L. Stein and S. Stein (Natural Hazards, DOI 10.1007/s11069-012-0388-x, 2012) Click here for pdf. This analysis comparing weaknesses in hazard assessment and mitigation policies shown by the Tohoku earthquake and 2008 U.S. financial disaster draws on concepts from the book Stochastic Optimal Control and the U.S. Financial Debt Crisis by J. L. Stein (Springer, 2012)

Paper "Formulating Natural Hazard Policies Under Uncertainty" by J. L. Stein and S. Stein, SIAM News, 45/10, 6-7, 2012. Click here for pdf.

Paper "Formulating Natural Hazard Policies Under Uncertainty" by J. L. Stein and S. Stein, SIAM/ASA J. on Uncertainty Quantification, 1, 42-56, 2013. Click here for pdf.

Paper "How good do natural hazard assessments need to be?" by S. Stein and J.L. Stein, GSA Today, 23, no. 4/5, doi: 10.1130/GSATG167GW.1 (April/May), 2013. Click here for pdf

Paper "Shallow versus deep uncertainties in natural hazard assessments" by S. Stein and J.L. Stein, EOS, 94, 14, 133-140 (2 April), 2013. Click here for pdf

Paper "How much can we clear the crystal ball?" by S. Stein and A. Friedrich, Astronomy and Geophysics, 55, 2.11-2.17, 2014. Click here for pdf

Preprint: "Metrics for assessing earthquake hazard map performance," by Stein, S., B. Spencer, and E. Brooks, submitted, 2014. Click here for pdf


For a general discussion of understanding earthquake hazard maps, from Earth Magazine in 2009, click here

For papers illustrating the uncertainties in hazard maps (pdf) click here and here

For a paper discussing biases that can cause future earthquake sizes to be misestimated (pdf) click here

For a discussion about the effects of the spatial sampling window (pdf) click here




Additional references for this project

Stein, S. and E. Okal, The size of the 2011 Tohoku earthquake needn't have been a surprise, EOS, 92, 227-228, 2011. For pdf click here

Liu, M., S. Stein, and H. Wang, 2000 years of migrating earthquakes in North China: How earthquakes in mid-continents differ from those at plate boundaries, Lithosphere, 3, doi: 10.1130/L129, 2011. For pdf click here

Stein, S., Why seismologists can't predict earthquakes, 2010 Academy of Europe Lecture. For pdf click here

Stein, S. and J. Hebden, Time-dependent seismic hazard maps for the New Madrid seismic zone and Charleston, South Carolina areas, Seis. Res. Lett., 80, 12-20, 2009. For pdf click here

Swafford, L. and S. Stein, Limitations of the short earthquake record for seismicity and seismic hazard studies, in Continental Intraplate Earthquakes, Special Paper 425, 49-58, S. Stein and S. Mazzotti, eds., GSA, Boulder, CO, 2007. For pdf click here

Stein, S., A. Friedrich, and A. Newman, Dependence of possible characteristic earthquakes on spatial sampling: illustration for the Wasatch seismic zone, Utah, Seis. Res. Lett., 76, 432-436, 2005. For pdf click here

Stein, S. and A. Newman, Characteristic and uncharacteristic earthquakes as possible artifacts: applications to the New Madrid and Wabash seismic zones, Seis. Res. Lett., 75, 173-187, 2004. For pdf click here

Newman, A., J. Schneider, S. Stein, and A. Mendez, Uncertainties in seismic hazard maps for the New Madrid Seismic Zone, Seis. Res. Lett., 72, 653-667, 2001. For pdf click here



Return to Home page.