Thursday 29 May 2014

New paper: Estimates of error in micro-earthquake magnitude estimation


With excellent timing, on the same day as the new BGS report into the shale oil potential of the Weald Basin, a new paper, written by two colleagues at Bristol University and myself, has been published in Geophysical Prospecting. In it, we examine the uncertainties in estimates of event magnitude made for small earthquakes.

This paper is significant for shale gas extraction in the wake of DECC's traffic light system (TLS) proposal for fracking operations. Under the TLS, operational decisions during the fracking process must be taken on events as small as magnitude 0.0 (the amber level), with complete cessation of activities for events larger than magnitude 0.5.
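For anyone who thinks more clearly in code, the decision logic of the TLS boils down to something like the toy sketch below. Only the two magnitude cut-offs come from the DECC proposal; the wording of the actions is my own paraphrase.

    # Toy encoding of the TLS thresholds described above. The cut-offs
    # (0.0 amber, 0.5 red) are from the DECC proposal; the action text is
    # paraphrased, not official language.
    def tls_level(magnitude):
        if magnitude > 0.5:
            return "red: cease all operations"
        if magnitude >= 0.0:
            return "amber: operational decisions required before proceeding"
        return "green: carry on as planned"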

As most people are aware, a magnitude 0 event is very small, at the limit of what can be detected using conventional seismographs (see our efforts at Balcombe, for example). Expensive downhole microseismic monitoring systems are required to robustly detect smaller magnitudes.

The TLS presupposes that earthquake magnitudes at this low level can be accurately determined. The purpose of the TLS was to provide a simple-to-understand system to reassure the public. Uncertainties in event magnitude estimation could undermine this, generating more controversy, not less.

We show in our paper that event magnitude estimates at these low levels can be very uncertain: you can get different answers depending on what methods you use and assumptions you make. It doesn't take too much imagination to think of a scenario where one group reporting on a fracking operation concludes that an induced earthquake was just below the TLS threshold, but another group using a different method finds that the earthquake did exceed it. The current debate over shale gas extraction is febrile enough as it is: can you imagine the recrimination and confusion that such an eventuality would generate?



So, what did we do in this paper? A warning here for anyone not particularly interested in geophysics: I'd skip the next few sections and jump to the end if I were you, as things might get a little technical (as ever, though, I'll do my best to present things as simply as possible).

The first thing we looked at was the two main methods to compute the seismic moment released by an event (from which seismic magnitude is determined via the moment magnitude scale). You can either work in the time domain, computing the area underneath the seismic wiggle, or you can work in the frequency domain, fitting the displacement power spectrum with a source model. The two methods are illustrated below (area under time domain wiggle on left, fitting power spectra on right):

In the following figure we compare event magnitudes determined using the two methods. If both methods gave exactly the same results, every dot would land exactly on the dashed line. They don't: there is some scatter between the methods. While they don't give wildly different results, the difference can be as much as 0.3 of a magnitude unit. That might not seem like much, but it's enough for one group using one method to find an event below the TLS threshold while another group, using the other method, gets a value above it.
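For readers who prefer code to figures, the two routes to a moment magnitude boil down to something like the following sketch. This is my own simplification, not the code from the paper, and the density, velocity, source-receiver distance and radiation-pattern values are placeholders only.

    # Minimal sketch (not the paper's code) of the two moment-estimation routes,
    # assuming a single far-field displacement pulse `u` (metres) sampled every
    # `dt` seconds. Medium and geometry parameters are illustrative values.
    import numpy as np
    from scipy.optimize import curve_fit

    def omega0_time_domain(u, dt):
        """Long-period spectral level from the area under the displacement pulse."""
        return abs(np.trapz(u, dx=dt))

    def omega0_freq_domain(u, dt):
        """Long-period level from fitting a Brune-type source model to the spectrum."""
        spec = np.abs(np.fft.rfft(u)) * dt            # displacement amplitude spectrum
        f = np.fft.rfftfreq(len(u), dt)
        brune = lambda freq, o0, fc: o0 / (1.0 + (freq / fc) ** 2)
        (o0, fc), _ = curve_fit(brune, f[1:], spec[1:], p0=[spec[1], 10.0])
        return o0

    def moment_magnitude(omega0, rho=2700.0, c=3500.0, r=2000.0, rad=0.63):
        """Spectral level -> seismic moment (N m) -> Mw (Hanks & Kanamori scale)."""
        m0 = 4.0 * np.pi * rho * c ** 3 * r * omega0 / rad
        return (2.0 / 3.0) * (np.log10(m0) - 9.1)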

The next confounding issue relates to focal mechanisms. Put as simply as I can for non-geophysicists: the amount of radiated energy you experience as a result of an earthquake depends on where you are in relation to the fault plane. Therefore, the magnitude you estimate at each receiver position will vary. You can correct for this if you know the focal mechanism, and with excellent-quality data it is possible to work the focal mechanism out.

With poor quality data, as you might expect for a small event that is at the limit of detectability, focal mechanism determination may not be possible. Instead, average values for both P and S waves can be used. However, using these average values can introduce errors. The following plot shows the errors in magnitude estimation produced by using average radiation patterns rather than their true values. There are different correction values depending on whether you are using P-wave or S-wave arrivals, so the results are plotted separately.
You can begin to see substantial errors creeping in - more than 0.5 magnitude units - especially when the S-waves are used for the calculation (and S-waves typically have larger amplitudes than P-waves, so would usually be the preferred choice for magnitude estimation).
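To give a feel for where numbers like that come from, here is a back-of-the-envelope version of the calculation. The average coefficients are the commonly quoted values of roughly 0.52 for P and 0.63 for S; the "true" coefficient is an invented example for a receiver sitting near a nodal plane.

    # Rough size of the effect: the Mw error from using an average radiation
    # coefficient instead of the true value for a given source-receiver geometry.
    import numpy as np

    AVERAGE_RADIATION = {"P": 0.52, "S": 0.63}   # commonly quoted average values

    def magnitude_error(true_coeff, phase):
        """Mw error when the average coefficient replaces the true one."""
        # Moment scales as 1/coefficient, and Mw as (2/3) * log10(M0)
        return (2.0 / 3.0) * np.log10(true_coeff / AVERAGE_RADIATION[phase])

    # e.g. a receiver close to an S-wave nodal plane (made-up true coefficient of 0.1)
    print(magnitude_error(0.1, "S"))   # roughly -0.5 magnitude units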

Which brings me on to an even bigger issue. The typical method for computing magnitudes is to compute the moment based on both P and S arrivals at each receiver, and then take the average value. The question is, how much variation is there between sensors, and between phases? In the following plot we compare magnitudes computed using the P-wave arrivals, and then using the S-wave arrivals. If both phases produce the same answer, then the dots should again plot on the dashed line.

You can again see substantial differences creeping in, regardless of whether true or averaged radiation pattern corrections are used. In fact, the differences can be larger than 0.5 magnitude units. This poses a real difficulty for the TLS: one group using P-waves might claim that an event has a magnitude below 0.0, the lowermost limit of the TLS, meaning that the operator can press on full steam ahead, while a second group measures the S-waves and declares that the event has exceeded the maximum TLS threshold of 0.5, so the operator must cease all operations immediately.
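Again, for the code-minded, the averaging workflow and the kind of P-versus-S discrepancy I'm describing look something like this. The single-station moments are invented numbers, chosen purely to show how the two phases could end up straddling both TLS thresholds; they are not values from the paper.

    # Sketch of the usual averaging workflow: a moment per receiver per phase,
    # averaged across the network and converted to Mw.
    import numpy as np

    def network_magnitude(moments_nm):
        """Average single-station moment estimates (N m) and convert to Mw."""
        return (2.0 / 3.0) * (np.log10(np.mean(moments_nm)) - 9.1)

    p_moments = [0.8e9, 1.1e9, 0.9e9, 1.2e9]    # hypothetical P-wave estimates
    s_moments = [6.5e9, 8.0e9, 7.5e9, 10.0e9]   # hypothetical S-wave estimates

    print(network_magnitude(p_moments))   # about -0.07: below the amber level
    print(network_magnitude(s_moments))   # about +0.54: above the red level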

We also consider a number of other factors that can affect magnitude calculations in a similar manner, including the sampling rate of the seismometers doing the measuring, how you decide your window length if you are working in the time domain, and the signal-to-noise ratio of the waveforms you have recorded. We finish with some recommendations that can ensure your magnitude estimates are as accurate as possible, summarised in the following table.
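As one small example of the sort of check involved, a quick signal-to-noise test on each station is easy to add. This is a hypothetical helper of my own; the noise-window length and any acceptance threshold are choices for the analyst, not recommendations lifted from the paper.

    # Estimate signal-to-noise ratio from a pre-event noise window before
    # trusting a station's magnitude estimate.
    import numpy as np

    def snr(trace, noise_samples):
        """RMS signal-to-noise ratio, treating the first samples as pre-event noise."""
        noise = np.asarray(trace[:noise_samples])
        signal = np.asarray(trace[noise_samples:])
        return np.sqrt(np.mean(signal ** 2)) / np.sqrt(np.mean(noise ** 2))

    # e.g. only include stations where snr(trace, 200) exceeds, say, 3 in the
    # network-average magnitude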


(Non-geophysicists who are skipping through - you can start reading again here)
One hopes that the TLS will not be necessary for UK shale gas extraction. The vast majority of hydraulic stimulations in the USA have not produced earthquakes anywhere near the level of the TLS cut-off. However, if such an event does occur, we have concerns about how accurately earthquake magnitudes can be calculated for small-magnitude events. These uncertainties could mean that the TLS causes more issues, and more public confusion, than it solves.

My view is that the aim for operators should be to avoid creating earthquakes that can be felt by people at the surface. This threshold will vary depending on conditions near the surface, the depth of any induced event, and a host of other parameters. Typically, UK regulation is goal-setting rather than prescriptive: it should be up to the operators to spend the money to determine how best to achieve these goals. There is much an operator can do, from acquiring good baseline 3D seismic surveys and collecting good-quality microseismic data during stimulation to carefully monitoring pressure changes and injection rates during injection, all of which can provide useful indicators that they might be at risk of triggering a seismic event. In my personal view, I would rather operators focussed on doing these things than rely on a TLS scheme to avoid induced seismicity (to their credit, many of them are considering these things, but in my view the regulations should reflect this).


Before I finish, long-time readers will recall that I have occasionally commented on open-access trends in the publishing industry. I'll admit to still being on the fence in this regard. However, my opinions matter little, because as RCUK-funded researchers we are required to publish open access regardless, with the university picking up the tab via a block grant. So the good news is that you can read the paper without hitting any paywalls.

2 comments:

  1. Interesting stuff. I can't see an obvious way to avoid some controversy about near-limit events, wherever you set the limit, and whatever the estimation method is, if that method has significant errors.

    I agree that goal-setting is more important than proscribing events above a certain level, but we must take account of public opinion and fears, even (or maybe especially) when they're irrational. It would be lovely to deal with a rational and informed public, but we must deal with the world as it is, not as it should be. If what the public want is to turn on the gas taps and flick the light switches, and totally take for granted where that ability came from, we must find a way to provide it, at least until (or unless) they can be persuaded/educated to think differently.

  2. Some numpty in Lancashire is afraid of 8.4 or 9.4 quakes from fracking
