time series for geo


Contents

1 Introduction
  1.1 The digital revolution
  1.2 Digital Recording
  1.3 Processing
  1.4 Inversion
  1.5 About this book

Part one: PROCESSING
2 Mathematical Preliminaries: the z- and Discrete Fourier Transforms
  2.1 The z-transform
  2.2 The Discrete Fourier Transform
  2.3 Properties of the discrete Fourier transform
  2.4 DFT of random sequences
3 Practical estimation of spectra
  3.1 Aliasing
  3.2 Aliasing
  3.3 Spectral leakage and tapering
  3.4 Examples of Spectra
4 Processing of time sequences
  4.1 Filtering
  4.2 Correlation
  4.3 Deconvolution
5 Processing two-dimensional data
  5.1 The 2D Fourier Transform
  5.2 2D Filtering
  5.3 Travelling waves

Part two: INVERSION
6 Linear Parameter Estimation
  6.1 The linear problem
  6.2 Least squares solution of over-determined problems
  6.3 Weighting the data
  6.4 Model Covariance Matrix and the Error Ellipsoid
  6.5 “Robust methods”
7 The Underdetermined Problem
  7.1 The null space
  7.2 The Minimum Norm Solution
  7.3 Ranking and winnowing
  7.4 Damping and the Trade-off Curve
  7.5 Parameter covariance matrix
  7.6 The Resolution Matrix
8 Nonlinear Inverse Problems
  8.1 Methods available for nonlinear problems
  8.2 Earthquake Location: an Example of Nonlinear Parameter Estimation
  8.3 Quasi-linearisation and Iteration for the General Problem
  8.4 Damping, Step-Length Damping, and Covariance and Resolution Matrices
  8.5 The Error Surface
9 Continuous Inverse Theory
  9.1 A linear continuous inverse problem
  9.2 The Dirichlet Condition
  9.3 Spread, Error, and the Trade-off Curve
  9.4 Designing the Averaging Function
  9.5 Minimum Norm Solution
  9.6 Discretising the Continuous Inverse Problem
  9.7 Parameter Estimation: the Methods of Backus and Parker

Part three: APPLICATIONS
10 Fourier Analysis as an inverse problem
  10.1 The Discrete Fourier Transform and Filtering
  10.2 Wiener Filters
  10.3 Multitaper Spectral Analysis
11 Seismic Travel Times and Tomography
  11.1 Beamforming
  11.2 Tomography
  11.3 Simultaneous Inversion for Structure and Earthquake Location
12 Geomagnetism
  12.1 Introduction
  12.2 The Forward Problem
  12.3 The Inverse Problem: Uniqueness
  12.4 Damping
  12.5 The Data
  12.6 Solutions along the Trade-off Curve
  12.7 Covariance, Resolution, and Averaging Functions
  12.8 Finding Fluid Motion in the Core
Appendix 1 Fourier Series
Appendix 2 The Fourier Integral Transform
Appendix 3 Shannon’s Sampling Theorem
Appendix 4 Linear Algebra
Appendix 5 Vector Spaces and the Function Space
Appendix 6 Lagrange Multipliers and Penalty Parameters
Appendix 7 Files for the Computer Exercises
References
Index

List of Illustrations

2.2 In the Argand diagram our choice for z, z = exp(iωΔt), always lies on the unit circle because |z| = 1. Discretisation places the points uniformly around the unit circle. In this case N = 12 and there are 12 points around the unit circle.
2.3 Boxcar
2.4 Cyclic convolution
2.5 Stromboli velocity/displacement.
3.1 Aliasing in the time domain.
3.2 Aliasing in the frequency domain
3.3 Normal modes for different window lengths, boxcar window.
3.4 Tapers
3.5 Proton magnetometer data. Top panel shows the raw data, middle panel the spectrum, lower panel the low frequency portion of the spectrum showing diurnal and semi-diurnal peaks.
3.6 Airgun source depth
3.7 Microseismic noise
4.1 Aftershock for Nov 1 event
4.2 Butterworth filters, amplitude spectra

4.3 Low pass filtered seismograms: effect of .
4.4 Causal and zero-phase filters compared
4.5 Water level method
5.1 2D Filtering.
5.2 2D Aliasing.
5.3 Upward continuation
5.4 Separation by phase speed
5.5 Example of ω–k filtering of a seismic reflection walk-away noise test. See text for details.
6.1 Two-layer density model


6.2 Picking errors
6.3 Dumont D’Urville jerk
6.4 Error ellipse
6.5 Moment of Inertia and Mass covariance
6.6 Histograms of residuals of geomagnetic measurements from a model of the large scale core field. (a) residuals from an L1-norm model compared with the double-exponential distribution (solid line); (b) residuals from a conventional least squares model without data rejection compared with the Gaussian distribution; (c) least squares with data rejection.
7.1 Null space for mass/inertia problem
7.2 Trade-off curve
8.1 Hypocentre partial derivatives
8.2 Error Surfaces
9.1 Data kernels for mass and moment of inertia

9.2 Averaging function designed to give an estimate of the mass of the Earth’s core (solid line) and the desired averaging function
10.1 The first 4 optimal tapers found by solving (10.29). Note that tapers 1 and 3 are symmetrical, 2 and 4 antisymmetrical. Taper 1 has least spectral leakage but does not use the extremes of the time series; higher tapers make use of the beginning and end of the record. The corresponding eigenvalues (bandwidth retention factors) are (1) 1.0 (2) 0.99976 (3) 0.99237 (4) 0.89584.

11.1 Top: array records from a borehole shot to 68 geophones on land with 10 m spacing. Middle: results from the initial model. Trace 9 is the beam formed by a simple stack with unit weighting. Bottom: results of the inversion.
11.2 A regional event (1 November 1991, S W, Ms 6.7) at the broad-band, three-component array operated by the University of Leeds, UK, in the Tararua Mountain area of North Island, New Zealand. The time axes do not correspond to the origin time of the earthquake.
11.3 Results of the filtered P and S wavelets in the frequency band 1 to 2 Hz for the regional event shown in Figure 11.2. Labels stand for the data, the initial model, and the inverted model. The numbers at the left of each trace of the inverted model are stacking weights. An SP-converted phase is present in the vertical and NS components of ltw1 and ltw3, close to the S wave. It is suppressed in the S wave beams.
11.4 1D tomography
11.5 Geometry of rays and receivers for the ACH inversion. Only one ray path passes through both blocks of one pair, so there will be a trade-off between the velocities in the two, creating an element of the null space; the same applies to a second pair of blocks. Ray paths 1 and 2 both pass through a third block but have no others in common, which allows separation of anomalies in that block from those in any other sampled block.


12.1 Data kernels for the continuous inverse problem of finding the radial component of magnetic field on the core surface from a measurement of the radial component and the horizontal component at the Earth’s surface, as a function of angular distance between the two points. Note that radial component measurements sample best immediately beneath the site, but horizontal components sample best some 23° away.
12.2 Data distribution plots. From the top: epoch 1966, dominated by total intensity from the POGO satellite. Epoch 1842. Epoch 1777.5, the time of Cook’s voyages. Epoch 1715, containing Halley’s voyages and Feuillée’s measurements of inclination.
12.3 Trade-off curve
12.4 Solutions for epoch 1980 for 4 values of the damping constant marked on the trade-off curve in Figure 12.3. The radial field is plotted at the core radius. They show the effect of smoothing as the damping is increased.
12.5 Contour maps of the errors in the estimated radial field at the core-mantle boundary for (a) 1715.0; (b) 1915.5; and (c) 1945.5. The contour interval is 5 and the units are 10 T; the projection is Plate Carrée.
12.6 Resolution matrices for three epochs with quite different datasets: (a) epoch 1980, (b) 1966, (c) 1715.
12.7 Averaging functions for the same three models as Figure 12.6 and two points on the core-mantle boundary. Good resolution is reflected in a sharp peak centred on the chosen site. Resolution is very poor in the south Pacific for epoch 1715.0.
A1.1 Square wave

1 Introduction

1.1 The digital revolution

Recording the output from geophysical instruments has undergone four stages of development during this century: mechanical, optical, analogue magnetic, and digital. Take the seismometer as a typical example. The principle of the basic sensor remains the same: the swing of a test mass in response to motion of its fixed pivot is monitored and converted to an estimate of the velocity of the pivot. Inertia and damping determine the response of the sensor to different frequencies of ground motion; different mechanical devices measured different frequency ranges. Ocean waves generate a great deal of noise in the range 0.1–0.5 Hz, the microseismic noise band, and it became normal practice to install a short period instrument to record frequencies above 0.5 Hz and a long period instrument to record frequencies below 0.1 Hz.

Early mechanical systems used levers to amplify the motion of the mass to drive a pen. The classic short period, high-gain design used an inverted pendulum to measure the horizontal component of motion. A large mass was required simply to overcome friction in the pen and lever system.

An optical lever reduces the friction dramatically. A light beam is directed onto a mirror, which is twisted by the response of the sensor; the reflected beam shines onto photographic film, so the deflection of the beam records the motion. The amplification is determined by the distance between the mirror and the film. Optical recording is also compact: the film may be enlarged to a more readable size. It was in common use in the 1960s and 1970s.

Electromechanical devices allow motion of the mass to be converted to a voltage, which is easy to transmit, amplify, and record. Electromagnetic feedback seismometers use a null method, in which an electromagnet maintains the mass in

a constant position. The voltage required to drive the electromagnet is monitored and forms the output of the sensor.

This voltage can be recorded on a simple tape recorder in analogue form. There is a good analogy with tape recording sound, since seismic waves are physically very similar to low frequency sound waves. The concept of fidelity of recording carries straight across to seismic recording. A convenient way to search an analogue tape for seismic sources is to simply play it back fast, thus increasing the frequency into the audio range, and listen for bangs. Analogue magnetic records could be converted to paper records simply by playing them through a chart recorder.

The digital revolution started in seismology in about 1975, notably when the World-Wide Standardised Seismograph Network (WWSSN) was replaced by Seismological Research Observatories (SRO). These were very expensive installations requiring a computer in a building on site. The voltage is sampled in time and converted to a number for input to the computer. The tape recording systems were not able to record the incoming data continuously, so the instrument was triggered and a short record retained for each event. Two channels (sometimes three) were output: a finely-sampled short period record for the high frequency arrivals and a coarsely-sampled channel (usually one sample each second) for the longer period surface waves. Limitations of the recording system meant that SROs did not herald the great revolution in seismology: that had to wait for better mass storage devices.

The great advantage of digital recording is that it allows replotting and processing of the data after recording. If a feature is too small to be seen on the plot, you simply plot it on a larger scale. More sophisticated methods of processing allow us to remove all the energy in the microseismic noise band, obviating the need for separate short and long period instruments. It is even possible to simulate an older seismometer simply by processing, provided the sensor records all the information that would have been captured by the simulated instrument. This is sometimes useful when comparing seismograms from different instruments used to record similar earthquakes. Current practice is therefore to record as much of the signal as possible and process after recording. This has one major drawback: storage of an enormous volume of data.
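As an illustration of this kind of post-recording processing, the following sketch (not one of the book's computer exercises; the sampling rate, filter order, and synthetic trace are assumptions made here) removes the 0.1–0.5 Hz microseismic band with a zero-phase Butterworth band-stop filter:

```python
import numpy as np
from scipy import signal

# Assumed example values: 20 Hz sampling and a 4th-order filter;
# neither is prescribed by the text.
fs = 20.0                                # samples per second
t = np.arange(0, 600, 1 / fs)            # ten minutes of record

# Synthetic trace: a 1 Hz "signal" plus stronger 0.2 Hz "microseisms" and noise.
trace = (np.sin(2 * np.pi * 1.0 * t)
         + 2.0 * np.sin(2 * np.pi * 0.2 * t)
         + 0.1 * np.random.randn(t.size))

# Zero-phase band-stop filter spanning the 0.1-0.5 Hz microseismic noise band.
b, a = signal.butter(4, [0.1, 0.5], btype="bandstop", fs=fs)
filtered = signal.filtfilt(b, a, trace)
```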

The storage problem was essentially solved in about 1990 by the advent of cheap hard disks and tapes with capacities of several gigabytes. Portable broadband seismometers were developed at about the same time, creating a revolution in digital seismology: prior to 1990 high-quality digital data was only available from a few permanent, manned observatories. After 1990 it was possible to deploy arrays of instruments in temporary sites to study specific problems, with only infrequent visits to change disks or tapes.


1.2 Digital Recording

The sensor is the electromechanical device that converts ground motion into voltage; the recorder converts the voltage into numbers and stores them. The ideal sensor would produce a voltage that is proportional to the ground motion but such a device is impossible to make (the instrument response would have to be constant for all frequencies, which requires the instrument to respond instantaneously to any input, see Section 2). The next best thing is a linear response, in which the output is a convolution of the ground motion with the transfer function of the instrument.
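A minimal sketch of such a linear response, assuming an invented instrument impulse response (the text specifies none), models the recorded voltage as the discrete convolution of the ground motion with that response:

```python
import numpy as np

dt = 0.01                                 # assumed sampling interval, seconds
t = np.arange(0, 10, dt)

ground = np.sin(2 * np.pi * 1.0 * t)      # hypothetical ground velocity

# Hypothetical instrument impulse response: a damped 2 Hz oscillation.
h = np.exp(-t / 0.5) * np.sin(2 * np.pi * 2.0 * t) * dt

# Linear instrument: the output voltage is the convolution of the
# ground motion with the instrument response.
voltage = np.convolve(ground, h)[:t.size]
```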

Let the voltage output be v(t). The recorder samples this function regularly in time, at a sampling interval Δt, and creates a sequence of numbers:

v_k = v(kΔt),    k = 0, 1, 2, …        (1.1)

The recorder stores each number as a string of bits, in the same way as any computer.
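A short sketch of this sampling-and-storage step, with an assumed sampling interval, gain, and word length (none of them taken from the text), shows how a continuous voltage becomes the stored sequence of integer counts of equation (1.1):

```python
import numpy as np

dt = 0.01                                 # assumed sampling interval, seconds
counts_per_volt = 4.0e5                   # assumed recorder gain

def voltage(t):
    """Stand-in for the sensor output v(t); any band-limited signal will do."""
    return 0.002 * np.sin(2 * np.pi * 1.5 * t)

# Equation (1.1): sample v(t) at t = k*dt to form the sequence v_k.
k = np.arange(2000)
v_k = voltage(k * dt)

# Store each sample as an integer count in a 24-bit word (an assumed word
# length); samples beyond the largest representable count would be clipped.
counts = np.round(v_k * counts_per_volt).astype(np.int32)
counts = np.clip(counts, -(2**23), 2**23 - 1)
```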

Three quantities describe the limitations of the sensor: the sensitivity is the smallest signal that produces non-zero output; the resolution is the smallest change in the signal that produces non-zero output; and the linearity determines the extent to which the signal can be recovered from the output. For example, the ground motion may be so large that the signal exceeds the maximum level; the record is then said to be “clipped”. The recorded motion is no longer linearly related to the actual ground motion, and information about the true motion is lost.

The same three quantities can be defined for the recorder. A pen recorder’s linearity is that between the input voltage and the movement of the pen, which depends on the electronic circuits and mechanical linkages; its resolution and accuracy are limited by the thickness of the line the pen draws. For a digital recorder, linearity requires faithful conversion of the analogue voltage to a digital count, while resolution is set by the voltage corresponding to one digital count.

The recorder suffers two further limitations: the dynamic range, the ratio of the maximum to the minimum possible recorded signal, usually expressed in decibels as 20 log10 of that ratio; and the maximum frequency that can be recorded. For a pen recorder the dynamic range is set by the height of the paper, while the maximum frequency is set by the drum speed. For a digital recorder the dynamic range is set by the number of bits available to store each member of the time sequence, while the maximum frequency is set by the sampling interval or sampling frequency (we shall see later that the maximum meaningful frequency is in fact only half the sampling frequency).
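As a numerical illustration, with a 24-bit word length and a 0.01 s sampling interval assumed purely for the example, the two limits of a digital recorder can be computed directly:

```python
import math

bits = 24                     # assumed word length of the recorder
dt = 0.01                     # assumed sampling interval, seconds

# Dynamic range of an N-bit recorder in decibels:
# roughly 20*log10(2**N), i.e. about 6 dB per bit.
dynamic_range_db = 20 * math.log10(2**bits)     # ~144.5 dB for 24 bits

# Maximum meaningful (Nyquist) frequency: half the sampling frequency.
nyquist_hz = 1.0 / (2.0 * dt)                   # 50 Hz for dt = 0.01 s
```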
