We present an R package for computing univariate power spectral density estimates with little or no tuning effort. We employ sine multitapers, allowing the number to vary with frequency in order to reduce the mean square error, the sum of squared bias and variance, at each point. The approximate criterion of Riedel and Sidorenko (1995) is modified to prevent the runaway averaging that otherwise occurs when the curvature of the spectrum goes to zero. An iterative procedure refines the number of tapers employed at each frequency. The resultant power spectra possess significantly lower variances than those of traditional, non-adaptive estimators. The sine tapers also provide useful spectral leakage suppression. Resolution and uncertainty can be estimated from the number of degrees of freedom (twice the number of tapers). This technique is particularly suited to long time series, because it demands only one numerical Fourier transform and requires no costly additional computation of taper functions, such as the Slepian functions. It also avoids the degradation of low-frequency performance associated with record segmentation in Welch's method. Above all, the adaptive process relieves the user of the need to set a tuning parameter, such as a time-bandwidth product or segment length, that fixes the frequency resolution over the entire frequency interval; instead it provides frequency-dependent spectral resolution tailored to the shape of the spectrum itself. We demonstrate the method by applying it to continuous borehole strainmeter data from a station in the Plate Boundary Observatory, namely station B084 at the Piñon Flat Observatory in southern California. The example illustrates how psd elegantly handles spectra with large dynamic range and mixed-bandwidth features, which are typically found in geophysical datasets. (C) 2013 Elsevier Ltd. All rights reserved.
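The fixed-K sine-multitaper estimate that serves as the starting point for the adaptive scheme can be sketched in a few lines. The sketch below applies each sine taper in the time domain for clarity; the approach described above achieves the same result more economically with a single FFT by smoothing in the frequency domain. The function name and normalization here are illustrative, not the package's API.

```python
import numpy as np

def sine_multitaper_psd(x, k, fs=1.0):
    """Fixed-K sine-multitaper PSD: average the spectra of the series
    weighted by the first K sine tapers.  The adaptive method then
    varies K with frequency to balance bias against variance."""
    n = len(x)
    t = np.arange(1, n + 1)
    psd = np.zeros(n // 2 + 1)
    for j in range(1, k + 1):
        # j-th orthonormal sine taper
        taper = np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * j * t / (n + 1))
        psd += np.abs(np.fft.rfft(x * taper)) ** 2
    return psd / (k * fs)
```

With K tapers the estimate has roughly 2K degrees of freedom at each frequency, which is what makes the variance and resolution statements above possible.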
We examined the transverse electric mode of 2D magnetotelluric sounding for a simple system comprising a laterally variable thin sheet over an insulator terminated by a perfectly conducting base. We found, by asymptotic analysis and a numerical example, that the phase of the c response, or that of the corresponding entry in the impedance tensor, is completely unrestricted. This behavior is unlike that of 1D systems or transverse magnetic mode induction, where the phase is confined to a single quadrant.
Numerical solution of global geomagnetic induction problems in two and three spatial dimensions can be conducted with commercially available, general-purpose, scripted, finite-element software. We show that FlexPDE is capable of solving a variety of global geomagnetic induction problems. The models treated can include arbitrary electrical conductivity of the core and mantle, and arbitrary spatial structure and time behaviour of the primary magnetic field. A thin surface layer of laterally heterogeneous conductivity, representing the oceans and crust, may be represented by a boundary condition at the Earth-space interface. We describe a numerical test, or validation, of the program by comparing its output to analytic and semi-analytic solutions for several electromagnetic induction problems: (1) concentric spherical shells representing a layered Earth in a time-varying, uniform, external magnetic field, (2) eccentrically nested conductive spheres in the same field and (3) homogeneous spheres or cylinders, initially at rest, then rotating at a steady rate in a constant, uniform, external field. Calculations are performed in both the time and frequency domains, and in both 2-D and 3-D computational meshes, with adaptive mesh refinement. Root-mean-square accuracies of better than 1 per cent are achieved in all cases. A unique advantage of our technique is the ability to model Earth rotation in both the time and the frequency domain, which is especially useful for simulating satellite data.
Magnetotelluric surveys on the seafloor have become an important part of marine geophysics in recent years. The distorting effects of topographic relief on the electromagnetic fields can be far-reaching, but local terrain is also important. Thus, computational techniques that can treat a large area containing fine-scale topography could find widespread application. We describe a new solution to the problem based on a well-established theory of electromagnetic induction in thin sheets. The procedure requires taking the Fourier transform of the integral equations derived by Dawson and Weaver in 1979, and by McKirdy, Weaver and Dawson in 1985. The equations in the transformed electric field are solved iteratively by a new technique. We prove the new iterative procedure is always convergent, whereas the original scheme diverges when the grid spacing of the discretization is small. We also give a means of correcting for distant features that need not be specified in as great detail. Preliminary tests confirm the new process is very efficient and that topographic data sets of several million points will be handled with ease.
A closed-form solution is given for a 2-D, transverse electric mode, magnetotelluric (MT) problem. The model system consists of a finite vertical thin conductor with variable integrated conductivity over a perfectly conducting base. A notable property of the solution is that the frequency response possesses a single pole in the complex plane. Systems with finitely many resonances play a central role in the 1-D MT inverse problem based on finite data sets, but until now, no 2-D system of this kind was known. The particular model is shown to be just one of a large class of thin conductors with the same property, and further examples are given. The solutions of the induction problem for members of this family can often be written in compact closed form, making them the simplest known solutions to the 2-D MT problem.
The practical 2-D magnetotelluric inverse problem seeks to determine the shallow-Earth conductivity structure using finite and uncertain data collected on the ground surface. We present an approach based on using PLTMG (Piecewise Linear Triangular MultiGrid), a special-purpose code for optimization with second-order partial differential equation (PDE) constraints. At each frequency, the electromagnetic field and conductivity are treated as unknowns in an optimization problem in which the data misfit is minimized subject to constraints that include Maxwell's equations and the boundary conditions. Within this framework it is straightforward to accommodate upper and lower bounds or other conditions on the conductivity. In addition, as the underlying inverse problem is ill-posed, constraints may be used to apply various kinds of regularization. We discuss some of the advantages and difficulties associated with using PDE-constrained optimization as the basis for solving large-scale nonlinear geophysical inverse problems. Combined transverse electric and transverse magnetic complex admittances from the COPROD2 data are inverted. First, we invert penalizing size and roughness giving solutions that are similar to those found previously. In a second example, conventional regularization is replaced by a technique that imposes upper and lower bounds on the model. In both examples the data misfit is better than that obtained previously, without any increase in model complexity.
Weidelt and Kaikkonen showed that in the transverse magnetic (TM) mode of magnetotellurics it is not always possible to match exactly the 2-D response at a single site with a 1-D model, although a good approximation usually seems possible. We give a new elementary example of this failure. We show for the first time that the transverse electric (TE) mode responses can also be impossible to match with a 1-D response, and that the deviations can be very large.
The spectral analysis of geological and geophysical data has been a fundamental tool in understanding Earth's processes. We present a Fortran 90 library for multitaper spectrum estimation, a state-of-the-art method that has been shown to outperform the standard methods. The library goes beyond power spectrum estimation and extracts for the user more information including confidence intervals, diagnostics for single frequency periodicities, and coherence and transfer functions for multivariate problems. In addition, the sine multitaper method can also be implemented. The library presented here provides the tools needed in multiple fields of the Earth sciences for the analysis of data as evident from various examples. (C) 2008 Elsevier Ltd. All rights reserved.
We describe a new technique for implementing the constraints on magnetic fields arising from two hypotheses about the fluid core of the Earth, namely the frozen-flux hypothesis and the hypothesis that the core is in magnetostrophic force balance with negligible leakage of current into the mantle. These hypotheses lead to time-independence of the integrated flux through certain 'null-flux patches' on the core surface, and to time-independence of their radial vorticity. Although the frozen-flux hypothesis has received attention before, constraining the radial vorticity has not previously been attempted. We describe a parametrization and an algorithm for preserving the topology of radial magnetic fields at the core surface while allowing morphological changes. The parametrization is a spherical triangle tessellation of the core surface. Topology with respect to a reference model (based on data from the Oersted satellite) is preserved as models at different epochs are perturbed to optimize the fit to the data; the topology preservation is achieved by the imposition of inequality constraints on the model, and the optimization at each iteration is cast as a bounded-value least-squares problem. For epochs 2000, 1980, 1945, 1915 and 1882 we are able to produce models of the core field which are consistent with flux and radial vorticity conservation, thus providing no observational evidence for the failure of the underlying assumptions. These models are a step towards the production of models which are optimal for the retrieval of frozen-flux velocity fields at the core surface.
We develop a method to obtain confidence intervals of earthquake source parameters, such as stress drop, seismic moment and corner frequency, from single station measurements. We use the idea of jackknife variance combined with a multitaper spectrum estimation to obtain the confidence regions. The approximately independent spectral estimates provide an ideal case to perform jackknife analysis. Given the particular properties of the problem to solve for source parameters, including high dynamic range, non-negativity, non-linearity, etc., a log transformation is necessary before performing the jackknife analysis. We use a Student's t distribution after transformation to obtain accurate confidence intervals. Even without the distribution assumption, we can generate typical standard deviation confidence regions. We apply this approach to four earthquakes recorded at 1.5 and 2.9 km depth at Cajon Pass, California. It is necessary to propagate the errors from all unknowns to obtain reliable confidence regions. From the example, it is shown that a 50 per cent error in stress drop is not unrealistic, and even higher errors are expected if velocity structure and location errors are present. An extension to multiple station measurement is discussed.
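The log-transformed, delete-one jackknife described above can be sketched as follows, assuming K approximately independent multitaper eigenspectra at a single frequency. The function names and the caller-supplied Student's t quantile are illustrative, not the authors' code.

```python
import math

def jackknife_log_stats(eigenspectra):
    """Delete-one jackknife mean and standard error computed on the log
    of K approximately independent eigenspectra.  The log transform
    accommodates the non-negativity and high dynamic range of spectra."""
    y = [math.log(s) for s in eigenspectra]
    k = len(y)
    total = sum(y)
    loo = [(total - yi) / (k - 1) for yi in y]  # leave-one-out means
    mbar = sum(loo) / k
    var_jack = (k - 1) / k * sum((v - mbar) ** 2 for v in loo)
    return total / k, math.sqrt(var_jack)

def jackknife_log_ci(eigenspectra, t_quantile):
    """Confidence interval back-transformed to linear units; t_quantile
    is the Student's t critical value with K - 1 degrees of freedom."""
    m, se = jackknife_log_stats(eigenspectra)
    return math.exp(m - t_quantile * se), math.exp(m + t_quantile * se)
```

Because the interval is built in log space and exponentiated, it is automatically positive and asymmetric, which matches the behaviour of source-parameter uncertainties.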
We examine the nonlinear inverse problem of electromagnetic induction to recover electrical conductivity. As this is an ill-posed problem based on inaccurate data, there is a critical need to find the reliable features of the models of electrical conductivity. We present a method for obtaining bounds on Earth's average conductivity that all conductivity profiles must obey. Our method is based entirely on optimization theory for an all-at-once approach to inverting frequency-domain electromagnetic data. The forward modeling equations are constraints in an optimization problem solving for the electric fields and the conductivity simultaneously. No regularization is required to solve the problem. The computational framework easily allows additional inequality constraints to be imposed, allowing us to further narrow the bounds. We draw conclusions from a global geomagnetic depth sounding data set and compare with laboratory results, inferring temperature and water content through published Boltzmann-Arrhenius conductivity models. If the upper mantle is assumed to be volatile free, we find it has an average temperature of 1409-1539 degrees C. For the top 1000 km of the lower mantle, we find an average temperature of 1849-2008 degrees C. These are in agreement with generally accepted mantle temperatures. Our conclusions about water content of the transition zone disagree with previous research. With our bounds on conductivity, we calculate that a transition zone consisting entirely of wadsleyite holds < 0.27 wt.% water, and as we add in a fraction of ringwoodite, the upper bound on water content decreases proportionally. This water content is less than the 0.4 wt.% water required for melt or pooling at the 410 km seismic discontinuity. Published by Elsevier B.V.
The power spectral density of geophysical signals provides information about the processes that generated them. We present a new approach to determine power spectra based on Thomson's multitaper analysis method. Our method reduces the bias due to the curvature of the spectrum close to the frequency of interest. Even while maintaining the same resolution bandwidth, bias is reduced in areas where the power spectrum is significantly quadratic. No additional sidelobe leakage is introduced. In addition, our methodology reliably estimates the derivatives (slope and curvature) of the spectrum. The extra information gleaned from the signal is useful for parameter estimation or to compare different signals.
We present two approaches to invert geophysical measurements and estimate subsurface properties and their uncertainties when little is known a priori about the size of the errors associated with the data. We illustrate these approaches by inverting first-arrival traveltimes of seismic waves measured in a vertical well to infer the variation of compressional slowness in depth. First, we describe a Bayesian formulation based on probability distributions that define prior knowledge about the slowness and the data errors. We use an empirical Bayes approach, where hyperparameters are not well known ahead of time (e.g., the variance of the data errors) and are estimated from their most probable value, given the data. The second approach is a non-Bayesian formulation that we call spectral, in the sense that it uses the power spectral density of the traveltime data to constrain the inversion (e.g., to estimate the variance of the data errors). In the spectral approach, we vary assumptions made about the characteristics of the slowness signal and evaluate the resulting slowness estimates and their uncertainties. This approach is computationally simple and starts from a few assumptions, which can be checked during the analysis. On the other hand, it requires evenly spaced traveltime measurements, and it cannot be extended easily (e.g., to data that have gaps). In contrast, the Bayesian framework is based on a general theory that can be generalized immediately, but it is more involved computationally. Despite the conceptual and practical differences, we find that the two approaches give the same results when they start from the same assumptions: The allegiance to a Bayesian or non-Bayesian formulation matters less than what one is willing to assume when solving the inverse problem.
We analyze the problem of reliably estimating uncertainties of the earthquake source spectrum and related source parameters using Empirical Green Functions (EGF). We take advantage of the large dataset available from 10 seismic stations at hypocentral distances (10 km < d < 50 km) to average spectral ratios of the 2001 M5.1 Anza earthquake and 160 nearby aftershocks. We estimate the uncertainty of the average source spectrum of the M5.1 target earthquake by performing propagation of errors, which, due to the large number of EGFs used, is significantly smaller than that obtained using a single EGF. Our approach provides estimates of both the earthquake source spectrum and its uncertainties, plus confidence intervals on related source parameters such as radiated seismic energy or apparent stress, allowing the assessment of statistical significance. This is of paramount importance when comparing different sized earthquakes and analyzing source scaling of the earthquake rupture process. Our best estimate of radiated energy for the target earthquake is 1.24 × 10¹¹ J with 95% confidence interval (0.73 × 10¹¹, 2.28 × 10¹¹). The estimated apparent stress of 0.33 (0.19, 0.59) MPa is relatively low compared to previous estimates from smaller earthquakes (1 MPa) in the same region.
Stacks of globally distributed relative paleointensity records from sediment cores are used to study temporal variations in the strength of the geomagnetic dipole. We assess the intrinsic accuracy and resolution of such stacks, which may be limited by errors in paleointensity, non-dipole field contributions, and the age scales assigned to each sediment core. Our approach employs two types of simulations. Numerical geodynamo models generate accurate predictions of time series of magnetic variations anywhere in the world. The predicted variations are then degraded using an appropriate statistical model to simulate expected age and paleointensity errors. A series of experiments identify the major contributors to error and loss of resolution in the resulting stacks. The statistical model simulates rock magnetic and measurement errors in paleointensity, and age errors due to finite sampling and approximations inherent in interpolation, incomplete or inaccurate tie point information, and sedimentation rate variations. Data sampling and interpolation to a designated age scale cause substantial decorrelation, and control the maximum level of agreement attainable between completely accurate records. The particular method of interpolation appears to have little effect on the coherence between accurate records, but denser tie point data improve the agreement. Age errors decorrelate geomagnetic signals, usually at shorter periods, although they can destroy coherence over a broad range of periods. The poor correlation between neighboring paleomagnetic records often observed in real data can be accounted for by age errors of moderate magnitude. In a global dataset of 20 records, modeled after the SINT800 compilation and spanning 300 kyr, our results show that dipole variations with periods longer than about 20 kyr can be recovered by the stacking process. 
Reasonable contributions to error in the paleointensity itself have a modest influence on the result, as do non-dipole field contributions whose effect is minor at periods longer than 10 kyr. Modest errors in the ages of tie points probably account for most of the degradation in geomagnetic signal. Stacked sedimentary paleomagnetic records can be improved by denser temporal sampling and careful selection of independent high-quality tie points. (C) 2004 Elsevier B.V. All rights reserved.
In many interferometers, two fringe signals can be generated in quadrature. The relative phase of the two fringe signals depends on whether the optical path length is increasing or decreasing. A system is developed in which the two quadrature fringe signals are digitized and analyzed in real time with a digital signal processor to yield a linear, high-resolution, wide-dynamic-range displacement transducer. The resolution in a simple Michelson interferometer with inexpensive components is 5 × 10⁻¹³ m Hz⁻¹/² at 2 Hz. (C) 2004 Optical Society of America.
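The core of such a processor is a four-quadrant arctangent of the two quadrature signals followed by phase unwrapping. A minimal sketch, assuming ideal unit-amplitude quadrature signals and the Michelson relation x = φλ/(4π); the function name and default wavelength are illustrative, not the instrument's actual parameters:

```python
import numpy as np

def quadrature_displacement(i_sig, q_sig, wavelength=633e-9):
    """Recover displacement from two quadrature fringe signals.
    In a Michelson interferometer the optical phase is 4*pi*x/lambda
    (the beam traverses the moving arm twice), so x = phase * lambda
    / (4*pi).  The direction of phase progression encodes whether the
    path length is increasing or decreasing."""
    phase = np.unwrap(np.arctan2(q_sig, i_sig))
    return (phase - phase[0]) * wavelength / (4 * np.pi)
```

Unwrapping removes the 2π ambiguity of the arctangent, so the transducer stays linear over many fringes, which is what gives the wide dynamic range.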
The high-amplitude magnetic anomalies observed by the Mars Global Surveyor imply the presence of a large intensity of magnetization in the Martian crust. We investigate the mathematical question of determining the distribution of magnetization that has the smallest possible intensity, without any assumptions about the direction of magnetization. The greatest lower bound on intensity found in this way depends on an assumed layer thickness. An analytical expression is discovered for the optimal magnetization, and numerical methods are described for solving the equations that determine the distribution. Some relatively small scale numerical calculations illustrate the theory. These calculations enable us to conclude, for example, that if the magnetization of Mars is confined to a 50-km-thick layer, it must be magnetized with an intensity of at least 4.76 A/m.
The magnetic field originating within the Earth can be divided into core and crustal components, which can be characterized by the geomagnetic power spectrum. While the core spectrum is determined quite well by satellite studies, models of the shorter wavelength crustal spectrum disagree considerably. We reexamine aeromagnetic data used by O'Brien et al. [1999] to obtain a new, improved estimate of the crustal geomagnetic power spectrum. O'Brien et al.'s model fell somewhat short of giving a satisfactory connection between the longer-wavelength satellite studies and a reliable crustal model. We show that this was caused by an inadequate processing step that aimed to remove external variations from the data. We moreover attempt to bound the long-wavelength part of the spectrum using constraints of monotonicity in the correlation of the magnetization. However, this proves to be a weak constraint. Reversing the process, though, we are able to evaluate the correlation function using the reliable part of our geomagnetic spectrum. Thus we can obtain a sensible estimate for the long-wavelength part of the spectrum that is not well constrained by the data. Our new model shows better agreement with earlier satellite studies and can be considered reliable in the spherical harmonic degree range l = 30 to 1200.
We describe the experimental procedure we use to calibrate a cryogenic pass-through magnetometer. The procedure is designed to characterize the magnetometer sensitivity as a function of position within the sensing region. Then we extend a theory developed in an earlier paper to cover inexact observations and apply it to the data set. The theory allows the calculation of a smooth, harmonic, internally consistent interpolating function for each of the nine components of the response tensor of the magnetometer. With these functions we can calculate the response to a dipole source in any orientation and position, and predict the magnetometer signal from any kind of specimen. The magnetometer in the paleomagnetic laboratory onboard the research vessel Joides Resolution is the subject of one such experiment and we present the results. The variation with position of sensitivity is displayed in a series of plane slices through the magnetometer. We discover from the calibration model that the X and Z coils are misaligned so that the magnetic centre of the coils is displaced from the geometric centre by approximately 0.7 cm. We synthesize the signal expected from the magnetometer when a variety of simple cores are measured. We find that, unless appropriate corrections are made, changes in magnetization direction can appear as variations in magnetic intensity, and conversely, fluctuations in the magnetization strength can produce apparent swings in declination and inclination. The magnitude of these effects is not small and is certainly worth taking into account in the interpretation of records from this kind of instrument. In a pilot study on data from a core measured with the shipboard magnetometer, we observe some large distortions, particularly in declination, that are attributable to uncorrected instrumental effects.
A major limitation in the analysis of physical quantities measured from a stratigraphic core is incomplete knowledge of the depth to age relationship for the core. Records derived from diverse locations are often compared or combined to construct records that represent a global signal. Time series analysis of individual or combined records is commonly employed to seek quasi-periodic components or characterize the timescales of relevant physical processes. Assumptions that are frequently made in the approximation of depth to age relationships can have a dramatic and harmful effect on the spectral content of records from stratigraphic cores. A common procedure for estimating ages in a set of samples from a stratigraphic core is to assign, based on complementary data, the ages at a number of depths (tie points) and then assume a uniform accumulation rate between the tie points. Imprecisely dated or misidentified tie points and naturally varying accumulation rates give rise to discrepancies between the inferred and the actual ages of a sample. We develop a statistical model for age uncertainties in stratigraphic cores that treats the true, but in practice unknown, ages of core samples as random variables. For inaccuracies in the ages of tie points, we draw the error from a zero-mean normal distribution. For a variable accumulation rate, we require the actual ages of a sequence of samples to be monotonically increasing and the age errors to have the form of a Brownian bridge. That is, the errors are zero at the tie points. The actual ages are modeled by integrating a piecewise constant, randomly varying accumulation rate. In each case, our analysis yields closed form expressions for the expected value and variance of resulting errors in age at any depth in the core. By Monte Carlo simulation with plausible parameters, we find that age errors across a paleomagnetic record due to misdated tie points are likely of the same order as the tie point discrepancies. 
Those due to accumulation rate variations can be as large as 30 kyr, but are probably less than 10 kyr. We provide a method by which error estimates like these can be made for similar stratigraphic dating problems and apply our statistical model to an idealized marine sedimentary paleomagnetic record. Both types of errors severely degrade the spectral content of the inferred record. We quantify these effects using realistic tie point ages, their uncertainties and depositional parameters. (C) 2002 Elsevier Science B.V. All rights reserved.
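A minimal sketch of the accumulation-rate version of the model above, for a single pair of tie points: a positive, piecewise-constant random rate is integrated to give monotone ages, then rescaled so the errors vanish at the ties, mirroring the Brownian-bridge condition. The lognormal rate law and the rate_sigma value are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def simulate_tiepoint_ages(depths, t_start, t_end, rate_sigma=0.3, seed=None):
    """Monte Carlo draw of sample ages between two dated tie points.
    Draws a lognormal (hence positive) accumulation rate per depth
    interval, integrates it so ages increase monotonically, and rescales
    so the first and last samples hit the tie-point ages exactly."""
    rng = np.random.default_rng(seed)
    depths = np.asarray(depths, dtype=float)
    rates = np.exp(rng.normal(0.0, rate_sigma, depths.size - 1))
    dt = np.diff(depths) * rates                 # time per depth interval
    ages = np.concatenate(([0.0], np.cumsum(dt)))
    ages *= (t_end - t_start) / ages[-1]         # pin the final tie point
    return t_start + ages
```

Differencing many such draws from the nominal linear age model gives the age-error distributions whose effect on spectral content is quantified above.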
Near-bottom magnetic data contain information on paleomagnetic field fluctuations during chron C5 as observed in both the North and South Pacific. The North Pacific data include 12 survey lines collected with a spatial separation of up to 120 km, and the South Pacific data consist of a single long line collected on the west flank of the East Pacific Rise (EPR) at 19°S. The North Pacific magnetic profiles reveal a pattern of linear, short-wavelength (2 to 5 km) anomalies (tiny wiggles) that are highly correlated over the shortest (3.8 km) to longest (120 km) separations in the survey. Magnetic inversions incorporating basement topography show that these anomalies are not caused by the small topographic relief. The character of the near-bottom magnetic profile from anomaly 5 on the west flank of the EPR, formed at a spreading rate more than twice that of the North Pacific, displays a remarkable similarity to the individual and stacked lines from the North Pacific survey area. Over distances corresponding to 1 m.y., 19 lows in the magnetic anomaly profile can be correlated between the North and South Pacific lines. Modeling the lows as due to short polarity events suggests that they may be caused by rapid swings of the magnetic field between normal and reversed polarities with little or no time in the reversed state. Owing to the implausibly high number of reversals required to account for these anomalies and the lack of any time in the reversed state, we conclude that the near-bottom signal is primarily a record of paleointensity fluctuations during chron C5. Spectral analysis of the North Pacific near-bottom lines shows that the signal is equivalent to a paleointensity curve with a temporal resolution of 40 to 60 kyr, while measurements of the smallest separations of correlatable dips in the field suggest a temporal resolution of 36 kyr.
We present a statistical analysis of magnetic fields simulated by the Glatzmaier-Roberts dynamically consistent dynamo model. For four simulations with distinct boundary conditions, means, standard deviations, and probability functions permit an evaluation based on existing statistical paleosecular variation (PSV) models. Although none closely fits the statistical PSV models in all respects, some simulations display characteristics of the statistical PSV models in individual tests. We also find that nonzonal field statistics do not necessarily reflect heat flow conditions at the core-mantle boundary. Multitaper estimates of power and coherence spectra allow analysis of time series of single, or groups of, spherical harmonic coefficients representing the magnetic fields of the dynamo simulations outside the core. Sliding window analyses of both power and coherence spectra from two of the simulations show that a 100 kyr averaging time is necessary to realize stationary statistics of their nondipole fields and that a length of 350 kyr is not long enough to fully characterize their dipole fields. Spectral analysis provides new insight into the behavior and interaction of the dominant components of the simulated magnetic fields, the axial dipole and quadrupole. Although we find spectral similarities between several reversals, there is no evidence of signatures that can be conclusively associated with reversals or excursions. We test suggestions that during reversals there is increased coupling between groups of spherical harmonic components. Despite evidence of coupling between antisymmetric and symmetric spherical harmonics in one simulation, we conclude that it is rare and not directly linked to reversals. In contrast to the reversal model of R. T. Merrill and P. L. McFadden, we demonstrate that the geomagnetic power in the dipole part of the dynamo simulations is either relatively constant or fluctuates synchronously with that of the nondipole part and that coupling between antisymmetric and symmetric components occurs when the geomagnetic power is high.
The Earth's magnetic field can be subdivided into core and crustal components, and we seek to characterize the crustal part through its spatial power spectrum, R_l. We process vector Magsat data to isolate the crustal field and then invert power spectral densities of flight-local components along-track for R_l following O'Brien et al. [1999]. Our model, designated LPPC, is accurate up to approximately spherical harmonic degree 45 (λ = 900 km): this is the resolution limit of our data and suggests that global crustal anomaly maps constructed from vector Magsat data should not contain features with wavelengths less than 900 km. We find continental power spectra to be greater than oceanic ones and attribute this to the relative thicknesses of continental and oceanic crust.
Parker, R., 2001. When plates were paving stones. In: Plate Tectonics: An Insider's History of the Modern Theory of the Earth (Oreskes, N., LeGrand, H. E., Eds.). Westview Press.