Numerical solution of global geomagnetic induction problems in two and three spatial dimensions can be conducted with commercially available, general-purpose, scripted, finite-element software. We show that FlexPDE is capable of solving a variety of global geomagnetic induction problems. The models treated can include arbitrary electrical conductivity of the core and mantle, and arbitrary spatial structure and time behaviour of the primary magnetic field. A thin surface layer of laterally heterogeneous conductivity, representing the oceans and crust, may be represented by a boundary condition at the Earth-space interface. We describe a numerical test, or validation, of the program by comparing its output to analytic and semi-analytic solutions for several electromagnetic induction problems: (1) concentric spherical shells representing a layered Earth in a time-varying, uniform, external magnetic field; (2) eccentrically nested conductive spheres in the same field; and (3) homogeneous spheres or cylinders, initially at rest, then rotating at a steady rate in a constant, uniform, external field. Calculations are performed in both the time and frequency domains, and in both 2-D and 3-D computational meshes, with adaptive mesh refinement. Root-mean-square accuracies of better than 1 per cent are achieved in all cases. A unique advantage of our technique is the ability to model Earth rotation in both the time and the frequency domain, which is especially useful for simulating satellite data.
Seafloor and sea surface gravity measurements are used to model the internal density structure of Axial Volcano. Seafloor measurements made at 53 sites within and adjacent to the Axial Volcano summit caldera provide constraints on the fine-scale density structure. Shipboard gravity measurements made along 540 km of track line above Axial Volcano and adjacent portions of the Juan de Fuca ridge provide constraints on the density over a broader region and on the isostatic compensation. The seafloor gravity anomalies give an average density of 2.7 g cm^{−3} for the uppermost portion of Axial Volcano. The sea surface gravity anomalies yield a local compensation parameter of 23%, significantly less than expected for a volcanic edifice built on zero age lithosphere. Three-dimensional ideal body models of the seafloor gravity measurements suggest that low-density material, with a density contrast of at least 0.15 g cm^{−3}, may be located underneath the summit caldera. The data are consistent with low-density material at shallow depths near the southern portion of the caldera, dipping downward to the north. The correlation of shallow low-density material and surface expressions of recent volcanic activity (fresh lavas and high-temperature hydrothermal venting) suggests a zone of highly porous crust. Seminorm minimization modeling of the surface gravity measurements also suggests a low-density region under the central portion of Axial Volcano. The presence of low-density material beneath Axial caldera suggests a partially molten magma chamber at depth.
Summary. We reduce the problem of constructing a smooth, 1-D, monotonically increasing velocity profile consistent with discrete, inexact τ(p) and X(p) data to a quadratic programming problem with linear inequality constraints. For a finite-dimensional realization of the problem it is possible to find a smooth velocity profile consistent with the data whenever such a profile exists. We introduce an unusual functional measure of roughness equivalent to the second central moment or ‘variance’ of the derivative of depth with respect to velocity for smooth profiles, and we prove that its minimal value is unique. In our experience, solutions minimizing this functional are very smooth in the sense of the two-norm of the second derivative and can be constructed inexpensively by solving one quadratic programming problem. Still smoother models (in more traditional measures) may be generated iteratively with additional quadratic programs. All the resulting models satisfy the τ(p) and X(p) data and reproduce travel-time data remarkably well, although sometimes τ(p) data alone are insufficient to ensure arrivals at large X; then an X(p) datum must be included.
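The quadratic-programming formulation can be illustrated with a toy discretization. In this sketch (the depth grid and data bounds are invented, and scipy's general-purpose SLSQP solver stands in for a dedicated QP code), a squared second-difference roughness is minimized subject to linear inequality constraints enforcing monotonicity and a data-derived band:

```python
import numpy as np
from scipy.optimize import minimize

n = 20
z = np.linspace(0.0, 1.0, n)         # normalized depth grid (hypothetical)
lo = 1.0 + 1.5 * z - 0.05            # invented lower data bounds on velocity
hi = 1.0 + 1.5 * z + 0.05            # invented upper data bounds on velocity

D2 = np.diff(np.eye(n), 2, axis=0)   # second-difference (roughness) operator

def roughness(v):
    r = D2 @ v
    return r @ r                     # quadratic objective

constraints = [
    {"type": "ineq", "fun": lambda v: np.diff(v)},  # monotonically increasing
    {"type": "ineq", "fun": lambda v: v - lo},      # stay above lower bound
    {"type": "ineq", "fun": lambda v: hi - v},      # stay below upper bound
]
res = minimize(roughness, 0.5 * (lo + hi), constraints=constraints,
               method="SLSQP")
v = res.x                            # smoothest profile consistent with bounds
```

Because the objective is quadratic and every constraint is linear, this is exactly a quadratic program; a dedicated QP solver would exploit that structure directly.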
We discuss the use of smoothing splines (SS) and least squares splines (LSS) in nonparametric regression on geomagnetic data. The distinction between smoothing splines and least squares splines is outlined, and it is suggested that in most cases the smoothing spline is a preferable function estimate. However, when large data sets are involved, the smoothing spline may require a prohibitive amount of computation; the alternative often put forward when moderate or heavy smoothing is desired is the least squares spline. This may not be capable of modeling the data adequately since the smoothness of the resulting function can be controlled only by the number and position of the knots. The computational efficiency of the least squares spline may be retained, and its principal disadvantage overcome, by adding a penalty term in the square of the second derivative to the minimized functional. We call this modified form a penalized least squares spline (denoted by PS throughout this work), and illustrate its use in the removal of secular trends in long observatory records of geomagnetic field components. We may compare the effects of smoothing splines, least squares splines, and penalized least squares splines by treating them as equivalent variable-kernel smoothers. As Silverman has shown, the kernel associated with the smoothing spline is symmetric and is highly localized with small negative sidelobes. The kernel for the least squares spline with the same fit to the data has large oscillatory sidelobes that extend far from the central region; it can be asymmetric even in the middle of the interval. For large numbers of data the penalized least squares spline can achieve essentially identical performance to that of a smoothing spline, but at a greatly reduced computational cost. The penalized spline estimation technique has potential widespread applicability in the analysis of geomagnetic and paleomagnetic data.
It may be used for the removal of long term trends in data, when either the trend or the residual is of interest.
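The fidelity-versus-smoothness trade that underlies all three estimators can be shown with a discrete analogue of the penalized idea (a sketch, not the authors' spline code): the integrated squared second derivative is replaced by a squared second-difference penalty on the fitted values themselves, which reduces trend removal to a single linear solve.

```python
import numpy as np

# Fit a smooth trend x to noisy samples y by solving
#   minimize ||y - x||^2 + lam * ||D2 x||^2,
# where D2 is the second-difference operator; lam plays the role of the
# spline smoothing parameter.  The data here are synthetic.
rng = np.random.default_rng(0)
n = 200
t = np.linspace(0.0, 1.0, n)
trend = 1.0 + 0.5 * t                      # hypothetical secular trend
y = trend + 0.1 * rng.standard_normal(n)   # synthetic "observatory" record

D2 = np.diff(np.eye(n), 2, axis=0)
lam = 1e4
x = np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

residual = y - x                           # de-trended series
```

Because the penalty null space contains all straight lines, a linear secular trend passes through undistorted while short-period noise is suppressed, which is the behavior wanted when either the trend or the residual is of interest.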
We investigate the power spectra and cross spectra derived from the three components of the vector magnetic field measured on a straight horizontal path above a statistically stationary source. All of these spectra, which can be estimated from the recorded time series, are related to a single two-dimensional power spectral density via integrals that run in the across-track direction in the wavenumber domain. Thus the measured spectra must obey a number of strong constraints: for example, the sum of the power spectral densities of the two horizontal field components equals the power spectral density of the vertical component at every wavenumber, and the phase spectrum between the vertical and along-track components is always π/2. These constraints provide powerful checks on the quality of the measured data; if they are violated, measurement or environmental noise should be suspected. The noise due to errors of orientation has a clear characteristic; both the power and phase spectra of the components differ from those of crustal signals, which makes orientation noise easy to detect and to quantify. The spectra of the crustal signals can be inverted to obtain information about the cross-track structure of the field. We illustrate these ideas using a high-altitude Project Magnet profile flown in the southeastern Pacific Ocean.
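The phase constraint is easy to verify numerically. The following synthetic check (invented amplitudes, not survey data) uses the limiting case of a purely two-dimensional source: the vertical and along-track components then form a Hilbert-transform pair, so their cross-spectrum phase is ±π/2 at every wavenumber (the sign depends on the FFT convention), and with no cross-track component their power spectra are equal, consistent with the sum constraint.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1024
s = np.arange(n)
bx = np.zeros(n)                             # along-track component
bz = np.zeros(n)                             # vertical component
for k in range(3, 40):                       # a band of crustal wavenumbers
    a = rng.standard_normal()                # random source amplitude
    bx += a * np.cos(2 * np.pi * k * s / n)
    bz += a * np.sin(2 * np.pi * k * s / n)  # Hilbert pair of the bx term

Fx, Fz = np.fft.rfft(bx), np.fft.rfft(bz)
band = slice(3, 40)
phase = np.angle(Fz[band] * np.conj(Fx[band]))  # cross-spectrum phase
```

A measured profile whose estimated phase spectrum departs systematically from quadrature in this way would, per the argument above, point to noise rather than crustal signal.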
A two- or three-dimensional treatment of magnetic anomaly data generally requires that the data be interpolated onto a regular grid, especially when the analysis involves transforming the data into the Fourier domain. We present an algorithm for interpolation and downward continuation of magnetic anomaly data that works within a statistical framework. We assume that the magnetic anomaly is a realization of a random stationary field whose power spectral density (PSD) we can estimate; by using the PSD the algorithm produces an array incorporating as much of the information contained in the data as possible while avoiding the introduction of unnecessary complexity. The algorithm has the added advantage of estimating the uncertainty of every interpolated value. Downward continuation is a natural extension of the statistical algorithm. We apply our method to the interpolation of magnetic anomalies from the region around the 95.5°W Galapagos propagating rift onto a regular grid and also to the downward continuation of these data to a depth of 2200 m. We also note that the observed PSD of the Galapagos magnetic anomalies has a minimum at low wavenumbers and discuss how this implies that intermediate wavelength (65 km < λ < 1500 km) magnetic anomalies are weaker than suggested by one-dimensional spectral analysis of single profiles.
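The covariance machinery behind such a statistical interpolator can be sketched in one dimension (this is a generic Wiener/Gaussian-process-style illustration, not the paper's algorithm; the Gaussian covariance stands in for one obtained by Fourier transforming an estimated PSD). One linear solve yields both the interpolated values on a regular grid and an uncertainty for each of them.

```python
import numpy as np

def cov(d, sigma2=1.0, L=0.1):
    # Stationary covariance of the assumed random field; the Gaussian
    # shape and correlation length L here are illustrative choices.
    return sigma2 * np.exp(-0.5 * (d / L) ** 2)

rng = np.random.default_rng(2)
xd = np.sort(rng.uniform(0.0, 1.0, 15))   # scattered observation sites
yd = np.sin(2 * np.pi * xd)               # synthetic anomaly values
xg = np.linspace(0.0, 1.0, 50)            # regular output grid

K = cov(np.abs(xd[:, None] - xd[None, :])) + 1e-8 * np.eye(xd.size)
k = cov(np.abs(xg[:, None] - xd[None, :]))
w = np.linalg.solve(K, yd)
y_grid = k @ w                            # interpolated values on the grid
# Predicted variance of each interpolated value (prior minus explained).
var = cov(0.0) - np.sum(k * np.linalg.solve(K, k.T).T, axis=1)
```

Far from any observation the variance returns to the prior level, while near the data it collapses, which is the sense in which the grid carries only the information the data actually contain.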
It is shown that the procedure of summing (stacking) together a number of surface magnetic anomaly records does not significantly improve the ultimate resolution of short events in the geomagnetic polarity sequence. The limit of resolution (shown elsewhere to be about 3 km for surface data) is determined by the presence of signals not originating in the crust. Single profiles will not usually attain this optimal resolution, however, because of geological irregularities in the magnetic layer. Therefore, stacking offers improvement at longer wavelengths, but not all unwanted effects are properly suppressed: for example, (a) bathymetric features can cause coherent signals in closely spaced surveys and stacking will amplify them; (b) the stacking of phase-shifted anomalies causes smoothing rather than sharpening of transitions.
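Point (b) can be made concrete with a few lines of synthetic arithmetic (an illustration of the averaging effect only, not a simulation of real anomaly records): averaging copies of a sharp polarity transition that are misaligned from profile to profile smooths the edge instead of sharpening it.

```python
import numpy as np

n = 200
# Ideal sharp polarity transition recorded on each profile.
edge = np.where(np.arange(n) < n // 2, -1.0, 1.0)
shifts = [-6, -3, 0, 3, 6]        # hypothetical profile-to-profile offsets
stack = np.mean([np.roll(edge, s) for s in shifts], axis=0)

# Samples where the stacked record is neither -1 nor +1 mark the
# broadened transition zone.
width = np.count_nonzero(np.abs(stack) < 1.0)
```

The stacked transition ramps over many samples where each individual record switched in one, so resolution of short events is degraded, not improved.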
We present a statistical analysis of magnetic fields simulated by the Glatzmaier-Roberts dynamically consistent dynamo model. For four simulations with distinct boundary conditions, means, standard deviations, and probability functions permit an evaluation based on existing statistical paleosecular variation (PSV) models. Although none closely fits the statistical PSV models in all respects, some simulations display characteristics of the statistical PSV models in individual tests. We also find that nonzonal field statistics do not necessarily reflect heat flow conditions at the core-mantle boundary. Multitaper estimates of power and coherence spectra allow analysis of time series of single, or groups of, spherical harmonic coefficients representing the magnetic fields of the dynamo simulations outside the core. Sliding window analyses of both power and coherence spectra from two of the simulations show that a 100 kyr averaging time is necessary to realize stationary statistics of their nondipole fields and that a length of 350 kyr is not long enough to fully characterize their dipole fields. Spectral analysis provides new insight into the behavior and interaction of the dominant components of the simulated magnetic fields, the axial dipole and quadrupole. Although we find spectral similarities between several reversals, there is no evidence of signatures that can be conclusively associated with reversals or excursions. We test suggestions that during reversals there is increased coupling between groups of spherical harmonic components. Despite evidence of coupling between antisymmetric and symmetric spherical harmonics in one simulation, we conclude that it is rare and not directly linked to reversals. In contrast to the reversal model of R. T. Merrill and P. L. McFadden, we demonstrate that the geomagnetic power in the dipole part of the dynamo simulations is either relatively constant or fluctuates synchronously with that of the nondipole part and that coupling between antisymmetric and symmetric components occurs when the geomagnetic power is high.
The standard least squares estimation of the average magnetization of a seamount is generally regarded as unsatisfactory. The reason for its poor performance is, I assert, the strong correlation and uneven magnitudes of the unfitted noise component of the magnetic anomaly. Least squares estimation depends for its efficiency and freedom from bias upon the statistical independence and common distribution of the random contamination. The primary contributor to the unfitted field is the part due to nonuniformity of the magnetic material within the seamount. If the magnetic heterogeneities are modeled as a random stationary vector field of known functional form, it is possible to calculate the covariance matrix of the magnetic field noise. Then, to the extent that the statistical description of the magnetic sources is valid, the correlation can be properly accounted for in the least squares procedure by weighting the observations to form an equilibrated set with random components that are independent and identically distributed. The main use of the average magnetization vector is in the determination of the associated paleopole position: I show how the least squares estimates of the magnetization vector and its uncertainty are mapped into a confidence region for the pole position. Application of this theory to a young seamount in the South Pacific is highly satisfactory. The model stationary vector field chosen for the magnetization is one with isotropic directional behavior and a very short length scale of correlation. It is found that the equilibrated residuals are far less correlated than those of the conventional procedure and the misfits are uniformly distributed throughout the equilibrated data. The estimated paleopole position is within a few degrees of the north geographic pole, which is the expected location because the seamount is geologically young. Traditional least squares fitting gave a pole at latitude 60°N.
The 95% confidence zone associated with the paleopole position has semiaxes measuring 16° by 13°, and thus the uncertainty is not much larger than those of standard paleomagnetic work.
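The equilibration step described above is, in essence, generalized least squares: when the noise covariance C is known, transforming the system with the inverse Cholesky factor of C produces residuals that are independent with common variance, after which ordinary least squares applies. A generic numerical sketch (synthetic design matrix and an invented exponential covariance, not the seamount data):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 60, 3
A = rng.standard_normal((n, p))            # design matrix (synthetic)
x_true = np.array([1.0, -2.0, 0.5])

# Hypothetical correlated-noise covariance with a short correlation length.
i = np.arange(n)
C = np.exp(-np.abs(i[:, None] - i[None, :]) / 3.0)
L = np.linalg.cholesky(C)
y = A @ x_true + 0.01 * (L @ rng.standard_normal(n))

# Equilibrate: multiply by L^{-1} so the noise becomes iid, then solve
# by ordinary least squares on the whitened system.
Aw = np.linalg.solve(L, A)
yw = np.linalg.solve(L, y)
x_hat, *_ = np.linalg.lstsq(Aw, yw, rcond=None)
r_w = yw - Aw @ x_hat                       # equilibrated residuals
```

The whitened residuals are uncorrelated and of uniform size, which is the property the paper checks to validate its statistical source model.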
Confidence tables and other parameters are provided for a misfit statistic based upon the sum of the absolute discrepancies between a model and observation. This one-norm misfit is compatible with the application of linear programming in geophysical modeling, a technique of considerable value when inequality constraints are to be applied in the mathematical model.
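Why the one-norm misfit pairs naturally with linear programming: splitting each residual into nonnegative parts r⁺ and r⁻ makes the objective linear, and inequality constraints on the model parameters slot in directly. A toy illustration (invented data and a hypothetical nonnegativity constraint on the slope, not the paper's tables):

```python
import numpy as np
from scipy.optimize import linprog

# Fit y ~ a + b*t minimizing sum_i |y_i - (a + b t_i)|, with b >= 0.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 1.0, 2.1, 2.9, 4.2])
n = t.size

# Variables: [a, b, r+_1..r+_n, r-_1..r-_n]; residual = r+ - r-.
c = np.concatenate([np.zeros(2), np.ones(2 * n)])  # minimize sum(r+ + r-)
A_eq = np.zeros((n, 2 + 2 * n))
A_eq[:, 0] = 1.0                     # intercept a
A_eq[:, 1] = t                       # slope b
A_eq[:, 2:2 + n] = np.eye(n)         # r+ part of each residual
A_eq[:, 2 + n:] = -np.eye(n)         # r- part of each residual
bounds = [(None, None), (0, None)] + [(0, None)] * (2 * n)

res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
a_hat, b_hat = res.x[:2]
one_norm_misfit = res.fun            # the statistic the tables describe
```

At an optimal vertex of this LP the fitted line interpolates at least two data points, a characteristic feature of one-norm fitting.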
A new statistical model is proposed for the geomagnetic secular variation over the past 5 m.y. Unlike previous models, which have concentrated upon particular kinds of paleomagnetic observables, such as VGP or field direction, the new model provides a general probability density function from which the statistical distribution of any set of paleomagnetic measurements can be deduced. The spatial power spectrum of the present-day nondipole field is consistent with a white source near the core-mantle boundary with Gaussian distribution. After a suitable scaling, the spherical harmonic coefficients may be regarded as statistical samples from a single giant Gaussian process; this is our model of the nondipole field. Assuming that this characterization holds for the fields of the past, we can combine it with an arbitrary statistical description of the dipole. We compute the corresponding probability density functions and cumulative distribution functions for declination and inclination that would be observed at any site on the surface of the Earth. Global paleomagnetic data spanning the past 5 m.y. are used to constrain the free parameters of the model, i.e., those giving the dipole part of the field. 
The final model has these properties: (1) with two exceptions, each Gauss coefficient is independently normally distributed with zero mean and standard deviation for the nondipole terms commensurate with a white source at the core surface; (2) the exceptions are the axial dipole g_{1}^{0} and axial quadrupole g_{2}^{0} terms; the axial dipole distribution is bimodal and symmetric, resembling a combination of two normal distributions with centers close to the present-day value and its sign-reversed counterpart; (3) the standard deviations of the nonaxial dipole terms g_{1}^{1} and h_{1}^{1} and of the magnitude of the axial dipole are all about 10% of the present-day g_{1}^{0} component; and (4) the axial quadrupole reverses sign with the axial dipole and has a mean magnitude that is 6% of the mean axial dipole magnitude. The advantage of a model specified in terms of the spherical harmonic coefficients is that it is a complete statistical description of the geomagnetic field, capable of simultaneously satisfying many known properties of the field. Predictions about any measured field elements may be made to see if they satisfy the available data.
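A model of this kind is straightforward to sample from. The sketch below draws Gauss coefficients with the qualitative properties listed above; the numerical values (a 30 μT present-day axial dipole, 10% scatter, 6% quadrupole) are illustrative stand-ins, not the fitted parameters of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 100_000
g10_now = 30.0                  # ~present-day |g10|, microtesla (illustrative)
sigma = 0.1 * g10_now           # "about 10%" scatter, per property (3)
polarity = np.where(rng.random(N) < 0.5, -1.0, 1.0)

# (2) Bimodal, symmetric axial dipole: two normals at +/- the modern value.
g10 = polarity * (g10_now + sigma * rng.standard_normal(N))
# (4) Axial quadrupole reverses sign with the dipole, ~6% mean magnitude
# (its own scatter here is an invented 0.5 microtesla).
g20 = polarity * 0.06 * g10_now + 0.5 * rng.standard_normal(N)
# (1) All remaining coefficients: independent zero-mean normals whose
# standard deviations would follow the white core-surface source.
g11 = sigma * rng.standard_normal(N)
```

From such samples one can build the predicted declination and inclination distributions at any site and compare them with the paleomagnetic data, which is how the model's free parameters are constrained.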
We address the inverse problem of finding the smallest envelope containing all velocity profiles consistent with a finite set of imprecise τ(p) data from a spherical earth. Traditionally, the problem has been attacked after mapping the data relations into relations on an equivalent flat earth. Of the two contemporary direct methods for finding bounds on velocities in the flat earth consistent with uncertain τ(p) data, a nonlinear (NL) approach descended from the Herglotz-Wiechert inversion and a linear programming (LP) approach, only NL has been used to solve the spherical earth problem. On the basis of the finite collection of τ(p) measurements alone, NL produces an envelope that is too narrow: there are numerous physically acceptable models that satisfy the data and violate the NL bounds, primarily because the NL method requires continuous functions as bounds on τ(p) and thus data must be fabricated between measured values by some sort of interpolation. We use the alternative LP approach, which does not require interpolation, to place optimal bounds on the velocity in the core. The resulting velocity corridor is disappointingly wide, and we therefore seek reasonable physical assumptions about the earth to reduce the range of permissible models. We argue from thermodynamic relations that P wave velocity decreases with distance from the earth's center within the outer core and quite probably within the inner core and lower mantle. We also show that the second derivative of velocity with respect to radius is probably not positive in the core. The first radial derivative constraint is readily incorporated into LP. The second derivative constraint is nonlinear and cannot be implemented exactly with LP; however, geometrical arguments enable us to apply a weak form of the constraint without any additional computation. LP inversions of core τ(p) data using the first radial derivative constraint give new, extremely tight bounds on the P wave velocity in the core. 
The weak second derivative constraint improves them slightly.
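The LP bounding strategy, with the monotonicity constraint folded in, can be sketched on a toy discretized model (the "data" constraints and numbers below are invented, not core τ(p) measurements): to trace the envelope, minimize and then maximize the model value at each target node subject to the linear data inequalities and the first radial derivative constraint, solving one LP per bound.

```python
import numpy as np
from scipy.optimize import linprog

n = 10                                        # model nodes, center outward
# Hypothetical linear data constraints G v ~ d with uncertainty eps.
G = np.vstack([np.ones(n) / n,                # an "average velocity" datum
               np.linspace(1.0, 0.0, n) / n]) # a center-weighted datum
d = np.array([9.0, 5.5])
eps = 0.1

A_ub = np.vstack([G, -G])                     # |G v - d| <= eps as two-sided
b_ub = np.concatenate([d + eps, -(d - eps)])
# First-derivative constraint: velocity nonincreasing with radius index.
D = np.diff(np.eye(n), axis=0)
A_ub = np.vstack([A_ub, D])
b_ub = np.concatenate([b_ub, np.zeros(n - 1)])

target = np.zeros(n)
target[n // 2] = 1.0                          # bound the middle node
bnds = [(0.0, 20.0)] * n
lo = linprog(target, A_ub=A_ub, b_ub=b_ub, bounds=bnds, method="highs")
hi = linprog(-target, A_ub=A_ub, b_ub=b_ub, bounds=bnds, method="highs")
v_min, v_max = lo.fun, -hi.fun                # envelope at that node
```

Repeating the pair of solves at every node traces the full velocity corridor; tightening or relaxing the derivative constraint changes the corridor width directly.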
We have measured the Newtonian gravitational constant using the ocean as an attracting mass and a research submersible as a platform for gravity measurements. Gravitational acceleration was measured along four continuous profiles to depths of 5000 m with a resolution of 0.1 mGal. These data, combined with satellite altimetry, sea surface and seafloor gravity measurements, and seafloor bathymetry, yield an estimate of G = (6.677 ± 0.013) × 10^{−11} m^{3} s^{−2} kg^{−1}; the fractional uncertainty is 2 parts in 1000. Within this accuracy, the submarine value for G is consistent with laboratory determinations.