We present an R package for computing univariate power spectral density estimates with little or no tuning effort. We employ sine multitapers, allowing the number to vary with frequency in order to reduce mean square error, the sum of squared bias and variance, at each point. The approximate criterion of Riedel and Sidorenko (1995) is modified to prevent runaway averaging that otherwise occurs when the curvature of the spectrum goes to zero. An iterative procedure refines the number of tapers employed at each frequency. The resultant power spectra possess significantly lower variances than those of traditional, non-adaptive estimators. The sine tapers also provide useful spectral leakage suppression. Resolution and uncertainty can be estimated from the number of degrees of freedom (twice the number of tapers). This technique is particularly suited to long time series, because it demands only one numerical Fourier transform and requires no costly additional computation of taper functions, such as the Slepian functions. It also avoids the degradation of low-frequency performance associated with record segmentation in Welch{\textquoteright}s method. Above all, the adaptive process relieves the user of the need to set a tuning parameter, such as a time-bandwidth product or segment length, that fixes the frequency resolution for the entire frequency interval; instead it provides frequency-dependent spectral resolution tailored to the shape of the spectrum itself. We demonstrate the method by applying it to continuous borehole strainmeter data from a station in the Plate Boundary Observatory, namely station B084 at the Pi{\~n}on Flat Observatory in southern California. The example illustrates how psd elegantly handles spectra with large dynamic range and mixed-bandwidth features, characteristics typically found in geophysical datasets. (C) 2013 Elsevier Ltd. All rights reserved.

}, keywords = {Adaptive spectrum estimation, bias, Borehole strainmeters, Multitaper spectral analysis, multitapers, noise-levels, Observatory, observatory borehole strainmeters, plate boundary, Power spectral density estimation, R language, Sine}, isbn = {0098-3004}, doi = {10.1016/j.cageo.2013.09.015}, url = {We examined the transverse electric mode of 2D magnetotelluric sounding for a simple system comprising a laterally variable thin sheet over an insulator terminated by a perfectly conducting base. We found, by asymptotic analysis and a numerical example, that the phase of the c response, or that of the corresponding entry in the impedance tensor, is completely unrestricted. This behavior is unlike that of 1D systems or transverse magnetic mode induction, where the phase is confined to a single quadrant.

}, isbn = {0016-8033}, doi = {10.1190/geo2013-0325.1}, url = {Numerical solution of global geomagnetic induction problems in two and three spatial dimensions can be conducted with commercially available, general-purpose, scripted, finite-element software. We show that FlexPDE is capable of solving a variety of global geomagnetic induction problems. The models treated can include arbitrary electrical conductivity of the core and mantle, and arbitrary spatial structure and time behaviour of the primary magnetic field. A thin surface layer of laterally heterogeneous conductivity, representing the oceans and crust, may be represented by a boundary condition at the Earth-space interface. We describe a numerical test, or validation, of the program by comparing its output to analytic and semi-analytic solutions for several electromagnetic induction problems: (1) concentric spherical shells representing a layered Earth in a time-varying, uniform, external magnetic field, (2) eccentrically nested conductive spheres in the same field and (3) homogeneous spheres or cylinders, initially at rest, then rotating at a steady rate in a constant, uniform, external field. Calculations are performed in both the time and frequency domains, and in both 2-D and 3-D computational meshes, with adaptive mesh refinement. Root-mean-square accuracies of better than 1 per cent are achieved in all cases. A unique advantage of our technique is the ability to model Earth rotation in both the time and the frequency domain, which is especially useful for simulating satellite data.

}, keywords = {constraints, Earth, eccentrically nested spheres, Electrical properties, geomagnetic induction, heterogeneity, magnetic field, mantle electrical-conductivity, mid-mantle, Numerical solutions, oceans, responses, satellite induction, Satellite magnetics}, isbn = {0956-540X}, doi = {10.1111/j.1365-246X.2011.05255.x}, url = {Magnetotelluric surveys on the seafloor have become an important part of marine geophysics in recent years. The distorting effects of topographic relief on the electromagnetic fields can be far-reaching, but local terrain is also important. Thus, computational techniques that can treat a large area containing fine-scale topography could find widespread application. We describe a new solution to the problem based on a well-established theory of electromagnetic induction in thin sheets. The procedure requires taking the Fourier transform of the integral equations derived by Dawson and Weaver in 1979, and by McKirdy, Weaver and Dawson in 1985. The equations in the transformed electric field are solved iteratively by a new technique. We prove the new iterative procedure is always convergent, whereas the original scheme diverges when the grid spacing of the discretization is small. We also give a means of correcting for distant features that need not be specified in as great detail. Preliminary tests confirm the new process is very efficient and that topographic data sets of several million points will be handled with ease.

}, keywords = {conductivity, Earth, electromagnetic induction, electromagnetics, Fourier analysis, Magnetotelluric, mantle, marine, Numerical solutions, polarization induction, surface, thin half-sheets}, isbn = {0956-540X}, doi = {10.1111/j.1365-246X.2011.05350.x}, url = {A closed-form solution is given for a 2-D, transverse electric mode, magnetotelluric (MT) problem. The model system consists of a finite vertical thin conductor with variable integrated conductivity over a perfectly conducting base. A notable property of the solution is that the frequency response possesses a single pole in the complex plane. Systems with finitely many resonances play a central role in the 1-D MT inverse problem based on finite data sets, but until now, no 2-D system of this kind was known. The particular model is shown to be just one of a large class of thin conductors with the same property, and further examples are given. The solutions of the induction problem for members of this family can often be written in compact closed form, making them the simplest known solutions to the 2-D MT problem.

}, keywords = {Electromagnetic theory, geomagnetic induction, Magnetotelluric}, isbn = {0956-540X}, doi = {10.1111/j.1365-246X.2011.05091.x}, url = {The practical 2-D magnetotelluric inverse problem seeks to determine the shallow-Earth conductivity structure using finite and uncertain data collected on the ground surface. We present an approach based on using PLTMG (Piecewise Linear Triangular MultiGrid), a special-purpose code for optimization with second-order partial differential equation (PDE) constraints. At each frequency, the electromagnetic field and conductivity are treated as unknowns in an optimization problem in which the data misfit is minimized subject to constraints that include Maxwell{\textquoteright}s equations and the boundary conditions. Within this framework it is straightforward to accommodate upper and lower bounds or other conditions on the conductivity. In addition, as the underlying inverse problem is ill-posed, constraints may be used to apply various kinds of regularization. We discuss some of the advantages and difficulties associated with using PDE-constrained optimization as the basis for solving large-scale nonlinear geophysical inverse problems. Combined transverse electric and transverse magnetic complex admittances from the COPROD2 data are inverted. First, we invert penalizing size and roughness, giving solutions that are similar to those found previously. In a second example, conventional regularization is replaced by a technique that imposes upper and lower bounds on the model. In both examples the data misfit is better than that obtained previously, without any increase in model complexity.

}, keywords = {algorithm, existence, geomagnetic induction, induction, Inverse theory, Magnetotelluric, models, Non-linear differential equations, Numerical solutions, partial-differential-equations, system least-squares}, isbn = {0956-540X}, doi = {10.1111/j.1365-246X.2010.04895.x}, url = {Weidelt and Kaikkonen showed that in the transverse magnetic (TM) mode of magnetotellurics it is not always possible to match exactly the 2-D response at a single site with a 1-D model, although a good approximation usually seems possible. We give a new elementary example of this failure. We show for the first time that the transverse electric (TE) mode responses can also be impossible to match with a 1-D response, and that the deviations can be very large.

}, keywords = {electromagnetic induction, Electromagnetic theory, existence, geomagnetic induction, impedances, inverse problem, Magnetotelluric}, isbn = {0956-540X}, doi = {10.1111/j.1365-246X.2010.04512.x}, url = {The spectral analysis of geological and geophysical data has been a fundamental tool in understanding Earth{\textquoteright}s processes. We present a Fortran 90 library for multitaper spectrum estimation, a state-of-the-art method that has been shown to outperform the standard methods. The library goes beyond power spectrum estimation and extracts more information from the data for the user, including confidence intervals, diagnostics for single frequency periodicities, and coherence and transfer functions for multivariate problems. In addition, the sine multitaper method can also be implemented. The library presented here provides the tools needed in multiple fields of the Earth sciences for the analysis of data, as is evident from various examples. (C) 2008 Elsevier Ltd. All rights reserved.

}, keywords = {analysis, bias, coherence, core, deconvolution, earthquake, error, Fortran 90 library, free oscillations, multitaper spectrum, noise, ocean, spectral analysis, Transfer function, waves}, isbn = {0098-3004}, doi = {10.1016/j.cageo.2008.06.007}, url = {We describe a new technique for implementing the constraints on magnetic fields arising from two hypotheses about the fluid core of the Earth, namely the frozen-flux hypothesis and the hypothesis that the core is in magnetostrophic force balance with negligible leakage of current into the mantle. These hypotheses lead to time-independence of the integrated flux through certain {\textquoteright}null-flux patches{\textquoteright} on the core surface, and to time-independence of their radial vorticity. Although the frozen-flux hypothesis has received attention before, constraining the radial vorticity has not previously been attempted. We describe a parametrization and an algorithm for preserving the topology of radial magnetic fields at the core surface while allowing morphological changes. The parametrization is a spherical triangle tessellation of the core surface. Topology with respect to a reference model (based on data from the Oersted satellite) is preserved as models at different epochs are perturbed to optimize the fit to the data; the topology preservation is achieved by the imposition of inequality constraints on the model, and the optimization at each iteration is cast as a bounded value least-squares problem. For epochs 2000, 1980, 1945, 1915 and 1882 we are able to produce models of the core field which are consistent with flux and radial vorticity conservation, thus providing no observational evidence for the failure of the underlying assumptions. These models are a step towards the production of models which are optimal for the retrieval of frozen-flux velocity fields at the core surface.

}, keywords = {boundary, flow, frozen flux, geomagnetic secular variation, geomagnetism, mantle, perfectly conducting core, secular variation, surface}, isbn = {0956-540X}, doi = {10.1111/j.1365-246X.2007.03526.x}, url = {We develop a method to obtain confidence intervals of earthquake source parameters, such as stress drop, seismic moment and corner frequency, from single station measurements. We use the idea of jackknife variance combined with a multitaper spectrum estimation to obtain the confidence regions. The approximately independent spectral estimates provide an ideal case to perform jackknife analysis. Given the particular properties of the problem to solve for source parameters, including high dynamic range, non-negativity, non-linearity, etc., a log transformation is necessary before performing the jackknife analysis. We use a Student{\textquoteright}s t distribution after transformation to obtain accurate confidence intervals. Even without the distribution assumption, we can generate typical standard deviation confidence regions. We apply this approach to four earthquakes recorded at 1.5 and 2.9 km depth at Cajon Pass, California. It is necessary to propagate the errors from all unknowns to obtain reliable confidence regions. From the example, it is shown that a 50 per cent error in stress drop is not unrealistic, and even higher errors are expected if velocity structure and location errors are present. An extension to multiple station measurement is discussed.

}, keywords = {apparent stress, california, confidence interval, deep borehole recordings, depth, earthquake source parameters, energy, error, fault, jackknife, regression, seismograms, spectral-analysis, stress drop}, isbn = {0956-540X}, doi = {10.1111/j.1365-246X.2006.03257.x}, url = {We examine the nonlinear inverse problem of electromagnetic induction to recover electrical conductivity. As this is an ill-posed problem based on inaccurate data, there is a critical need to find the reliable features of the models of electrical conductivity. We present a method for obtaining bounds on Earth{\textquoteright}s average conductivity that all conductivity profiles must obey. Our method is based completely on optimization theory for an all-at-once approach to inverting frequency-domain electromagnetic data. The forward modeling equations are constraints in an optimization problem solving for the electric fields and the conductivity simultaneously. There is no regularization required to solve the problem. The computational framework easily allows additional inequality constraints to be imposed, allowing us to further narrow the bounds. We draw conclusions from a global geomagnetic depth sounding data set and compare with laboratory results, inferring temperature and water content through published Boltzmann-Arrhenius conductivity models. If the upper mantle is assumed to be volatile free, we find it has an average temperature of 1409-1539 {\textdegree}C. For the top 1000 km of the lower mantle, we find an average temperature of 1849-2008 {\textdegree}C. These are in agreement with generally accepted mantle temperatures. Our conclusions about the water content of the transition zone disagree with previous research. With our bounds on conductivity, we calculate that a transition zone consisting entirely of wadsleyite has \< 0.27 wt.\% water, and as we add in a fraction of ringwoodite, the upper bound on water content decreases proportionally.
This water content is less than the 0.4 wt.\% water required for melt or pooling at the 410 km seismic discontinuity. Published by Elsevier B.V.

}, keywords = {algorithm, conductivity of the mantle, electrical, electrical-conductivity, electromagnetic induction, existence, geomagnetic induction, geophysical inverse theory, ideal bodies, inverse problem, lower mantle, optimization, transition-zone, water, water in the mantle}, isbn = {0031-9201}, doi = {10.1016/j.pepi.2006.09.001}, url = {The power spectral density of geophysical signals provides information about the processes that generated them. We present a new approach to determine power spectra based on Thomson{\textquoteright}s multitaper analysis method. Our method reduces the bias due to the curvature of the spectrum close to the frequency of interest. Even while maintaining the same resolution bandwidth, bias is reduced in areas where the power spectrum is significantly quadratic. No additional sidelobe leakage is introduced. In addition, our methodology reliably estimates the derivatives (slope and curvature) of the spectrum. The extra information gleaned from the signal is useful for parameter estimation or to compare different signals.

}, keywords = {coherence, curvature, derivatives, frequency, harmonic-analysis, inverse problem, multitaper spectrum, noise, sea beam, seismograms, waves}, isbn = {0956-540X}, doi = {10.1111/j.1365-246X.2007.03592.x}, url = {We present two approaches to invert geophysical measurements and estimate subsurface properties and their uncertainties when little is known a priori about the size of the errors associated with the data. We illustrate these approaches by inverting first-arrival traveltimes of seismic waves measured in a vertical well to infer the variation of compressional slowness in depth. First, we describe a Bayesian formulation based on probability distributions that define prior knowledge about the slowness and the data errors. We use an empirical Bayes approach, where hyperparameters are not well known ahead of time (e.g., the variance of the data errors) and are estimated from their most probable value, given the data. The second approach is a non-Bayesian formulation that we call spectral, in the sense that it uses the power spectral density of the traveltime data to constrain the inversion (e.g., to estimate the variance of the data errors). In the spectral approach, we vary assumptions made about the characteristics of the slowness signal and evaluate the resulting slowness estimates and their uncertainties. This approach is computationally simple and starts from a few assumptions. These assumptions can be checked during the analysis. On the other hand, it requires evenly spaced traveltime measurements, and it cannot be extended easily (e.g., to data that have gaps). In contrast, the Bayesian framework is based on a general theory that can be generalized immediately, but it is more involved computationally. 
Despite the conceptual and practical differences, we find that the two approaches give the same results when they start from the same assumptions: The allegiance to a Bayesian or non-Bayesian formulation matters less than what one is willing to assume when solving the inverse problem.

}, keywords = {empirical bayes}, isbn = {0016-8033}, doi = {10.1190/1.2194516}, url = {We analyze the problem of reliably estimating uncertainties of the earthquake source spectrum and related source parameters using Empirical Green Functions (EGF). We take advantage of the large dataset available from 10 seismic stations at hypocentral distances (10 km \< d \< 50 km) to average spectral ratios of the 2001 M5.1 Anza earthquake and 160 nearby aftershocks. We estimate the uncertainty of the average source spectrum of the M5.1 target earthquake by performing propagation of errors, which, due to the large number of EGFs used, is significantly smaller than that obtained using a single EGF. Our approach provides estimates of both the earthquake source spectrum and its uncertainties, plus confidence intervals on related source parameters such as radiated seismic energy or apparent stress, allowing the assessment of statistical significance. This is of paramount importance when comparing different sized earthquakes and analyzing source scaling of the earthquake rupture process. Our best estimate of radiated energy for the target earthquake is 1.24{\texttimes}10$^{11}$ J with 95\% confidence intervals (0.73{\texttimes}10$^{11}$, 2.28{\texttimes}10$^{11}$). The estimated apparent stress of 0.33 (0.19, 0.59) MPa is relatively low compared to previous estimates from smaller earthquakes (1 MPa) in the same region.

}, keywords = {16 Structural geology, 19 Seismology, active faults, deformation, earthquakes, energy, faults, neotectonics, seismic, seismicity, seismotectonics, symposia, tectonics}, isbn = {978-0-87590-435-1}, doi = {10.1029/170gm08}, url = {http://dx.doi.org/10.1029/170GM08}, author = {Prieto, G. A. and Parker, R.L. and Vernon, F. L. and Shearer, P. M. and Thomson, D. J.}, editor = {Abercrombie, Rachel E. and McGarr, Art and Kanamori, Hiroo and Di Toro, Giulio} } @article {28010, title = {Assigning uncertainties in the inversion of NMR relaxation data}, journal = {Journal of Magnetic Resonance}, volume = {174}, number = {2}, year = {2005}, note = {n/a}, month = {Jun}, pages = {314-324}, type = {Article}, abstract = {Recovering the relaxation-time density function (or distribution) from NMR decay records requires inverting a Laplace transform based on noisy data, an ill-posed inverse problem. An important objective in the face of the consequent ambiguity in the solutions is to establish what reliable information is contained in the measurements. To this end we describe how upper and lower bounds on linear functionals of the density function, and ratios of linear functionals, can be calculated using optimization theory. Those bounded quantities cover most of those commonly used in geophysical NMR, such as porosity, T$_2$ log-mean, and bound fluid volume fraction, and include averages over any finite interval of the density function itself. In the theory presented, statistical considerations enter to account for the presence of significant noise in the signal, but not in a prior characterization of density models. Our characterization of the uncertainties is conservative and informative; it will have wide application in geophysical NMR and elsewhere. {\copyright} 2005 Elsevier Inc. All rights reserved.

}, keywords = {integral-equations, multiexponential relaxation, times}, isbn = {1090-7807}, doi = {10.1016/j.jmr.2005.03.002}, url = {Stacks of globally distributed relative paleointensity records from sediment cores are used to study temporal variations in the strength of the geomagnetic dipole. We assess the intrinsic accuracy and resolution of such stacks, which may be limited by errors in paleointensity, non-dipole field contributions, and the age scales assigned to each sediment core. Our approach employs two types of simulations. Numerical geodynamo models generate accurate predictions of time series of magnetic variations anywhere in the world. The predicted variations are then degraded using an appropriate statistical model to simulate expected age and paleointensity errors. A series of experiments identify the major contributors to error and loss of resolution in the resulting stacks. The statistical model simulates rock magnetic and measurement errors in paleointensity, and age errors due to finite sampling and approximations inherent in interpolation, incomplete or inaccurate tie point information, and sedimentation rate variations. Data sampling and interpolation to a designated age scale cause substantial decorrelation, and control the maximum level of agreement attainable between completely accurate records. The particular method of interpolation appears to have little effect on the coherence between accurate records, but denser tie point data improve the agreement. Age errors decorrelate geomagnetic signals, usually at shorter periods, although they can destroy coherence over a broad range of periods. The poor correlation between neighboring paleomagnetic records often observed in real data can be accounted for by age errors of moderate magnitude. 
In a global dataset of 20 records, modeled after the SINT800 compilation and spanning 300 kyr, our results show that dipole variations with periods longer than about 20 kyr can be recovered by the stacking process. Reasonable contributions to error in the paleointensity itself have a modest influence on the result, as do non-dipole field contributions whose effect is minor at periods longer than 10 kyr. Modest errors in the ages of tie points probably account for most of the degradation in geomagnetic signal. Stacked sedimentary paleomagnetic records can be improved by denser temporal sampling and careful selection of independent high-quality tie points. (C) 2004 Elsevier B.V. All rights reserved.

}, keywords = {accumulation rates, age errors, atlantic, chronostratigraphy, field, geomagnetic dipole moment, geomagnetic paleointensity, intensity, ka, paleomagnetism, sedimentary records, spectral analysis, stacking}, isbn = {0031-9201}, doi = {10.1016/j.pepi.2004.02.011}, url = {In many interferometers, two fringe signals can be generated in quadrature. The relative phase of the two fringe signals depends on whether the optical path length is increasing or decreasing. A system is developed in which two quadrature fringe signals are digitized and analyzed in real time with a digital signal processor to yield a linear, high-resolution, wide-dynamic-range displacement transducer. The resolution in a simple Michelson interferometer with inexpensive components is 5 {\texttimes} 10$^{-13}$ m Hz$^{-1/2}$ at 2 Hz. (C) 2004 Optical Society of America.

}, isbn = {0003-6935}, doi = {10.1364/ao.43.000771}, url = {The high-amplitude magnetic anomalies observed by the Mars Global Surveyor imply the presence of a large intensity of magnetization in the Martian crust. We investigate the mathematical question of determining the distribution of magnetization that has the smallest possible intensity, without any assumptions about the direction of magnetization. The greatest lower bound on intensity found in this way depends on an assumed layer thickness. An analytical expression is discovered for the optimal magnetization, and numerical methods are described for solving the equations that determine the distribution. Some relatively small scale numerical calculations illustrate the theory. These calculations enable us to conclude, for example, that if the magnetization of Mars is confined to a 50-km thick layer, it must be magnetized with an intensity of at least 4.76 A/m.

}, keywords = {extremal models, gravity, magnetization bounds, nonuniqueness}, isbn = {0148-0227}, doi = {10.1029/2001je001760}, url = {The magnetic field originating within the Earth can be divided into core and crustal components, which can be characterized by the geomagnetic power spectrum. While the core spectrum is determined quite well by satellite studies, models of the shorter wavelength crustal spectrum disagree considerably. We reexamine aeromagnetic data used by O{\textquoteright}Brien et al. [1999] to obtain a new, improved estimate of the crustal geomagnetic power spectrum. O{\textquoteright}Brien et al.{\textquoteright}s model failed to give a fully satisfactory connection between the longer-wavelength satellite studies and a reliable crustal model. We show that this was caused by an inadequate processing step that aimed to remove external variations from the data. Moreover, we attempt to bound the long-wavelength part of the spectrum using constraints of monotonicity in the correlation of the magnetization. However, this proves to be a weak constraint. Reversing the process, though, we are able to evaluate the correlation function using the reliable part of our geomagnetic spectrum. Thus we can obtain a sensible estimate for the long-wavelength part of the spectrum that is not well constrained by the data. Our new model shows better agreement with earlier satellite studies and can be considered reliable in the spherical harmonic degree range l = 30 to 1200.

}, keywords = {aeromagnetic data, crustal magnetization, geomagnetic power spectrum, geomagnetic stochastic process, geomagnetic-field, models, vector}, isbn = {0148-0227}, doi = {10.1029/2001jb001389}, url = {We describe the experimental procedure we use to calibrate a cryogenic pass-through magnetometer. The procedure is designed to characterize the magnetometer sensitivity as a function of position within the sensing region. Then we extend a theory developed in an earlier paper to cover inexact observations and apply it to the data set. The theory allows the calculation of a smooth, harmonic, internally consistent interpolating function for each of the nine components of the response tensor of the magnetometer. With these functions we can calculate the response to a dipole source in any orientation and position, and predict the magnetometer signal from any kind of specimen. The magnetometer in the paleomagnetic laboratory onboard the research vessel Joides Resolution is the subject of one such experiment and we present the results. The variation with position of sensitivity is displayed in a series of plane slices through the magnetometer. We discover from the calibration model that the X and Z coils are misaligned so that the magnetic centre of the coils is displaced from the geometric centre by approximately 0.7 cm. We synthesize the signal expected from the magnetometer when a variety of simple cores are measured. We find that, unless appropriate corrections are made, changes in magnetization direction can appear as variations in magnetic intensity, and conversely, fluctuations in the magnetization strength can produce apparent swings in declination and inclination. The magnitude of these effects is not small and is certainly worth taking into account in the interpretation of records from this kind of instrument. 
In a pilot study on data from a core measured with the shipboard magnetometer, we observe some large distortions, particularly in declination, that are attributable to uncorrected instrumental effects.

}, keywords = {paleomagnetic instruments}, isbn = {0956-540X}, doi = {10.1046/j.1365-246X.2002.01692.x}, url = {A major limitation in the analysis of physical quantities measured from a stratigraphic core is incomplete knowledge of the depth to age relationship for the core. Records derived from diverse locations are often compared or combined to construct records that represent a global signal. Time series analysis of individual or combined records is commonly employed to seek quasi-periodic components or characterize the timescales of relevant physical processes. Assumptions that are frequently made in the approximation of depth to age relationships can have a dramatic and harmful effect on the spectral content of records from stratigraphic cores. A common procedure for estimating ages in a set of samples from a stratigraphic core is to assign, based on complementary data, the ages at a number of depths (tie points) and then assume a uniform accumulation rate between the tie points. Imprecisely dated or misidentified tie points and naturally varying accumulation rates give rise to discrepancies between the inferred and the actual ages of a sample. We develop a statistical model for age uncertainties in stratigraphic cores that treats the true, but in practice unknown, ages of core samples as random variables. For inaccuracies in the ages of tie points, we draw the error from a zero-mean normal distribution. For a variable accumulation rate, we require the actual ages of a sequence of samples to be monotonically increasing and the age errors to have the form of a Brownian bridge. That is, the errors are zero at the tie points. The actual ages are modeled by integrating a piecewise constant, randomly varying accumulation rate. In each case, our analysis yields closed form expressions for the expected value and variance of resulting errors in age at any depth in the core. 
By Monte Carlo simulation with plausible parameters, we find that age errors across a paleomagnetic record due to misdated tie points are likely of the same order as the tie point discrepancies. Those due to accumulation rate variations can be as large as 30 kyr, but are probably less than 10 kyr. We provide a method by which error estimates like these can be made for similar stratigraphic dating problems and apply our statistical model to an idealized marine sedimentary paleomagnetic record. Both types of errors severely degrade the spectral content of the inferred record. We quantify these effects using realistic tie point ages, their uncertainties and depositional parameters. (C) 2002 Elsevier Science B.V. All rights reserved.

}, keywords = {age, calibration, chronostratigraphy, climate-change, electrical-conductivity, errors, geomagnetic-field intensity, greenland, grip ice core, paleomagnetism, records, relative paleointensity, secular variation, spectral analysis, western equatorial pacific}, isbn = {0012-821X}, doi = {10.1016/s0012-821x(02)00747-1}, url = {Near-bottom magnetic data contain information on paleomagnetic field fluctuations during chron C5 as observed in both the North and South Pacific. The North Pacific data include 12 survey lines collected with a spatial separation of up to 120 km, and the South Pacific data consist of a single long line collected on the west flank of the East Pacific Rise (EPR) at 19 degrees S. The North Pacific magnetic profiles reveal a pattern of linear, short-wavelength (2 to 5 km) anomalies (tiny wiggles) that are highly correlated over the shortest (3.8 km) to longest (120 km) separations in the survey. Magnetic inversions incorporating basement topography show that these anomalies are not caused by the small topographic relief. The character of the near-bottom magnetic profile from anomaly 5 on the west flank of the EPR, formed at a spreading rate more than twice that of the North Pacific, displays a remarkable similarity to the individual and stacked lines from the North Pacific survey area. Over distances corresponding to 1 m.y., 19 lows in the magnetic anomaly profile can be correlated between the North and South Pacific lines. Modeling the lows as due to short polarity events suggests that they may be caused by rapid swings of the magnetic field between normal and reversed polarities with little or no time in the reversed state. Owing to the implausibly high number of reversals required to account for these anomalies and the lack of any time in the reversed state, we conclude that the near-bottom signal is primarily a record of paleointensity fluctuations during chron C5.
Spectral analysis of the North Pacific near bottom lines shows that the signal is equivalent to a paleointensity curve with a temporal resolution of 40 to 60 kyr, while measurements of the smallest separations of correlatable dips in the field suggest a temporal resolution of 36 kyr.

}, keywords = {basin, brunhes, construction, geomagnetic polarity reversal, intensity, magnetostratigraphy, miocene, oceanic-crust, resolution, transition}, isbn = {0148-0227}, doi = {10.1029/2001jb000278}, url = {We present a statistical analysis of magnetic fields simulated by the Glatzmaier-Roberts dynamically consistent dynamo model. For four simulations with distinct boundary conditions, means, standard deviations, and probability functions permit an evaluation based on existing statistical paleosecular variation (PSV) models. Although none closely fits the statistical PSV models in all respects, some simulations display characteristics of the statistical PSV models in individual tests. We also find that nonzonal field statistics do not necessarily reflect heat flow conditions at the core-mantle boundary. Multitaper estimates of power and coherence spectra allow analysis of time series of single, or groups of, spherical harmonic coefficients representing the magnetic fields of the dynamo simulations outside the core. Sliding window analyses of both power and coherence spectra from two of the simulations show that a 100 kyr averaging time is necessary to realize stationary statistics of their nondipole fields and that a length of 350 kyr is not long enough to fully characterize their dipole fields. Spectral analysis provides new insight into the behavior and interaction of the dominant components of the simulated magnetic fields, the axial dipole and quadrupole. Although we find spectral similarities between several reversals, there is no evidence of signatures that can be conclusively associated with reversals or excursions. We test suggestions that during reversals there is increased coupling between groups of spherical harmonic components. Despite evidence of coupling between antisymmetric and symmetric spherical harmonics in one simulation, we conclude that it is rare and not directly linked to reversals. In contrast to the reversal model of R. T. 
Merrill and P. L. McFadden, we demonstrate that the geomagnetic power in the dipole part of the dynamo simulations is either relatively constant or fluctuates synchronously with that of the nondipole part and that coupling between antisymmetric and symmetric components occurs when the geomagnetic power is high.
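A generic multitaper power spectral density estimator of the kind used in this analysis can be sketched with SciPy's Slepian (DPSS) windows; the time-bandwidth product below is an arbitrary choice, and this is a textbook sketch, not the authors' analysis code:

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, nw=4.0, fs=1.0):
    """Average the periodograms of K = 2*NW - 1 Slepian-tapered copies of x."""
    n = x.size
    k = int(2 * nw) - 1
    tapers = dpss(n, nw, Kmax=k)                     # shape (k, n), unit energy
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    return np.fft.rfftfreq(n, 1.0 / fs), spectra.mean(axis=0) / fs

# sanity check: unit-variance white noise has a flat spectrum at level ~1
rng = np.random.default_rng(1)
f, p = multitaper_psd(rng.normal(size=4096))
```

Averaging K nearly uncorrelated tapered periodograms cuts the variance by roughly 1/K relative to a single periodogram, which is what makes sliding-window stationarity tests of the kind described here practical.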

}, keywords = {core, dynamo, geomagnetic-field, harmonic, mantle, models, paleosecular variation, pole, reference fields (regional, global), reversal, reversals, reversals (process,, secular variation, simulation, spectral, stationarity, statistics, theories, time variations-secular and long term, timescale, magnetostratigraphy), transition}, isbn = {1525-2027}, doi = {10.1029/2000GC000130}, url = {The Earth{\textquoteright}s magnetic field can be subdivided into core and crustal components and we seek to characterize the crustal part through its spatial power spectrum, R-l. We process vector Magsat data to isolate the crustal field and then invert power spectral densities of flight-local components along-track for R-l following O{\textquoteright}Brien et al. [1999]. Our model, designated LPPC, is accurate up to approximately spherical harmonic degree 45 (lambda = 900 km): this is the resolution limit of our data and suggests that global crustal anomaly maps constructed from vector Magsat data should not contain features with wavelengths less than 900 km. We find continental power spectra to be greater than oceanic ones and attribute this to the relative thicknesses of continental and oceanic crust.

}, keywords = {currents, magnetic-field, model, ocean}, isbn = {0148-0227}, doi = {10.1029/2000jb900437}, url = {Knowledge of past variations in the intensity of the Earth{\textquoteright}s magnetic field provides an important constraint on models of the geodynamo. A record of absolute palaeointensity for the past 50 kyr has been compiled from archaeomagnetic and volcanic materials, and relative palaeointensities over the past 800 kyr have been obtained from sedimentary sequences. But a long-term record of geomagnetic intensity should also be carried by the thermoremanence of the oceanic crust. Here we show that near-seafloor magnetic anomalies recorded over the southern East Pacific Rise are well correlated with independent estimates of geomagnetic intensity during the past 780 kyr. Moreover, the pattern of absolute palaeointensity of seafloor glass samples from the same area agrees with the well-documented dipole intensity pattern for the past 50 kyr. A comparison of palaeointensities derived from seafloor glass samples with global intensity variations thus allows us to estimate the ages of surficial lava flows in this region. The record of geomagnetic intensity preserved in the oceanic crust should provide a higher-time-resolution record of crustal accretion processes at mid-ocean ridges than has previously been obtainable.

}, keywords = {constraints, crustal emplacement, East Pacific Rise, field, laschamp excursion, lavas, layer 2a thickness, midocean ridge, paleointensity, processes, submarine basaltic glass}, isbn = {0028-0836}, doi = {10.1038/35048513}, url = {By studying a simple model of a pass-through magnetometer we show that there are circumstances in which misleading results might arise if the spatial sensitivity of the instrument is not properly corrected. For example, if the core sample is not correctly centred, or the magnetometer itself is misaligned, serious distortion can appear in the inferred inclination distribution. The possibility of such errors warrants a thorough study of laboratory instruments and, as a first step, we require a spatial calibration, that is, an estimate of the sensitivity of the various coils to samples placed anywhere in the sensing region. Only when this information is available for laboratory magnetometers will it be possible to calculate suitable corrections. The fact that laboratory magnetometers employ superconducting material makes inferring the response from the geometry of the coils impractical because the field from a specimen is modified inside the instrument by image currents flowing in the superconducting elements. To overcome this obstacle we treat a very general calibration problem. We show that the sensitivity of a particular coil as a function of position obeys Laplace{\textquoteright}s equation, and therefore the description in space of the sensitivity is mathematically exactly the same as modelling the geomagnetic field. A calibration experiment consists of several hundred measurements performed on a tiny dipole sample, systematically positioned throughout the sensing volume of the instrument. From such observations we aim to construct a harmonic interpolating function that represents the response in the measurement region. 
The natural geometry for the problem is that of a cylinder, so we work from the cylindrical harmonic expansion of an equivalent magnetic field. Cylindrical harmonic expansions take the form of an infinite set of unknown functions, not just a collection of coefficients as with spherical harmonics. To build a suitable interpolating function from them we appeal to the principles of spline interpolation by constructing a model that minimizes some measure of response complexity. We examine in detail two such measures. The first corresponds to magnetic field energy; the second is a more abstract norm that smoothes more heavily than the energy norm, and whose Gram matrix elements can be found without recourse to lengthy numerical procedures. The second norm promises to form the basis of a practical programme of calibration.

}, keywords = {harmonic splines, palaeomagnetic instrumentation}, isbn = {0956-540X}, doi = {10.1046/j.1365-246x.2000.00171.x}, url = {The geomagnetic power spectrum R-l is the squared magnetic field in each spherical harmonic degree averaged over a spherical surface. Satellite measurements have given reliable estimates of the spectrum for the part that originates in the core, but above l = 15, where the geomagnetic field arises primarily from crustal magnetization, there is considerable disagreement between various estimates derived from observation. Furthermore, several theoretical models for the spectrum disagree with each other and the data. We have examined observations from a different source, 5000-km-long Project Magnet aeromagnetic survey lines; we make new estimates of the spectrum which overlap with the wavelength interval accessible to the satellites. The usual way the spectrum is derived from observation is to construct a large spherical harmonic decomposition first, then square, weight, and add the Gauss coefficients in each degree, but this method cannot be applied to isolated flight lines. Instead, we apply a statistical technique based on an idea of McLeod and Coleman which relates the geomagnetic spectrum to the power and cross spectra of magnetic field components measured on the survey lines. Power spectra from the 17 aeromagnetic surveys, all of which were conducted over the oceans, are averaged together to improve geographic coverage and reduce variance, and the average spectra are then inverted for the geomagnetic spectrum R-l. Like most of the theoretical models, our spectrum exhibits a maximum, but at a wavelength of 100 km, about a factor of 2 smaller than the closest theoretical prediction. 
Our spectrum agrees quite well with the most recent estimates based on satellite observations in the range 20 less than or equal to l less than or equal to 50, but above l=50, our values increase slowly, while all the satellite data suggest a sharply rising curve. In this wavelength range we believe our measurements are more trustworthy. Further work is planned to confirm the accuracy of our spectrum when continental survey paths are included.

}, keywords = {anomalies, core, main geomagnetic-field, models, paleosecular variation, pole, vector}, isbn = {0148-0227}, doi = {10.1029/1999jb900302}, url = {We have conducted a detailed exploratory analysis of an 11 million year long, almost continuous record of relative geomagnetic paleointensity from a sediment core acquired on Deep Sea Drilling Project Leg 73, at Site 522 in the South Atlantic. We assess the quality of the paleointensity record using spectral methods and conclude that the relative intensity record is minimally influenced by climate variations. Isothermal remanence is shown to be the most effective normalizer for these data, although both susceptibility and anhysteretic remanence are also adequate. Statistical analysis shows that the paleointensity variations follow a gamma distribution, and are compatible with predictions from modified paleosecular variation models and global absolute paleointensity data. When subdivided by polarity interval, the variability in paleointensity is proportional to the average, and further, the average is weakly correlated with interval length. Spectral estimates for times from 28.77 until 22.74 Ma, when the reversal rate is about 4 Myr(-1), are compatible with a Poisson model in which the spectrum of intensity variations is dominated by the reversal process in the frequency range 1-50 Myr(-1). In contrast, between 34.7 and 29.4 Ma, when the reversal rate is about 1.6 Myr(-1), the spectra indicate a different secular variation regime. The magnetic field is stronger and more variable, and a strong peak in the spectrum occurs at about 8 Myr(-1). This peak may be a reflection of the same signal as recorded by the small variations known as tiny wiggles seen in marine magnetic anomaly profiles.

}, keywords = {drilling project site-522, field, models, paleointensity, paleosecular variation, pole, relative}, doi = {10.1029/98jb01519}, url = {The theory for recovering crustal magnetization from along-strike and, especially, axial magnetic profiles is examined. We develop a conventional Fourier technique that takes into account the special magnetic cross-section at a ridge axis including the thinning of layer 2A. Such an approach might be completely inappropriate because it is assumed that the observation path is perpendicular to all the magnetic variability, whereas in fact the path lies in the direction of least magnetic variation. To study this question and to overcome possible deficiencies, we consider a statistical modification of the theory in which the magnetization is treated as a planar stationary process in a thin layer with known power spectrum. The relationship between two signals is studied: the magnetic anomaly on a straight path at the sea surface, and the magnetization in the crust immediately under the observation track. The coherence between the two signals can be calculated, as well as the transfer function between them. We test the ideas with data from a long axial magnetic profile on the southern East Pacific Rise compiled by Gee \& Kent. A model power spectrum is estimated from these data: the spectrum is red and, as expected, highly elongated perpendicular to the strike of the ridge. We find strong coherence (gamma(2) \> 0.8) between the magnetic anomaly and the subtrack magnetization for wavelengths longer than 50 km, but coherence falls sharply for smaller scales. The naive, 1-D filter theory incorrectly predicts a close relationship down to much finer scales (3 km). Calculations for hypothetical surveys off-axis predict that there is always a band of high coherence, but only for an on-axis survey does the good correlation extend to infinite wavelength. 
We conclude that, in a wide variety of circumstances, the magnetic anomaly and the subtrack magnetization are highly correlated in a particular wavelength interval that depends on the shape of the power spectrum.

}, keywords = {inverse problem, magnetic anomalies, mid-ocean ridges, spectral analysis}, isbn = {0956-540X}, doi = {10.1111/j.1365-246X.1998.tb07143.x}, url = {Algorithms used in geomagnetic main-field modelling have for the most part treated the noise in the field measurements as if it were white. A major component of the noise consists of the field due to magnetization in the crust and it has been realized for some time that such signals are highly correlated at satellite altitude. Hence approximation by white noise, while of undoubted utility, is of unknown validity. Langel, Estes \& Sabaka (1989) were the first to evaluate the influence of correlations in the crustal magnetic field on main-field models. In this paper we study two plausible statistical models for the crustal magnetization described by Jackson (1994), in which the magnetization is a realization of a stationary, isotropic, random process. At a typical satellite altitude the associated fields exhibit significant correlation over ranges as great as 15 degrees or more, which introduces off-diagonal elements into the covariance matrix, elements that have usually been neglected in modelling procedures. Dealing with a full covariance matrix for a large data set would present a formidable computational challenge, but fortunately most of the entries in the covariance matrix are so small that they can be replaced by zeros. The resultant matrix comprises only about 3 per cent non-zero entries and thus we can take advantage of efficient sparse matrix techniques to solve the numerical system. We construct several main-field models based on vertical-component data from a selected 5 degrees by 5 degrees data set derived from the Magsat mission. Models with and without off-diagonal terms are compared. For one of the two Jackson crustal models, k(3), we find significant changes in the main-field coefficients, with maximum discrepancies near degree 11 of about 27 per cent. 
The second crustal spectrum gives rise to much smaller effects for the data set used here, because the correlation lengths are typically shorter than the data spacing. k(4) also significantly underpredicts the observed magnetic spectrum around degree 15. We conclude that there is no difficulty in computing main-field models that include off-diagonal terms in the covariance matrix when sparse matrix techniques are employed; we find that there may be important effects in the computed models, particularly if we wish to make full use of dense data sets. Until a definitive crustal field spectrum has been determined, the precise size of the effect remains uncertain. Obtaining such a statistical model should be a high priority in preparation for the analysis of future low-noise satellite data.

}, keywords = {core-mantle boundary, crustal magnetization, geomagnetism, magnetic-field, magsat data}, isbn = {0956-540X}, doi = {10.1111/j.1365-246X.1997.tb01866.x}, url = {We investigate the power spectra and cross spectra derived from the three components of the vector magnetic field measured on a straight horizontal path above a statistically stationary source. All of these spectra, which can be estimated from the recorded time series, are related to a single two-dimensional power spectral density via integrals that run in the across-track direction in the wavenumber domain. Thus the measured spectra must obey a number of strong constraints: for example, the sum of the two power spectral densities of the two horizontal field components equals the power spectral density of the vertical component at every wavenumber and the phase spectrum between the vertical and along-track components is always pi/2. These constraints provide powerful checks on the quality of the measured data; if they are violated, measurement or environmental noise should be suspected. The noise due to errors of orientation has a clear characteristic; both the power and phase spectra of the components differ from those of crustal signals, which makes orientation noise easy to detect and to quantify. The spectra of the crustal signals can be inverted to obtain information about the cross-track structure of the field. We illustrate these ideas using a high-altitude Project Magnet profile flown in the southeastern Pacific Ocean.

}, keywords = {models}, isbn = {0148-0227}, doi = {10.1029/97jb02130}, url = {During a recent marine magnetic survey of the Juan de Fuca Rise, two magnetometers were towed near the seafloor, one about 300 m above the other. To understand how to interpret the records, we investigate a simple statistical model: two magnetometers moving on parallel paths above a statistically stationary source, with known spectrum. Magnetometers on paths normal to perfectly lineated magnetic anomalies will measure signals that have unit coherence at all wavelengths. Departure of the system from this ideal state can be diagnosed by a lower coherence, and something about the across-track structure can be learned from the shape of the coherence spectrum. We calculate the power and cross spectra of the profile signals in terms of the two-dimensional power spectrum of the field just above the source region; hence we obtain the coherence and phase spectra. For the special case of a white source spectrum we find surprisingly high coherences. A set of inequalities between the spectral estimates is derived and can be used to check the consistency of the measured signals with the model assumptions. The theory is applied to a magnetic traverse of the Juan de Fuca Rise when two near-bottom magnetometers were deployed. The key results are these: in the wavelength range above about 1 km the observed coherency is substantially higher than that from the disordered field model, consistent with the highly lineated structures observed at the surface over all ocean ridge systems. On scales between 500 m and 1 km the coherence falls to levels indistinguishable from those given by an isotropic flat spectrum, implying that on these scales there is little or no across-track lineation. This finding means that the resolution of paleomagnetic field behavior based on seafloor data in this area is no better than 36,000 years.
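The coherence diagnostic is straightforward to reproduce on synthetic parallel tracks. A sketch using scipy.signal.coherence, with a shared long-wavelength "lineated" component plus independent short-wavelength noise on each track (all amplitudes and scales are invented):

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(2)
n = 4096

# long-wavelength anomaly field shared by both magnetometers
common = np.convolve(rng.normal(size=n), np.ones(20) / 20, mode="same")
# each track also sees its own uncorrelated small-scale structure
track1 = common + 0.1 * rng.normal(size=n)
track2 = common + 0.1 * rng.normal(size=n)

f, Cxy = coherence(track1, track2, fs=1.0, nperseg=512)

# coherence is near unity where the shared signal dominates, low elsewhere
assert Cxy[1:6].mean() > 0.9 and Cxy[-50:].mean() < 0.5
```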

}, keywords = {magnetic-anomalies}, isbn = {0148-0227}, doi = {10.1029/96jb03803}, url = {The frozen-flux hypothesis for the Earth{\textquoteright}s liquid core assumes that convective terms dominate diffusive terms in the induction equation governing the behaviour of the magnetic field at the surface of the core. While highly plausible on the basis of estimates of physical parameters, the hypothesis has been questioned in recent work by Bloxham, Gubbins \& Jackson (1989) who find it to be inconsistent with their field models for most of the century. To study this question we improve the method of Constable, Parker \& Stark (1993), which tests the consistency of magnetic observations with the hypothesis by constructing simple, flux-conserving core-field models fitting the data at pairs of epochs. We introduce a new approach that fixes the patch configurations at each of the two epochs before inversion, so that each configuration is consistent with its respective data set but possesses the same patch topology. We expand upon the inversion algorithm, using quadratic programming to maintain the proper flux sign within patches; the modelling calculations are also extended to include data types that depend non-linearly on the model. Every test of a hypothesis depends on the characterization of the observational uncertainties; we undertake a thorough review of this question. For main-field models, the primary source of uncertainty comes from the crustal field. We base our analysis on one of Jackson{\textquoteright}s (1994) statistical models of the crustal magnetization, adjusted to bring it into better conformity with our data set. The noise model permits us to take into account the correlations between the measurements and requires that a different weighting be given to horizontal and vertical components. It also indicates that the observations should be fit more closely than has been the practice heretofore. 
We apply the revised method to Magsat data from 1980 and survey and observatory data from 1915.5, two data sets believed to be particularly difficult to reconcile with the frozen-flux hypothesis. We compute a pair of simple, flux-conserving models that fit the averaged data from each epoch. We therefore conclude that present knowledge of the geomagnetic fields of 1980 and 1915.5 is consistent with the frozen-flux hypothesis.

}, keywords = {algorithm, core-mantle boundary, crustal magnetization, geomagnetic variation, geomagnetic-field, inversion, magnetic-field, magsat data}, isbn = {0956-540X}, doi = {10.1111/j.1365-246X.1997.tb01566.x}, url = {Fourier methods for potential fields have always been developed with the simplification that the calculation surface is a level plane. The Fourier approach can be extended to deal with an uneven observation surface. I consider the case of terrain correction for gravity surveys, in which the attraction of a variable-thickness layer is calculated at points on its upper surface. The main idea is to use a power series in topographic height that is then converted into a series of convolutions. To avoid convergence problems, a cylindrical zone around the observer must be removed from the Fourier treatment and its contribution computed directly. The resultant algorithm is very fast: in an example based on a recent survey, the new method is shown to be more than 300 times faster than a calculation based on summing contributions from a column of material under each topographic grid point.
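The power-series-into-convolutions idea can be illustrated with the well-known Parker (1973) expansion for the attraction of an uneven layer, evaluated on a level observation plane; this is the textbook series, not this paper's refined algorithm with its removed cylindrical zone, and the grid, density, and topography below are invented:

```python
import math
import numpy as np

G = 6.674e-11                      # gravitational constant (SI)
rho = 2670.0                       # assumed crustal density, kg/m^3
n, dx = 256, 100.0                 # 1-D profile: 256 samples at 100 m spacing
x = np.arange(n) * dx
h = 50.0 * np.exp(-((x - x.mean()) / 2000.0) ** 2)   # hypothetical hill (m)

k = 2.0 * np.pi * np.fft.rfftfreq(n, dx)             # radian wavenumber

# each power of the topography costs one FFT, so the series is fast
g_hat = np.zeros(k.size, dtype=complex)
for m in range(1, 5):
    g_hat += k ** (m - 1) / math.factorial(m) * np.fft.rfft(h ** m)
g = 2.0 * np.pi * G * rho * np.fft.irfft(g_hat, n)   # attraction, m/s^2
```

The m = 1 term reproduces the Bouguer slab 2*pi*G*rho*h; the higher convolutions supply the terrain-dependent corrections, and it is precisely these that converge poorly when the topography approaches the observer, which motivates removing a cylindrical zone around the observation point.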

}, isbn = {0016-8033}, doi = {10.1190/1.1443965}, url = {The properties of the log of the admittance in the complex frequency plane lead to an integral representation for one-dimensional magnetotelluric (MT) apparent resistivity and impedance phase similar to that found previously for complex admittance. The inverse problem of finding a one-dimensional model for MT data can then be solved using the same techniques as for complex admittance, with similar results. For instance, the one-dimensional conductivity model that minimizes the chi(2) misfit statistic for noisy apparent resistivity and phase is a series of delta functions. One of the most important applications of the delta function solution to the inverse problem for complex admittance has been answering the question of whether or not a given set of measurements is consistent with the modeling assumption of one-dimensionality. The new solution allows this test to be performed directly on standard MT data. Recently, it has been shown that induction data must pass the same one-dimensional consistency test if they correspond to the polarization in which the electric field is perpendicular to the strike of two-dimensional structure. This greatly magnifies the utility of the consistency test. The new solution also allows one to compute the upper and lower bounds permitted on phase or apparent resistivity at any frequency given a collection of MT data. Applications include testing the mutual consistency of apparent resistivity and phase data and placing bounds on missing phase or resistivity data. Examples presented demonstrate detection and correction of equipment and processing problems and verification of compatibility with two-dimensional B-polarization for MT data after impedance tensor decomposition and for continuous electromagnetic profiling data.

}, keywords = {algorithm, electromagnetic induction, existence}, isbn = {0031-9201}, doi = {10.1016/s0031-9201(96)03191-3}, url = {The main magnetic field of the Earth is a complex phenomenon. To understand its origins in the fluid of the Earth{\textquoteright}s core, and how it changes in time requires a variety of mathematical and physical tools. This book presents the foundations of geomagnetism, in detail and developed from first principles. The book is based on George Backus{\textquoteright} courses for graduate students at the University of California, San Diego. The material is mathematically rigorous, but is logically developed and has consistent notation, making it accessible to a broad range of readers. The book starts with an overview of the phenomena of interest in geomagnetism, and then goes on to deal with the phenomena in detail, building the necessary techniques in a thorough and consistent manner. Students and researchers will find this book to be an invaluable resource in the appreciation of the mathematical and physical foundations of geomagnetism.

}, keywords = {Geomagnetism.}, isbn = {0521410061 (hardback)}, url = {http://www.loc.gov/catdir/toc/cam023/95044208.html http://www.loc.gov/catdir/description/cam027/95044208.html http://www.worldcat.org/oclc/33333582}, author = {Backus, George and Parker, Robert L. and Constable, Catherine} } @article {27984, title = {A new method for fringe-signal processing in absolute gravity meters}, journal = {Manuscripta Geodaetica}, volume = {20}, number = {3}, year = {1995}, note = {n/a}, month = {Mar}, pages = {173-181}, type = {Article}, abstract = {In all modern absolute gravity meters, an interferometer illuminated with a stabilized laser tracks the motion of a freely falling retroreflector. The value of gravity is measured by timing the passage of interference fringes. Typically, the sinusoidal fringe signal is converted to a series of pulses, a subset of which are input to a time digitizer. In our new system, the fringe signal is digitized with a fast analog-to-digital converter and fit to an increasing-frequency sine wave. In addition to being smaller and less expensive, the system should eliminate some potential systematic errors that may result from imperfect zero-crossing discrimination and pulse pre-scaling.
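The least-squares chirp fit that replaces zero-crossing timing can be sketched with scipy.optimize.curve_fit; the signal parameters below are invented, and in practice the initial guess must already place the model within a fraction of a fringe for the fit to converge:

```python
import numpy as np
from scipy.optimize import curve_fit

def fringe(t, a, v0, phi, amp):
    """Constant-acceleration chirp: instantaneous fringe frequency v0 + a*t."""
    return amp * np.sin(phi + 2.0 * np.pi * (v0 * t + 0.5 * a * t ** 2))

rng = np.random.default_rng(4)
t = np.linspace(0.0, 0.1, 5000)                 # 0.1 s drop sampled at 50 kHz
data = fringe(t, 1.5e4, 1.0e3, 0.3, 1.0) + 0.05 * rng.normal(size=t.size)

# the sweep-rate parameter a is the one proportional to gravity
p, _ = curve_fit(fringe, t, data, p0=[1.5e4, 1.0e3, 0.0, 1.0])
```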

}, isbn = {0340-8825}, url = {A description of a new Fourier technique is given for calculating the gravitational attraction of a layer with an irregular top surface for application in the terrain correction of marine gravity surveys in shallow water. An earlier Fourier-based algorithm fails or becomes inaccurate when the peaks of the topography approach the sea surface too closely. The new approach divides the attraction into two parts: a local contribution from the material within a cylinder around each observation point and the attraction from the matter outside the cylinder. A special quadrature rule, optimized for the actual data distribution, evaluates the local contribution. The calculation of the exterior component represents the bulk of the numerical effort. Fortunately, the exterior integral possesses an expansion as a series of convolutions, and by evaluating these in the Fourier domain, the procedure can take advantage of the efficiency of the fast Fourier transform. Chebychev economization of the convolution series provides further significant improvements in computational speed. Two examples, one artificial and the other based on a survey around Guadalupe Island, illustrate the application of the new technique. Estimates of the errors from computation sources and from the inadequacies of the topographic model confirm the general accuracy of the approach, except in regions of very steep terrain.

}, isbn = {0016-8033}, doi = {10.1190/1.1443829}, url = {The Fortran subroutine BVLS (bounded variable least-squares) solves linear least-squares problems with upper and lower bounds on the variables, using an active set strategy. The unconstrained least-squares problems for each candidate set of free variables are solved using the QR decomposition. BVLS has a {\textquoteright}{\textquoteright}warm-start{\textquoteright}{\textquoteright} feature permitting some of the variables to be initialized at their upper or lower bounds, which speeds the solution of a sequence of related problems. Such sequences of problems arise, for example, when BVLS is used to find bounds on linear functionals of a model constrained to satisfy, in an approximate l(p)-norm sense, a set of linear equality constraints in addition to upper and lower bounds. We show how to use BVLS to solve that problem when p = 1, 2, or infinity, and to solve minimum l(1) and l(infinity) fitting problems. FORTRAN 77 code implementing BVLS is available from the statlib gopher at Carnegie Mellon University.
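SciPy exposes a solver of the same name, so the routine's core behaviour, pinning variables at active bounds, is easy to demonstrate (the problem data here are random stand-ins, not from the paper):

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(3)
A = rng.normal(size=(20, 5))
x_true = np.array([0.0, 0.3, 1.0, -0.2, 0.8])     # one coefficient out of bounds
b = A @ x_true + 0.01 * rng.normal(size=20)

# active-set bounded-variable least squares on [0, 1]
res = lsq_linear(A, b, bounds=(0.0, 1.0), method="bvls")

assert np.all(res.x >= -1e-10) and np.all(res.x <= 1.0 + 1e-10)
```

The negative coefficient in x_true is clipped to its lower bound, while the in-range coefficients are recovered by the unconstrained QR solve on the free set.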

}, keywords = {constrained least-squares, l(1) and l(infinity) regression, optimization, velocity, x(p)}, isbn = {0943-4062}, url = {There are many techniques for modelling the geomagnetic field, any one of which may be suitable for a particular application depending on its associated modelling goals. Each method combines a choice of functions and an approach to fitting data so that, in general, it is best suited to a particular type of field modelling, e.g. core versus crustal, regional versus global, downward continuation versus interpolation. Those few approaches such as spherical cap harmonic analysis (Haines 1985a) that possess any true flexibility in this respect suffer from mathematical and computational complexity. In addition, regularization is still a somewhat overlooked issue. Regularization is essential for downward continuing geomagnetic data because shorter wavelength field components and their errors blow up in this process. Approaches such as harmonic spline modelling (Shure, Parker and Backus 1982) which include regularization do so while significantly complicating the task of inversion. We present a new regularized modelling scheme which employs magnetic monopoles as representing functions. We apply regularizing norms of the type introduced by Shure et al. (1982). Owing to the mathematical simplicity of the monopoles, the expressions for the norms are themselves very simple and flexible, and the monopole models very easy to compute. Moreover, the conceptual simplicity of this representation allows for easy modification to accommodate most geomagnetic modelling problems. 
We apply the technique to problems on three different length scales, each application having distinctly different modelling goals: globally we model the radial core field at the core-mantle boundary (CMB) from satellite data; on a large regional scale we model the radial crustal field at the earth{\textquoteright}s surface from satellite data; on a small regional scale we model the radial crustal field at the earth{\textquoteright}s surface from surface data. For each of these varied applications we are able to generate monopole models which produce smooth, plausible fields that fit the data.

}, keywords = {anomalies, cap harmonic-analysis, core field, crustal field, geomagnetic field modeling, magnetic-field, monopoles, regularization, splines}, isbn = {0956-540X}, doi = {10.1111/j.1365-246X.1994.tb03985.x}, url = {Techniques for modelling the geomagnetic field at the surface of Earth{\textquoteright}s core often penalize contributions at high spherical harmonic degrees to reduce the effect of mapping crustal fields into the resulting field model at the core-mantle boundary (CMB). Ambiguity in separating the observed field into crustal and core contributions makes it difficult to assign error bounds to core field models, and this makes it hard to test hypotheses that involve pointwise values of the core field. The frozen-flux hypothesis, namely that convective terms dominate diffusive terms in the magnetic-induction equation, requires that the magnetic flux through every patch on the core surrounded by a zero contour of the radial magnetic field remains constant, although the shapes, areas and locations (but not the topology) of these patches may change with time. Field models exactly satisfying the conditions necessary for the hypothesis have not yet been constructed for the early part of this century. We show that such models must exist, so testing the frozen-flux hypothesis becomes the question of whether the models satisfying it are geophysically unsatisfactory on other grounds, for example because they are implausibly rough or complicated. We introduce an algorithm to construct plausible field models satisfying the hypothesis, and present such models for epochs 1945.5 and 1980. Our algorithm is based on a new parametrization of the field in terms of its radial component B(r) at the CMB. The model consists of values of B(r) at a finite set of points on the CMB, together with a rule for interpolating the values to other points. 
The interpolation rule takes the specified points to be the vertices of a spherical triangle tessellation of the CMB, with B(r) varying linearly in the gnomonic projections of the spherical triangles onto planar triangles in the planes tangent to the centroids of the spherical triangles. This parametrization of B(r) provides a direct means of constraining the integral invariants required by the frozen-flux hypothesis. Using this parametrization, we have constructed field models satisfying the frozen-flux hypothesis for epochs 1945.5 and 1980, while fitting observatory and survey data for 1945.5 and Magsat data for 1980. We use the better constrained 1980 CMB field model as a reference for 1945.5: we minimize the departure of the 1945.5 CMB field model from a regularized 1980 CMB field model, while constraining the 1945.5 model to have the same null-flux curves and flux through those curves as the 1980 model. The locations, areas and shapes of the curves are allowed to change. The resulting 1945.5 CMB field model is nearly as smooth as that for 1980, fits the data adequately, and satisfies the conditions necessary for the frozen-flux hypothesis.

}, keywords = {core mantle boundary, earths magnetic-field, frozen-flux model, geomagnetic field, inference, magsat data, motions, secular variation, top}, isbn = {0956-540X}, doi = {10.1111/j.1365-246X.1993.tb00897.x}, url = {Recent studies of samples from seamounts indicate that the distribution of magnetic intensity is approximately lognormal, which implies that the commonly adopted models of interior magnetization based upon a constant vector with an isotropic perturbation are inappropriate. We develop a unidirectional model in which the direction of magnetization is fixed and the intensity is of one sign, with no upper limit on magnitude, which, if the seamount is built during a period of single magnetic polarity, is likely to be a better approximation. We show that models of this class fitting the data best in the two-norm sense conform to the ideal-body pattern comprising unidirectional, point dipoles in the surface of the seamount. Practical methods are developed for discovering the best data misfit associated with paleopole position. The methods are first tested on simple artificial magnetic anomalies and are found to be capable of recovering the true pole position with high accuracy when such a solution is possible; also when a mixed polarity artificial model is analyzed, it is found that there are no unidirectional solutions, just as would be hoped. The method is next applied to three seamount surveys. In the first it is found that every direction of magnetization is in accord with the data, so that apparently nothing useful can be learned from the survey without a stronger assumption; this result is in contrast with the results of an earlier solution based upon a statistical model, which yielded a high accuracy in the position of the paleopole. The second investigation provides a reasonably compact location of the paleopole of the seamount. 
The third magnetic anomaly is complex, and earlier studies concluded this was necessarily the product of mixed polarity magnetization. We find that in fact unidirectional magnetizations can satisfy the observations.

}, keywords = {gravity, inverse-theory, magnetization}, isbn = {0148-0227}, doi = {10.1029/91jb01497}, url = {We have measured the Newtonian gravitational constant using the ocean as an attracting mass and a research submersible as a platform for gravity measurements. Gravitational acceleration was measured along four continuous profiles to depths of 5000 m with a resolution of 0.1 mGal. These data, combined with satellite altimetry, sea surface and seafloor gravity measurements, and seafloor bathymetry, yield an estimate of G = (6.677 {\textpm} 0.013) {\texttimes} 10^{-11} m^{3} s^{-2} kg^{-1}; the fractional uncertainty is 2 parts in 1000. Within this accuracy, the submarine value for G is consistent with laboratory determinations.

}, keywords = {inverse-square law, tower gravity experiment}, isbn = {0031-9007}, doi = {10.1103/PhysRevLett.67.3051}, url = {The magnetization of long cores of sedimentary material is often measured in a pass-through magnetometer, whose output is the convolution of the desired function with the broad impulse response of the system. Because of inevitable measurement noise and the inherent poor conditioning of the inverse problem, any attempt to estimate the true magnetization function from the observations must avoid unnecessary amplification of small-scale features which would otherwise dominate the model with deceptively large undulations. We propose the construction of the smoothest possible magnetization model satisfying the measured data to within the observational error. By means of a cubic spline basis in the representations of both the unknown magnetization and the empirically measured response, we facilitate the imposition of maximum smoothness on the unknown magnetization. For our purposes, the smoothest model is the one with the smallest 2-norm of the second derivative, the same criterion used in the construction of cubic spline interpolators. The approach is tested on a marine core that was subsequently sectioned and measured in centimetre-sized individual specimens, with highly satisfactory results. An empirical estimate of the resolution of the method indicates a three-fold improvement in the processed record over the original signal. We illuminate the behaviour of the numerical scheme by showing the relation between our smoothness-maximizing procedure and a more conventional filtering approach. Our solution can indeed be approximated by convolution with a special set of weights, although the approximation may be poor near the ends of the core. 
In an idealized system we study the question of convergence of the deconvolution process, asking whether the model magnetization approaches the true one when the experimental error and other system parameters are held constant, while the spacing between observations is allowed to become arbitrarily small. We find our procedure does in fact converge (under certain conditions) but only at a logarithmic rate. This suggests that further significant improvement in resolution cannot be achieved by increased measurement density or enhanced observational accuracy.
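The trade-off between misfit and the 2-norm of the second derivative can be sketched with a discrete stand-in. The response function, core signal, and regularization weight below are invented, and a finite-difference operator replaces the paper's cubic-spline bases:

```python
import numpy as np

# Smoothness-regularized deconvolution: minimize ||G m - d||^2 + lam ||D2 m||^2.
n = 200
z = np.linspace(0.0, 2.0, n)                          # position along core (m)
impulse = np.exp(-0.5 * ((z - 1.0) / 0.05)**2)        # broad sensor response
G = np.column_stack([np.roll(impulse, j - n // 2) for j in range(n)])
G /= G.sum(axis=0)                                    # unit-area columns

rng = np.random.default_rng(1)
m_true = np.sin(8.0 * z) + 0.3 * np.sign(np.sin(23.0 * z))
d = G @ m_true + 0.01 * rng.standard_normal(n)        # noisy pass-through record

D2 = np.diff(np.eye(n), n=2, axis=0)                  # second-difference operator
lam = 1e-4
m = np.linalg.solve(G.T @ G + lam * (D2.T @ D2), G.T @ d)
```

Raising `lam` yields a smoother magnetization at the price of misfit; in the paper the weight is chosen so the misfit matches the observational error.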

}, keywords = {magnetization, paleomagnetic cores, spline deconvolution}, isbn = {0956-540X}, doi = {10.1111/j.1365-246X.1991.tb05693.x}, url = {A gravity inversion algorithm for modeling discrete bodies with nonuniform density distributions is presented. The algorithm selects the maximally uniform model from the family of models which fit the data, ensuring a conservative and unprejudiced estimate of the density variation within the body. The only inputs required by the inversion are the gravity anomaly field and the body shape. Tests using gravity anomalies generated from synthetic bodies confirm that seminorm minimizing inversions successfully represent mass distribution trends but do not reconstruct sharp discontinuities. We apply the algorithm to model the density structure of seamounts. Inversion of the sea surface gravity field observed over Jasper Seamount suggests the edifice has a low average density of 2.38 g/cm^{3} and contains a dense body within its western flank. These results are consistent with seismic, magnetic, and petrologic studies of Jasper Seamount.

}, keywords = {ideal bodies, Islands, origin, pacific-ocean, positivity constraints}, isbn = {0016-8033}, doi = {10.1190/1.1442959}, url = {An Airy-type geophysical experiment was conducted in a 2-km-deep hole in the Greenland ice cap at depths between 213 m and 1673 m to test for possible violations of Newton{\textquoteright}s inverse square law. The experiment was done at Dye 3, the location of a Distant Early Warning Line radar dome and the site of the deepest of the Greenland Ice-Sheet Program (GISP) drill holes. Gravity measurements were made at eight depths in 183-m intervals with a LaCoste\&Romberg borehole gravity meter. Prior to the experiment the borehole gravity meter was calibrated with an absolute gravity meter, and the wireline depth-finding system used in the borehole logging was calibrated in a vertical mine-shaft against a laser geodimeter. The density of the ice in the region was calculated from measurements taken from ice cores obtained from earlier drilling observations. Ice penetrating radar was employed in order to correct the gravity data for the topography of the ice-rock interface. Surface gravity observations were made to assess the extent to which density variations in the sub-ice rock could affect the vertical gravity gradient. The locations of the gravity observation points were determined with a combination of GPS recording, first-order leveling, and EDM surveying. An anomalous variation in gravity totaling 3.87 mGal (3.87{\texttimes}10^{-5} m/s^{2}) in a depth interval of 1460 m was observed. This may be attributed either to a breakdown of Newtonian gravity or to unexpected density variations in the rock below the ice.

Seafloor and sea surface gravity measurements are used to model the internal density structure of Axial Volcano. Seafloor measurements made at 53 sites within and adjacent to the Axial Volcano summit caldera provide constraints on the fine-scale density structure. Shipboard gravity measurements made along 540 km of track line above Axial Volcano and adjacent portions of the Juan de Fuca ridge provide constraints on the density over a broader region and on the isostatic compensation. The seafloor gravity anomalies give an average density of 2.7 g cm^{-3} for the uppermost portion of Axial Volcano. The sea surface gravity anomalies yield a local compensation parameter of 23\%, significantly less than expected for a volcanic edifice built on zero-age lithosphere. Three-dimensional ideal body models of the seafloor gravity measurements suggest that low-density material, with a density contrast of at least 0.15 g cm^{-3}, may be located underneath the summit caldera. The data are consistent with low-density material at shallow depths near the southern portion of the caldera, dipping downward to the north. The correlation of shallow low-density material and surface expressions of recent volcanic activity (fresh lavas and high-temperature hydrothermal venting) suggests a zone of highly porous crust. Seminorm minimization modeling of the surface gravity measurements also suggests a low-density region under the central portion of Axial Volcano. The presence of low-density material beneath Axial caldera suggests a partially molten magma chamber at depth.

Signals reported as evidence for a non-Newtonian {\textquoteright}fifth{\textquoteright} force at a North Carolina television tower and elsewhere can be explained in a conventional way by postulating small density variations underground. The assumptions employed in earlier analyses which pointed to a failure of the inverse square law are examined and found to be difficult to justify.

}, isbn = {0028-0836}, doi = {10.1038/342029a0}, url = {An Airy-type geophysical experiment was conducted in a 2-km-deep hole in the Greenland ice cap at depths between 213 and 1673 m to test for possible violations of Newton{\textquoteright}s inverse-square law. An anomalous gravity gradient was observed. We cannot unambiguously attribute it to a breakdown of Newtonian gravity because we have shown that it might be due to unexpected geological features in the rock below the ice.

}, isbn = {0031-9007}, doi = {10.1103/PhysRevLett.62.985}, url = {Recent experimental evidence suggests that Newton{\textquoteright}s law of gravity may not be precise. There are modern theories of quantum gravity that, in their attempts to unify gravity with other forces of nature, predict non-Newtonian gravitational forces that could have ranges on the order of 10^{2}-10^{5} m. If they exist, these forces would be apparent as violations of Newton{\textquoteright}s inverse square law. A geophysical experiment was carried out to search for possible finite-range, non-Newtonian gravity over depths of 213-1673 m in the glacial ice of the Greenland ice cap. The principal reason for this choice of experimental site is that a hole drilled through the ice cap already existed, and the uniformity of the ice eliminates one of the major sources of uncertainty arising in earlier experiments of this kind, namely the heterogeneity of the rocks through which a mine shaft or drill hole passes. Our observations were made in the summer of 1987 at Dye 3, Greenland, in the 2033-m-deep borehole, which reached the basement rock.

A two- or three-dimensional treatment of magnetic anomaly data generally requires that the data be interpolated onto a regular grid, especially when the analysis involves transforming the data into the Fourier domain. We present an algorithm for interpolation and downward continuation of magnetic anomaly data that works within a statistical framework. We assume that the magnetic anomaly is a realization of a random stationary field whose power spectral density (PSD) we can estimate; by using the PSD the algorithm produces an array incorporating as much of the information contained in the data as possible while avoiding the introduction of unnecessary complexity. The algorithm has the added advantage of estimating the uncertainty of every interpolated value. Downward continuation is a natural extension of the statistical algorithm. We apply our method to the interpolation of magnetic anomalies from the region around the 95.5{\textdegree}W Galapagos propagating rift onto a regular grid and also to the downward continuation of these data to a depth of 2200 m. We also note that the observed PSD of the Galapagos magnetic anomalies has a minimum at low wave numbers and discuss how this implies that intermediate wavelength (65 km \< λ \< 1500 km) magnetic anomalies are weaker than suggested by one-dimensional spectral analysis of single profiles.
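The statistical framework amounts to a Wiener/kriging-type conditional estimate; a one-dimensional sketch of the idea, where the Gaussian covariance and noise level are illustrative stand-ins (in the paper the covariance comes from the estimated PSD and the field is two-dimensional):

```python
import numpy as np

# Conditional mean and variance of a stationary random field at grid points,
# given irregular noisy samples. Covariance model and numbers are stand-ins.
rng = np.random.default_rng(4)
xd = np.sort(rng.uniform(0.0, 10.0, 25))          # irregular track positions
yd = np.sin(xd) + 0.05 * rng.standard_normal(25)  # observed anomaly values

def cov(a, b, sig2=1.0, L=1.0):
    return sig2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / L**2)

xg = np.linspace(0.0, 10.0, 101)                  # regular output grid
K = cov(xd, xd) + 0.05**2 * np.eye(xd.size)       # data covariance + noise
w = np.linalg.solve(K, cov(xd, xg))               # (25, 101) weight matrix
y_grid = w.T @ yd                                 # interpolated values
var_grid = 1.0 - np.sum(w * cov(xd, xg), axis=0)  # uncertainty of each value
```

The same conditional-expectation machinery yields downward continuation by evaluating the cross-covariance at a different level rather than at the surface.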

}, isbn = {0148-0227}, doi = {10.1029/JB094iB12p17393}, url = {We discuss the use of smoothing splines (SS) and least squares splines (LSS) in nonparametric regression on geomagnetic data. The distinction between smoothing splines and least squares splines is outlined, and it is suggested that in most cases the smoothing spline is a preferable function estimate. However, when large data sets are involved, the smoothing spline may require a prohibitive amount of computation; the alternative often put forward when moderate or heavy smoothing is desired is the least squares spline. This may not be capable of modeling the data adequately since the smoothness of the resulting function can be controlled only by the number and position of the knots. The computational efficiency of the least squares spline may be retained, and its principal disadvantage overcome, by adding a penalty term in the square of the second derivative to the minimized functional. We call this modified form a penalized least squares spline (denoted by PS throughout this work), and illustrate its use in the removal of secular trends in long observatory records of geomagnetic field components. We may compare the effects of smoothing splines, least squares splines, and penalized least squares splines by treating them as equivalent variable-kernel smoothers. As Silverman has shown, the kernel associated with the smoothing spline is symmetric and is highly localized with small negative sidelobes. The kernel for the least squares spline with the same fit to the data has large oscillatory sidelobes that extend far from the central region; it can be asymmetric even in the middle of the interval. For large numbers of data the penalized least squares spline can achieve essentially identical performance to that of a smoothing spline, but at a greatly reduced computational cost. 
The penalized spline estimation technique has potential widespread applicability in the analysis of geomagnetic and paleomagnetic data. It may be used for the removal of long term trends in data, when either the trend or the residual is of interest.
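A penalized least-squares spline of this flavour is easy to assemble from a B-spline design matrix. The knot count, penalty weight, and test signal below are invented, and a discrete second-difference penalty on the coefficients stands in for the integrated squared second derivative:

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 500, endpoint=False)        # "large" data set
y = np.sin(2.0 * np.pi * x) + 0.1 * x + 0.05 * rng.standard_normal(500)

k = 3                                                 # cubic spline
t = np.concatenate((np.zeros(k), np.linspace(0.0, 1.0, 15), np.ones(k)))
B = BSpline.design_matrix(x, t, k).toarray()          # few knots => cheap basis

D2 = np.diff(np.eye(B.shape[1]), n=2, axis=0)         # roughness penalty
lam = 1e-2
c = np.linalg.solve(B.T @ B + lam * (D2.T @ D2), B.T @ y)
fit = B @ c                                           # penalized LS spline
```

(`BSpline.design_matrix` needs SciPy 1.8 or later; with older versions the basis columns can be built one at a time from unit coefficient vectors.) The cost is governed by the small number of basis functions, not the number of data, which is the computational advantage the abstract describes.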

}, isbn = {0021-9991}, doi = {10.1016/0021-9991(88)90062-9}, url = {A new statistical model is proposed for the geomagnetic secular variation over the past 5 m.y. Unlike previous models, which have concentrated upon particular kinds of paleomagnetic observables, such as VGP or field direction, the new model provides a general probability density function from which the statistical distribution of any set of paleomagnetic measurements can be deduced. The spatial power spectrum of the present-day nondipole field is consistent with a white source near the core-mantle boundary with Gaussian distribution. After a suitable scaling, the spherical harmonic coefficients may be regarded as statistical samples from a single giant Gaussian process; this is our model of the nondipole field. Assuming that this characterization holds for the fields of the past, we can combine it with an arbitrary statistical description of the dipole. We compute the corresponding probability density functions and cumulative distribution functions for declination and inclination that would be observed at any site on the surface of the Earth. Global paleomagnetic data spanning the past 5 m.y. are used to constrain the free parameters of the model, i.e., those giving the dipole part of the field. 
The final model has these properties: (1) with two exceptions, each Gauss coefficient is independently normally distributed with zero mean and standard deviation for the nondipole terms commensurate with a white source at the core surface; (2) the exceptions are the axial dipole g_{1}^{0} and axial quadrupole g_{2}^{0} terms; the axial dipole distribution is bimodal and symmetric, resembling a combination of two normal distributions with centers close to the present-day value and its sign-reversed counterpart; (3) the standard deviations of the nonaxial dipole terms g_{1}^{1} and h_{1}^{1} and of the magnitude of the axial dipole are all about 10\% of the present-day g_{1}^{0} component; and (4) the axial quadrupole reverses sign with the axial dipole and has a mean magnitude of 6\% of its mean magnitude. The advantage of a model specified in terms of the spherical harmonic coefficients is that it is a complete statistical description of the geomagnetic field, capable of simultaneously satisfying many known properties of the field. Predictions about any measured field elements may be made to see if they satisfy the available data.

The standard least squares estimation of the average magnetization of a seamount is generally regarded as unsatisfactory. The reason for its poor performance is, I assert, due to the strong correlation and uneven magnitudes of the unfitted noise component of the magnetic anomaly. Least squares estimation depends for its efficiency and freedom from bias upon the statistical independence and common distribution of the random contamination. The primary contributor to the unfitted field is the part due to nonuniformity of the magnetic material within the seamount. If the magnetic heterogeneities are modeled as a random stationary vector field of known functional form, it is possible to calculate the covariance matrix of the magnetic field noise. Then, to the extent that the statistical description of the magnetic sources is valid, the correlation can be properly accounted for in the least squares procedure by weighting the observations to form an equilibrated set with random components that are independent and identically distributed. The main use of the average magnetization vector is in the determination of the associated paleopole position: I show how the least squares estimates of the magnetization vector and its uncertainty are mapped into a confidence region for the pole position. Application of this theory to a young seamount in the South Pacific is highly satisfactory. The model stationary vector field chosen for the magnetization is one with isotropic directional behavior and a very short length scale of correlation. It is found that the equilibrated residuals are far less correlated than those of the conventional procedure and the misfits are uniformly distributed throughout the equilibrated data. The estimated paleopole position is within a few degrees of the north geographic pole, which is the expected location because the seamount is geologically young. Traditional least squares fitting gave a pole at latitude 60{\textdegree}N. 
The 95\% confidence zone associated with the paleopole position has semiaxes measuring 16{\textdegree} by 13{\textdegree}, and thus the uncertainty is not much larger than those of standard paleomagnetic work.

}, isbn = {0148-0227}, doi = {10.1029/JB093iB04p03105}, url = {The paleomagnetism of Cretaceous Pacific seamounts is reexamined. Herein techniques for nonuniform magnetic modeling are applied to determine paleomagnetic pole positions and their associated confidence limits. Modeling techniques are presented for reconstruction of both uniform and nonuniform components of the seamount magnetization. The uniform component yields an estimate of the paleomagnetic pole position, and the nonuniform component accounts for irregularities in the seamount magnetization. A seminorm minimization approach constructs maximally uniform magnetizations and is used to represent seamount interiors. A statistical modeling approach constructs random nonuniform magnetizations and is used to determine the confidence limits associated with each pole position. Mean paleopoles are calculated for groups of seamounts, including their associated error bounds. The mean paleopole for seven reliably dated Upper Cretaceous seamounts is located close to the position predicted by Pacific-hotspot relative motion. The paleopole for five seamounts with Cretaceous minimum dates is located west of the hotspot-predicted apparent polar wander path and may represent a Lower Cretaceous or Upper Jurassic pole.

}, isbn = {0148-0227}, doi = {10.1029/JB092iB12p12695}, url = {We present a new technique for constructing the narrowest corridor containing all velocity profiles consistent with a finite collection of τ(p) data and their statistical uncertainties. Earlier methods for constructing such bounds treat the confidence interval for each τ datum as a strict interval within which the true value might lie with equal probability, but this interpretation is incompatible with the estimation procedure used on the original travel time observations. The new approach, based upon quadratic programming (QP), shares the advantages of the linear programming (LP) solution: it can invert τ(p) and X(p) data concurrently; it permits the incorporation of constraints on the radial derivative of velocity for spherical earth models; and theoretical results about convergence and optimality can be obtained for the method. We compare P velocity bounds for the core obtained by QP and LP. The models produced by LP predict data values at the ends of the confidence intervals; these values are unlikely according to the proper statistical distribution of errors. For this reason the LP velocity bounds can be wider than those given by QP, which takes better account of the statistics. Sometimes, however, the LP bounds are more restrictive because LP never permits the predictions of the models to lie outside the confidence intervals even though occasional excursions are expected. The QP bounds grow narrower at lower levels of confidence, but the corridors at 95\% and 99.9\% are virtually indistinguishable: The data must be improved substantially to make a significant change in the velocity bounds.

}, isbn = {0148-0227}, doi = {10.1029/JB092iB03p02713}, url = {The inversion of electromagnetic sounding data does not yield a unique solution, but inevitably a single model to interpret the observations is sought. We recommend that this model be as simple, or smooth, as possible, in order to reduce the temptation to overinterpret the data and to eliminate arbitrary discontinuities in simple layered models. To obtain smooth models, the nonlinear forward problem is linearized about a starting model in the usual way, but it is then solved explicitly for the desired model rather than for a model correction. By parameterizing the model in terms of its first or second derivative with depth, the minimum norm solution yields the smoothest possible model. Rather than fitting the experimental data as well as possible (which maximizes the roughness of the model), the smoothest model which fits the data to within an expected tolerance is sought. A practical scheme is developed which optimizes the step size at each iteration and retains the computational efficiency of layered models, resulting in a stable and rapidly convergent algorithm. The inversion of both magnetotelluric and Schlumberger sounding field data, and a joint magnetotelluric-resistivity inversion, demonstrate the method and show it to have practical application.
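The key step, solving directly for the smoothest model rather than for a correction, reduces to one linear solve per trial regularization weight. A schematic sketch with invented matrices (a real run would recompute the Jacobian about each successive model and pick the weight whose misfit matches the expected tolerance):

```python
import numpy as np

# One linearized Occam-style step: minimize mu ||R m||^2 + ||d - J m||^2,
# solving for the model m itself, not for a correction to the previous model.
def occam_step(J, d, mu, R):
    return np.linalg.solve(mu * (R.T @ R) + J.T @ J, J.T @ d)

rng = np.random.default_rng(3)
J = rng.standard_normal((12, 30))      # Jacobian about the current model
d = rng.standard_normal(12)            # data (starting-model terms folded in)
R = np.diff(np.eye(30), axis=0)        # first-difference roughening matrix

# Sweep the regularization weight; Occam keeps the largest mu whose model
# fits the data to the expected tolerance.
misfits = [np.linalg.norm(J @ occam_step(J, d, mu, R) - d)
           for mu in (10.0, 1.0, 0.1)]
```

Larger `mu` gives a smoother model with larger misfit, which is why the sweep is stable: the misfit varies monotonically with the weight.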

}, isbn = {0016-8033}, doi = {10.1190/1.1442303}, url = {The traditional least squares method for modeling seamount magnetism is often unsatisfactory because the models fail to reproduce the observations accurately. We describe an alternative approach permitting a more complex internal structure, guaranteed to generate an external field in close agreement with the observed anomaly. Potential field inverse problems like this one are fundamentally incapable of a unique solution, and some criterion is mandatory for picking a plausible representative from the infinite-dimensional space of models all satisfying the data. Most of the candidates are unacceptable geologically because they contain huge magnetic intensities or rapid variations of magnetization on fine scales. To avoid such undesirable attributes, we construct the simplest type of model: the one closest to a uniform solution as measured by the norm in a specially chosen Hilbert space of magnetization functions found by a procedure called seminorm minimization. Because our solution is the most nearly uniform one we can say with certainty that any other magnetization satisfying the data must be at least as complex as ours. The theory accounts for the complicated shape of seamounts, representing the body by a covering of triangular facets. We show that the special choice of Hilbert space allows the necessary volume integrals to be reduced to surface integrals over the seamount surface, and we present numerical techniques for their evaluation. Exact agreement with the magnetic data cannot be expected because of the error of approximating the shape and because the measured fields contain noise of crustal, ionospheric, and magnetospheric origin. We examine the potential size of the various error terms and find that those caused by approximation of the shape are generally much smaller than the rest. 
The mean magnetization is a vector that can in principle be discovered from exact knowledge of the external field of the seamount; this vector is of primary importance for paleomagnetic work. We study the question of how large the uncertainty in the mean vector may be, based on actual noise, as opposed to exact, data; the uncertainty can be limited only by further assumptions about the internal magnetization. We choose to bound the rms intensity. In an application to a young seamount in the Louisville Ridge chain we find that remarkably little nonuniformity is required to obtain excellent agreement with the observed anomaly while the uniform magnetization gives a poor fit. The paleopole position of ordinary least squares solution lies over 30{\textdegree} away from the geographic north, but the pole derived from our seminorm minimizing model is very near the north pole as it should be. A calculation of the sensitivity of the mean magnetization vector to the location of the magnetic observations shows that the data on the perimeter of the survey were given the greatest weight and suggests that enlargement of the survey area might further improve the reliability of the results.

}, isbn = {8755-1209}, doi = {10.1029/RG025i001p00017}, url = {Summary. We reduce the problem of constructing a smooth, 1-D, monotonically increasing velocity profile consistent with discrete, inexact τ(p) and X(p) data to a quadratic programming problem with linear inequality constraints. For a finite-dimensional realization of the problem it is possible to find a smooth velocity profile consistent with the data whenever such a profile exists. We introduce an unusual functional measure of roughness, equivalent to the second central moment or {\textquoteleft}Variance{\textquoteright} of the derivative of depth with respect to velocity for smooth profiles, and we prove that its minimal value is unique. In our experience, solutions minimizing this functional are very smooth in the sense of the two-norm of the second derivative and can be constructed inexpensively by solving one quadratic programming problem. Still smoother models (in more traditional measures) may be generated iteratively with additional quadratic programs. All the resulting models satisfy the τ(p) and X(p) data and reproduce travel-time data remarkably well, although sometimes τ(p) data alone are insufficient to ensure arrivals at large X; then an X(p) datum must be included.

}, keywords = {Inverse theory, regularization, seismic refraction}, isbn = {1365-246X}, doi = {10.1111/j.1365-246X.1987.tb05205.x}, url = {http://dx.doi.org/10.1111/j.1365-246X.1987.tb05205.x}, author = {Stark, Philip B. and Parker, Robert L.} } @article {27959, title = {Strict bounds on seismic velocity in the spherical earth}, journal = {Journal of Geophysical Research-Solid Earth and Planets}, volume = {91}, number = {B14}, year = {1986}, note = {n/a}, month = {Dec}, pages = {13892-13902}, type = {Article}, abstract = {We address the inverse problem of finding the smallest envelope containing all velocity profiles consistent with a finite set of imprecise τ(p) data from a spherical earth. Traditionally, the problem has been attacked after mapping the data relations into relations on an equivalent flat earth. Of the two contemporary direct methods for finding bounds on velocities in the flat earth consistent with uncertain τ(p) data, a nonlinear (NL) approach descended from the Herglotz-Wiechert inversion and a linear programming (LP) approach, only NL has been used to solve the spherical earth problem. On the basis of the finite collection of τ(p) measurements alone, NL produces an envelope that is too narrow: there are numerous physically acceptable models that satisfy the data and violate the NL bounds, primarily because the NL method requires continuous functions as bounds on τ(p) and thus data must be fabricated between measured values by some sort of interpolation. We use the alternative LP approach, which does not require interpolation, to place optimal bounds on the velocity in the core. The resulting velocity corridor is disappointingly wide, and we therefore seek reasonable physical assumptions about the earth to reduce the range of permissible models. We argue from thermodynamic relations that P wave velocity decreases with distance from the earth{\textquoteright}s center within the outer core and quite probably within the inner core and lower mantle. 
We also show that the second derivative of velocity with respect to radius is probably not positive in the core. The first radial derivative constraint is readily incorporated into LP. The second derivative constraint is nonlinear and cannot be implemented exactly with LP; however, geometrical arguments enable us to apply a weak form of the constraint without any additional computation. LP inversions of core τ(p) data using the first radial derivative constraint give new, extremely tight bounds on the P wave velocity in the core. The weak second derivative constraint improves them slightly.

}, isbn = {0148-0227}, doi = {10.1029/JB091iB14p13892}, url = {Summary. Exact spherical harmonic expansions are given for calculating the gravitational and magnetic fields associated with certain uniform solids of revolution. The figures are those made by rotating a conic section about one of its principal axes. The coefficients in the expansions can be computed accurately and efficiently and this approach leads to a very satisfactory method for calculating the fields of geological bodies with approximate circular symmetry about a vertical axis. A complete theory of convergence is given for the expansions. Somewhat unexpectedly, the sphere of convergence is determined by the location of a number of equivalent point or line sources that lie within the body or on its edges.

}, isbn = {1365-246X}, doi = {10.1111/j.1365-246X.1985.tb05115.x}, url = {http://dx.doi.org/10.1111/j.1365-246X.1985.tb05115.x}, author = {Parker, Robert L. and Shure, Loren} } @article {27955, title = {A preliminary harmonic spline model from Magsat data}, journal = {Journal of Geophysical Research-Solid Earth and Planets}, volume = {90}, number = {NB13}, year = {1985}, note = {n/a}, pages = {11505-11512}, type = {Article}, abstract = {We present a preliminary main field model for 1980 derived from a carefully selected subset of Magsat vector measurements using the method of harmonic splines. This model (PHS (80) for preliminary harmonic splines) is the smoothest model (in the sense that the rms radial field at the core surface is minimum) consistent with the measurements (with an rms misfit of 10 nT to account for crustal and external fields as well as noise in the measurement procedure). Therefore PHS (80) is more suitable for studies of the core than models derived with the traditional least squares approach (e.g., GSFC (9/80)). We compare characteristics of the harmonic spline spectrum, topology of the core field and especially the null-flux curves (loci where the radial field is zero) and the flux through patches bounded by such curves. PHS (80) is less complex than GSFC (9/80) and is therefore more representative of that part of the core field that the data constrain.

}, isbn = {0148-0227}, doi = {10.1029/JB090iB13p11505}, url = {The electric potential due to a single point electrode at the surface of a layered conducting medium is calculated by means of a linear combination of the potentials associated with a set of two-layer systems. This new representation is called the bilayer expansion for the Green{\textquoteright}s function. It enables the forward problem of resistivity sounding to be solved very efficiently, even for complicated profiles. Also, the bilayer expansion facilitates the solution of the resistivity inverse problem: the coefficients in the expansion are linearly related to apparent resistivity as it is measured and they are readily mapped into parameters for a model. Specifically, I consider models comprising uniformly conducting layers of equal thickness; for a given finite data set a quadratic program can be used to find the best-fitting model in this class for any specified thickness. As the thickness is reduced, models of this kind can approximate arbitrary profiles with unlimited accuracy. If there is a model that satisfies the data well, there are other models equally good or better whose variation takes place in an infinitesimally thin zone near the surface, below which there is a perfectly conducting region. This extraordinary class of solutions underscores the serious ambiguity in the interpretation of apparent resistivity data. It is evident that strong constraints from outside the electrical data set must be applied if reliable solutions are to be discovered. Previous work seems to have given a somewhat overly optimistic impression of the resolving abilities of this kind of data. I consider briefly a regularization technique designed to maximize the smoothness of models found with the bilayer inversion.

}, isbn = {0016-8033}, doi = {10.1190/1.1441630}, url = {Three ocean bottom magnetotelluric data sets from sites on the Pacific plate are reinterpreted. The initial analysis found a correlation between the lithospheric age and the depth to a conductive zone beneath each site. That work also suggested that the resistivity increased below the conductor. This analysis, which includes new methods for constructing one-dimensional conductivity models, shows that the postulated increase in resistivity is not demanded by the data. It also reveals an unexpectedly large nonuniqueness inherent in the interpretation of these data. The previously reported trends with lithospheric age still exist, but they are not as strong as initially believed. Finally, it is shown rigorously that the different age sites are distinct in that no one-dimensional model can account for all three data sets.

}, isbn = {0148-0227}, doi = {10.1029/JB089iB03p01829}, url = {The magnetotelluric inverse problem is reviewed, addressing the following mathematical questions: (a) Existence of solutions: A satisfactory theory is now available to determine whether or not a given finite collection of response data is consistent with any one-dimensional conductivity profile. (b) Uniqueness: With practical data, consisting of a finite set of imprecise observations, infinitely many solutions exist if one does. (c) Construction: Several numerically stable procedures have been given which it can be proved will construct a conductivity profile in accord with incomplete data, whenever a solution exists. (d) Inference: No sound mathematical theory has yet been developed enabling us to draw firm, geophysically useful conclusions about the complete class of satisfactory models. Examples illustrating these ideas are given, based in the main on the COPROD data series.

}, isbn = {0046-5763}, doi = {10.1007/bf01453993}, url = {The construction of smooth potential field models has many geophysical applications. The recently developed method of harmonic splines produces magnetic field models at the core surface which are maximally smooth in the sense of minimization of certain special norms. They do not exhibit the highly oscillatory fields produced by models derived from a least-squares analysis with a truncated spherical harmonic series. Modeling the data by harmonic splines requires solving a square system of equations with dimension equal to the number of data. Too many data have been collected since the 1960s for this method to be practical. We produce almost optimally smooth models by the following method. Since each spline function for the optimal model corresponds to an observation location (called a knot), we select a subset of these splines with knots well-distributed around the Earth{\textquoteright}s surface. In this depleted basis we then find the smoothest model subject to an appropriate fit to all of the data. This reduces the computational problem to one comparable to least-squares analysis while nearly preserving the optimality inherent in the original harmonic spline models.

}, isbn = {0094-8276}, doi = {10.1029/GL009i008p00812}, url = {Summary. The exponential attenuation of fluctuating electromagnetic fields suggests that practical magnetotelluric measurements may be uninformative about the electrical conductivity at sufficiently great depths. This notion can be made precise for one-dimensional systems. Below a critical depth the conductivity function may be chosen freely without affecting the consistency of the model with the data. This depth is readily computable with quadratic or linear programming techniques and does not rely upon linearization of the equations.

}, isbn = {1365-246X}, doi = {10.1111/j.1365-246X.1982.tb06967.x}, url = {http://dx.doi.org/10.1111/j.1365-246X.1982.tb06967.x}, author = {Parker, Robert L.} } @article {27948, title = {Harmonic splines for geomagnetic modelling}, journal = {Physics of the Earth and Planetary Interiors}, volume = {28}, number = {3}, year = {1982}, note = {n/a}, pages = {215-229}, type = {Article}, abstract = {Which features of a geomagnetic field model on the surface of the core are really necessary in order to fit, within observational error, the field observations at and above the Earth{\textquoteright}s surface? To approach this question, we define {\textquoteleft}roughness{\textquoteright} in various ways as a norm on an appropriate Hilbert space of field models which is small when the field is smooth on the core surface. Then, we calculate the model with least norm (the smoothest model) which fits the data, sources outside the core being treated as noise. Sample calculations illustrate the effects of noise, of the choice of norm and of an uneven distribution of observing stations.

}, isbn = {0031-9201}, doi = {10.1016/0031-9201(82)90003-6}, url = {Harrison and Carle [this issue] and others have examined very long profiles of the magnetic field and have calculated one-dimensional power spectra. In these they expect to see, but do not find, a minimum in power at intermediate wavelengths, between 65 and 150 km. Conventional one-dimensional models of the field predict very little power in this band, which lies between the spectral peaks arising from sources in the crust and the core. Mantle sources or high-intensity, long-wavelength magnetizations have been proposed to account for the observations. An alternative, more plausible explanation is that one-dimensional spectra of two-dimensional fields contain contributions from wavenumbers in the perpendicular (i.e., nonsampled) direction. Unless the seafloor spreading anomalies are perfectly lineated at right angles to the profile, some low-wavenumber energy must be attributed to this effect; we propose that such directional aliasing is a major factor in the power spectra. To support this idea, we discuss theoretical models and analyze a large-scale marine survey.

}, isbn = {0148-0227}, doi = {10.1029/JB086iB12p11600}, url = {A previous paper (Parker, 1980) sets out a theory for deciding whether solutions exist to the inverse problem of electromagnetic induction and outlines methods for constructing conductivity profiles when their existence has been demonstrated. The present paper provides practical algorithms to perform the necessary calculations stably and efficiently, concentrating exclusively on the case of imprecise observations. The matter of existence is treated by finding the best fitting solution in a least squares sense; then the size of the misfit is tested statistically to determine the probability that the value would be met or exceeded by chance. We obtain the optimal solution by solving a constrained least squares problem linear in the spectral function of the electric field differential equation. The spectral function is converted into a conductivity profile by transforming its partial fraction representation into a continued fraction, using a stable algorithm due to Rutishauser. In addition to optimal models, which always consist of delta functions, two other types of model are examined. One is composed of a finite stack of uniform layers, constructed so that the product of conductivity and thickness squared is the same in each layer. The numerical techniques developed for the optimal model serve with only minor alteration to find solutions in this class. Models of the second kind are smooth. A special form of the response is chosen so that the kernel functions of the Gel{\textquoteright}fand-Levitan integral equation are degenerate, thus allowing very stable and numerically efficient solution. Unlike previously published methods for finding conductivity models, these algorithms can provide solutions with misfits arbitrarily close to the smallest one possible. The methods are applied to magnetotelluric observations made by Larsen in Hawaii.

}, isbn = {0148-0227}, doi = {10.1029/JB086iB10p09574}, url = {Calibration and use of the diffusion porometer are imprecise because of imperfect understanding of steady diffusion through a porous material. The case of a flat plate with uniformly distributed right circular cylindrical holes is approximated by diffusion through two finite right circular cylinders: one representing the pore, and one representing the area into which vapor diffuses {\textemdash} its size determined by the mutual interference with neighboring pores. An exact three-dimensional solution is presented. It is found that for large pore spacing the empirical result of Holcomb and Cooke is excellent, but for close pore spacing some error occurs. A method for calculation of true calibration plate resistance is described as well as a method for estimating pore size for an unknown plate or membrane.

}, isbn = {0002-1571}, doi = {10.1016/0002-1571(81)90088-1}, url = {A theory is described for the inversion of electromagnetic response data associated with one-dimensional electrically conducting media. The data are assumed to be in the form of a collection of (possibly imprecise) complex admittances determined at a finite number of frequencies. We first solve the forward problem for conductivity models in a space of functions large enough to include delta functions. Necessary and sufficient conditions are derived for the existence of solutions to the inverse problem in this space. The approach relies on a representation of real-part positive functions due to Cauer and an application of Sabatier{\textquoteright}s theory of constrained linear inversion. We find that delta-function models are fundamental to the problem. When existence of a solution has been established for a given set of data, actual conductivities fitting the measurements may be explicitly constructed for various special classes of functions. For a solution in delta functions or homogeneous layers a development as a continued fraction is the essential idea; smoothly varying models are found with an adaptation of Weidelt{\textquoteright}s analytic solution.

}, isbn = {0148-0227}, doi = {10.1029/JB085iB08p04421}, url = {We describe a method for slant stacking seismic records at a number of ranges to synthesize the τ{\textendash}p curve. The seismograms do not have to be evenly spaced in range, and the correct three-dimensional point-source geometry is retained throughout. The problem is posed as a linear inverse problem in a form that permits the construction of a special solution in a very efficient manner.

}, isbn = {0094-8276}, doi = {10.1029/GL007i012p01073}, url = {Confidence tables and other parameters are provided for a misfit statistic based upon the sum of the absolute discrepancies between a model and observation. This one-norm misfit is compatible with the application of linear programming in geophysical modeling, a technique of considerable value when inequality constraints are to be applied in the mathematical model.

}, isbn = {0148-0227}, doi = {10.1029/JB085iB08p04429}, url = {Summary. Time series of unit vectors occur in geophysics as palaeomagnetic poles or poles of relative motion in plate tectonics, and it is often required to trace a smooth curve through the individual points. A simple method is given for interpolating such time series based on cubic splines. The curve obtained is smooth (e.g. possesses continuous curvature) and does not depend on the choice of coordinate axes. An extension with the same desirable properties is given for the case where the given data are inexact.

}, isbn = {1365-246X}, doi = {10.1111/j.1365-246X.1979.tb04802.x}, url = {http://dx.doi.org/10.1111/j.1365-246X.1979.tb04802.x}, author = {Parker, Robert L. and Denham, Charles R.} } @article {27938, title = {Interpretation of borehole magnetometer data}, journal = {Journal of Geophysical Research}, volume = {84}, number = {NB10}, year = {1979}, note = {n/a}, pages = {5467-5479}, type = {Article}, abstract = {Samples recovered from Deep-Sea Drilling Project (DSDP) holes into the basaltic layer strongly suggest that a good model of the magnetization would be a spatially random one. To interpret future magnetometer observations made in the holes, we develop a theory relating the observed fields to the magnetization autocorrelation tensor, a function completely describing the second-order statistics of the medium. We examine special models with long- and short-range order and conclude that certain properties of the medium are highly desirable in making interpretation possible. Among these are that the autocorrelation tensor be independent of direction in a horizontal plane and that the direction of magnetization be uniform. Furthermore, useful interpretation is feasible only if three components of the magnetic field are measured, and it is preferable (but not essential) that the magnetometer is absolutely orientated about a vertical axis. We show that only in special circumstances (e.g., horizontal layering of the medium) does the magnetic field correlate directly with the average magnetization in a region surrounding the magnetometer. We analyze the natural remanent magnetization data from leg 37 of the DSDP to obtain partial information on the spatial statistics. There is no significant order vertically on scales longer than 10 m, but in two of the five holes there is correlation in the range 0.3{\textendash}2 m. In three holes there is evidence of fairly good uniformity of direction, a highly desirable property for interpretational purposes. 
Unfortunately, drill hole samples give us no information about the behavior of the autocorrelation tensor as a function of horizontal distance.

}, isbn = {0148-0227}, doi = {10.1029/JB084iB10p05467}, url = {A geometric formulation of the seismic travel time problem is given, based upon the use of slowness as an independent variable. Many of the difficulties in the conventional treatment (e.g., singular kernels) are thereby avoided. Furthermore, it is shown that the inverse problem possesses an inherently linear formulation. In this formalism we are able to provide extremal solutions giving upper and lower depth bounds using linear programming. This approach has been compared with two well-known nonlinear extremal inversions. We find our technique to be easier to implement, and it often generates superior results.

}, isbn = {0148-0227}, doi = {10.1029/JB084iB07p03615}, url = {Summary. The formalism of Backus \& Gilbert is applied to the problems of upward and downward continuation of harmonic functions. We first treat downward continuation of a two-dimensional field to a level surface everywhere below the observation locations; the calculation of resolving widths and solution estimates is a straightforward application of Backus{\textendash}Gilbert theory. The extension to the downward continuation of a three-dimensional field uses a delta criterion giving resolving areas rather than widths. A feature not encountered in conventional Backus{\textendash}Gilbert problems is the requirement of an additional constraint to guarantee the existence of the resolution integrals. Finally, we consider upward continuation of a two-dimensional field to a level above all observations. We find that solution estimates must be weighted averages of the field not only on this level, but also on a line passing between the observations and sources. Weighting on the lower line may be traded off against resolution on the upper level.

}, isbn = {1365-246X}, doi = {10.1111/j.1365-246X.1979.tb03779.x}, url = {http://dx.doi.org/10.1111/j.1365-246X.1979.tb03779.x}, author = {Huestis, Stephen P. and Parker, Robert L.} } @article {27934, title = {Isostasy in Australia and evolution of the compensation mechanism}, journal = {Science}, volume = {199}, number = {4330}, year = {1978}, note = {n/a}, pages = {773-775}, type = {Article}, abstract = {A linear transfer function analysis has been applied to gravity and topographic data from Australia to calculate the isostatic response function of Dorman and Lewis. The Australian response function is considerably different from that calculated for the United States. The differences can be explained on the basis of an apparent evolution of the isostatic compensation mechanism in which viscoelastic creep occurs in the lithosphere and relaxes the initial long-wavelength elastic stresses.

}, isbn = {0036-8075}, doi = {10.1126/science.199.4330.773}, url = {Summary. A striking feature of the day-side response of the Moon to periodic fluctuations in the solar wind is the rapid rise, and subsequent fall, in the amplitude of the transfer function as the inducing field frequency increases. This behaviour can be characterized by the amplitude values at the two frequencies 24 and 40 mHz. Before the response of a conductivity model representing the Moon can be calculated at a given frequency, the parameters (ν, θ) (where ν is the solar wind speed and θ is the angle between the solar wind velocity and the magnetic field propagation direction) have to be specified. By applying some results due to Parker (1972) to the above two data points, we have determined constraints on the parameter space (ν, θ). In particular, we determine the region of the (ν, θ) space in which conductivity models may be found that satisfy our data pair. Outside this region, there are no conductivity models satisfying the data pair, and hence many (ν, θ) values are inconsistent with the original data and the model assumptions.

}, isbn = {1365-246X}, doi = {10.1111/j.1365-246X.1978.tb04241.x}, url = {http://dx.doi.org/10.1111/j.1365-246X.1978.tb04241.x}, author = {Hobbs, U. A. and Parker, R.L.} } @article {27929, title = {Bounding the thickness of the oceanic magnetized layer}, journal = {Journal of Geophysical Research}, volume = {82}, number = {33}, year = {1977}, note = {n/a}, month = {1977}, pages = {5293-5303}, abstract = {We present a theory for placing a lower bound on the thickness of the oceanic magnetized layer using magnetic anomaly observations and estimates of the intensity of magnetization; the theory makes only a minimum number of assumptions regarding the spatial distribution of the magnetization. The principle of the method is based upon the fact that thin layers imply high magnetizations. We show how to calculate the source distribution that has minimum intensity yet fits the data and is confined to a given thickness layer; because the minimum intensity must be a monotonically decreasing function of layer thickness, it follows that an upper bound on the intensity allows us to obtain a lower limit on the thickness. The practical calculations are performed by using linear programming. The method is applied to two sets of near-bottom magnetic profiles, one on the Gal{\'a}pagos Spreading Center at 86{\textdegree}W and the other set on the Pacific-Antarctic Ridge at 51{\textdegree}S. In the first area we conclude that the magnetic layer must be at least 450 m thick, and in the other a crossing of the Jaramillo event indicates that the magnetic layer is probably more than 1000 m in thickness.

}, isbn = {0148-0227}, doi = {10.1029/JB082i033p05293}, url = {Summary. From consideration of the higher order terms, it is shown that the magnetotelluric response is Fr{\'e}chet differentiable with respect to conductivity; this result remains valid for discontinuous profiles, which is not so in the case of the corresponding free-oscillation problem for the elastic earth. The remainder term in the Fr{\'e}chet formula is shown to be O(|δσ|^{2}), and a numerical estimate is made of the bounding constant for a restricted class of conductivity models.

Summary. Using the techniques of linear and quadratic programming, it can be shown that the isostatic response function for the continental United States, computed by Lewis \& Dorman (1970), is incompatible with any local compensation model that involves only negative density contrasts beneath topographic loads. We interpret the need for positive densities as indicating that compensation is regional rather than local. The regional compensation model that we investigate treats the outer shell of the Earth as a thin elastic plate, floating on the surface of a liquid. The response of such a model can be inverted to yield the absolute density gradient in the plate, provided the flexural rigidity of the plate and the density contrast between mantle and topography are specified. If only positive density gradients are allowed, such a regional model fits the United States response data provided the flexural rigidity of the plate lies between 10^{21} and 10^{22} N m. The fit of the model is insensitive to the mantle/load density contrast, but certain bounds on the density structure can be established if the model is assumed correct. In particular, the maximum density increase within the plate at depths greater than 34 km must not exceed 470 kg m^{-3}; this can be regarded as an upper bound on the density contrast at the Mohorovicic discontinuity. The permitted values of the flexural rigidity correspond to plate thicknesses in the range 5{\textendash}10 km, yet deformations at depths greater than 20 km are indicated by other geophysical data. We conclude that the plate cannot be perfectly elastic; its effective elastic moduli must be much smaller than the seismically determined values. Estimates of the stress differences produced in the earth by topographic loads that use the elastic plate model together with seismically determined elastic parameters will be too large by a factor of four or more.

}, isbn = {1365-246X}, doi = {10.1111/j.1365-246X.1977.tb06927.x}, url = {http://dx.doi.org/10.1111/j.1365-246X.1977.tb06927.x}, author = {Banks, R. J. and Parker, R.L. and Huestis, S. P.} } @article {27930, title = {Linear inference and underparameterized models}, journal = {Reviews of Geophysics}, volume = {15}, number = {4}, year = {1977}, note = {n/a}, pages = {446-456}, type = {Review}, abstract = {A version of Backus{\textquoteright}s theory of linear inference is developed by using a new finite-dimensional space. This approach affords a clear geometric interpretation of the essential role played by a priori model smoothing assumptions and also facilitates the construction of a theory for the treatment of random data errors that is quite different from the treatment of Backus. When the unknown parameters form a (necessarily incomplete) description of the model, it is possible to formulate a special smoothing assumption that is particularly appropriate; in practical examples this strategy often leads to tighter bounds on the model uncertainty than those obtained with previous assumptions. An analysis of the numerical aspects of the problem forces one to the conclusion that the theory is not competitive numerically with conventional least squares parameter estimation, unless one of the large submatrices in the problem possesses a simple inverse. An example of this kind is discussed briefly.

}, isbn = {8755-1209}, doi = {10.1029/RG015i004p00446}, url = {A study of magnetic daily variations shows their vertical component to be enhanced at the west coast of Australia, just as previous work had shown it to be reduced at the east coast. The anomalous contributions appear most likely to be due to induction by the onshore horizontal component of the daily variation field. The Australian data may also indicate regional differences in the structure of the continent.

}, isbn = {1365-246X}, doi = {10.1111/j.1365-246X.1976.tb00304.x}, url = {http://dx.doi.org/10.1111/j.1365-246X.1976.tb00304.x}, author = {Lilley, F. E. M. and Parker, R.L.} } @article {27924, title = {An analysis of near-bottom magnetic anomalies: Sea-floor spreading and the magnetized layer}, journal = {Geophysical Journal of the Royal Astronomical Society}, volume = {43}, number = {2}, year = {1975}, note = {n/a}, pages = {387-424}, publisher = {Blackwell Publishing Ltd}, abstract = {Near-bottom magnetic data over six oceanic ridge segments in the East Pacific are inverted, giving magnetization solutions with alternate positive and negative bands which correspond to geomagnetic field reversals. We estimate the average half-width of the crustal formation zone to be 2{\textendash}3 km, based on the transition widths between these bands. The solutions show a narrow region of high magnetization centred directly over the centre of spreading, superimposed on a more gradual decrease of magnetization amplitudes with age. Both features are attributed to weathering of highly magnetized pillow lavas. We demonstrate that the short wavelength (\<3 km) anomalies are largely due to topography. Distances to reversal boundaries give distance vs age curves for each ridge which show that spreading changes occur as sudden accelerations typically separated by several million years of very constant motion. These rate changes are probably accompanied by shifts in the locations of poles of relative motion, which are necessary in a system of more than two interacting plates. Palaeomagnetic data and reversal boundary locations from near-bottom and surface data are combined to give spreading half-rates and a refined time scale for the past 6 My. Widespread spreading rate variations occurred at 2{\textendash}3 MyBP and about 5 MyBP, possibly as a response to large scale changes in the plate pattern.

}, isbn = {1365-246X}, doi = {10.1111/j.1365-246X.1975.tb00641.x}, url = {http://dx.doi.org/10.1111/j.1365-246X.1975.tb00641.x}, author = {Klitgord, K. D. and Mudie, J. D. and Huestis, S. P. and Parker, R.L.} } @article {27926, title = {The theory of ideal bodies for gravity interpretation}, journal = {Geophysical Journal of the Royal Astronomical Society}, volume = {42}, number = {2}, year = {1975}, note = {n/a}, pages = {315-334}, publisher = {Blackwell Publishing Ltd}, abstract = {Ambiguity in gravity interpretation is inevitable because of the fundamental incompleteness of real observations; it is, however, possible to provide rigorous limits on possible solutions even with incomplete data. In this paper a systematic theory is developed for finding such bounds, including an upper bound on depth of burial; the bounds are discovered by constructing the unique body achieving the extreme parameter, e.g. depth; such a body is called the {\textquoteright}ideal{\textquoteright} one associated with the given data. Ideal bodies can also be constructed for bounding density, thickness of layer and lateral extent. General properties of ideal bodies are derived and numerical methods for modest numbers of observations are discussed. Some artificial examples, where the buried system is exactly known, are given and it is shown how relatively good bounds can be reached with only a few measurements. A Bouguer anomaly from the Swiss Alps is then considered and it is concluded that the mountain roots are unusually shallow there.

}, isbn = {1365-246X}, doi = {10.1111/j.1365-246X.1975.tb05864.x}, url = {http://dx.doi.org/10.1111/j.1365-246X.1975.tb05864.x}, author = {Parker, Robert L.} } @article {27921, title = {Best bounds on density and depth from gravity data}, journal = {Geophysics}, volume = {39}, number = {5}, year = {1974}, note = {n/a}, pages = {644-649}, type = {Article}, abstract = {Gravity data cannot usually be inverted to yield unique structures from incomplete data; however, there is a smallest density compatible with the data or, if the density is known, a deepest depth of burial. A general theory is derived which gives the greatest lower bound on density or the least upper bound on depth. These bounds are discovered by consideration of a class of {\textquotedblleft}ideal{\textquotedblright} bodies which achieve the extreme values of depth or density. The theory is illustrated with several examples which are solved by analytic methods. New maximum depth rules derived by this theory are, unlike some earlier rules of this type, optimal for the data they treat.

}, isbn = {0016-8033}, doi = {10.1190/1.1440454}, url = {The inversion of magnetic anomalies in terms of an irregular layer of magnetized material is studied, and an efficient procedure for constructing solutions is developed. Even when magnetic orientation and layer thickness are known, the solution is not unique because of the existence of a magnetization (called the magnetic annihilator) that produces no observable magnetic field. We consider an example of near-bottom marine data and discuss methods for overcoming the problem of nonuniqueness.

}, isbn = {0148-0227}, doi = {10.1029/JB079i011p01587}, url = {A very fast technique involving Fourier transformation can find the gravity or magnetic anomaly of an irregular crustal model as observed on a plane above the material. It is shown how the method can be used to invert the magnetic field data to obtain a magnetization model, but the model so obtained is not unique. The normal restrictions placed on the magnetization models lead to a family of solutions with one degree of freedom.

}, isbn = {0148-0227}, doi = {10.1029/JB079i014p02014}, url = {The relative angular velocity vectors of the plates covering the earth form a three-dimensional closed polyhedron, for which we propose the name geohedron. All forms of plate evolution produce simple changes in the geohedron. Corresponding bodies exist for relative angular accelerations and an attempt is made to determine the value of the relative accelerations of the plates forming a single triple junction when they are governed by kinematic effects alone, but the resulting values do not agree with the available observations.

}, isbn = {0012-821X}, doi = {10.1016/0012-821x(74)90137-x}, url = {It is shown that the procedure of summing (stacking) together a number of surface magnetic anomaly records does not significantly improve the ultimate resolution of short events in the geomagnetic polarity sequence. The limit of resolution (shown elsewhere to be about 3 km for surface data) is determined by the presence of signals not originating in the crust. Single profiles will not usually attain this optimal resolution, however, because of geological irregularities in the magnetic layer. Therefore, stacking offers improvement at longer wavelengths, but not all unwanted effects are properly suppressed: for example, (a) bathymetric features can cause coherent signals in closely spaced surveys and stacking will amplify them; (b) the stacking of phase-shifted anomalies causes smoothing rather than sharpening of transitions.

}, keywords = {1550 Spatial variations attributed to seafloor spreading}, isbn = {1944-8007}, doi = {10.1029/GL001i006p00259}, url = {http://dx.doi.org/10.1029/GL001i006p00259}, author = {Parker, Robert L.} } @article {27916, title = {Modelling geomagnetic variations in or near an ocean using a generalized image technique}, journal = {Geophysical Journal of the Royal Astronomical Society}, volume = {32}, number = {3}, year = {1973}, note = {n/a}, pages = {325-338}, type = {Article}, abstract = {A generalized image technique is described for modelling electromagnetic induction in two-dimensional systems, consisting of a thin (Price-type) ocean overlying a perfect conductor in the mantle. The method constructs a Green{\textquoteright}s function for currents in the ocean by conformally mapping the perfect conductor boundary into a straight line. Examples are given that show the effects to be expected at an ocean-continent boundary with isotherms rising under the ocean, and at a mid-ocean rise where the high temperatures are believed to be quite shallow.

}, isbn = {0016-8009}, doi = {10.1111/j.1365-246X.1973.tb05834.x}, url = {It is shown how a series of Fourier transforms can be used to calculate the magnetic or gravitational anomaly caused by an uneven, non-uniform layer of material. Modern methods for finding Fourier transforms numerically are very fast and make this approach attractive in situations where large quantities of observations are available.

}, isbn = {0016-8009}, doi = {10.1111/j.1365-246X.1973.tb06513.x}, url = {McKenzie{\textquoteright}s model of crustal creation at the ocean ridges [1,2] and its derivatives [3,4] predict such features as the topography and high heat flow of the ridges. In spite of this success there are some unsatisfactory aspects of the model; for example, the arbitrary temperature distribution in the intrusive zone gives rise to infinite heat generation, and the lithospheric thickness is a free parameter not determined by the physics. We offer here a simple refinement of McKenzie{\textquoteright}s model that overcomes these difficulties. The essential difference stems from the inclusion of terms in the boundary conditions to account for the evolution of latent heat in places where the plate is growing. We first describe the physical basis of the model.

}, doi = {10.1038/physci242137a0}, url = {When only a few observations are available as data for an inverse problem, it is proposed that the best way to use them is to obtain bounds on various functionals of the structure. To do this, the model is found that has the smallest (or largest) value of the functional. In this way, for example, equations are derived for finding the model value that is exceeded somewhere by all structures satisfying the data, and thus this value must be exceeded in the Earth itself. The same techniques can be used to derive conditions for the existence of a solution, when a certain data set is given; this is an important problem in non-linear inverse theory. Three examples are given, including the non-linear problem of electrical conductivity in the mantle. There, one- and two-data problems are solved and, by means of the existence theory, self-consistency criteria are defined for amplitude and phase measurements and for amplitude measurements at two different frequencies.

}, isbn = {0016-8009}, doi = {10.1111/j.1365-246X.1972.tb02203.x}, url = {A new method for continuing two-dimensional potential data upward from an uneven track is developed with special emphasis on solving a particular practical problem, that of magnetic data taken near the bottom of the ocean. The method is based on the use of the Schwarz-Christoffel transformation, which maps the original, irregular track into a horizontal straight line. It has been found to be very fast computationally and to suffer none of the restrictions found in some earlier two-dimensional algorithms.

}, isbn = {0016-8033}, doi = {10.1190/1.1440289}, url = {Using a technique due to Backus, the average direction of magnetization of a seamount can be found from sea-surface magnetic observations without assuming anything about the internal structure of the body. Other advantages of this approach are that the required integrals can be made very simple without approximation and that uncertainties can be estimated.

}, isbn = {0016-8009}, doi = {10.1111/j.1365-246X.1971.tb02181.x}, url = {The general method of Backus and Gilbert is applied to the inverse problem of electrical conductivity in the mantle. The data of Banks are reworked to obtain a new model and its uncertainties. It is concluded that the levelling-off in conductivity previously obtained is only barely resolvable with current observations. A great improvement in the accuracy of the model could be expected if more precise data were available in the same frequency range.

}, isbn = {0016-8009}, doi = {10.1111/j.1365-246X.1971.tb03587.x}, url = {In the North Atlantic it is difficult to correlate single magnetic profiles with the spreading ocean floor magnetic models. Within the area of intensive surveys at 45{\textdegree} N, it is possible to average the observations in the direction of the trend of the magnetic anomalies. The profile of averaged anomalies for all data between 45{\textdegree} N and 45.5{\textdegree} N correlates well with a magnetic model spreading (with respect to the ridge axes) westwards at 1.28 cm/y and eastwards at 1.10 cm/y, if the trend of the anomalies is assumed to be 015{\textdegree} East of North.

}, isbn = {0008-4077}, doi = {10.1139/e71-080}, url = {Contours of d = h cosec θ are presented for the world{\textquoteright}s oceans, where h is the depth of the ocean and θ the latitude. This quantity is the distance between the ocean surface and the ocean floor in the direction of the axis of rotation of the earth. The inverse is proportional to 2Ω/d = f/h, where Ω is the rate of rotation of the earth and f = 2Ω sin θ is the Coriolis parameter. The quantity f/h may be interpreted as the potential vorticity of the ocean in the absence of motion relative to the rotating earth.

}, doi = {10.1016/0011-7471(70)90044-6}, url = {Electromagnetic induction in a thin strip is investigated to provide further understanding of the geomagnetic effects at an ocean edge. A solution to the equations is found by an analytic technique. It is shown that even in the case where the integrated conductivity is finite, infinite field strengths occur at the edges accompanied by rapid changes in phase.

}, isbn = {1365-246X}, doi = {10.1111/j.1365-246X.1967.tb06268.x}, url = {http://dx.doi.org/10.1111/j.1365-246X.1967.tb06268.x}, author = {Parker, Robert L.} } @article {27905, title = {The North Pacific: an example of tectonics on a sphere}, journal = {Nature}, volume = {216}, number = {5122}, year = {1967}, note = {n/a}, pages = {1276-1280}, type = {Article}, abstract = {Individual aseismic areas move as rigid plates on the surface of a sphere. Application of the Mercator projection to slip vectors shows that the paving stone theory of world tectonics is correct and applies to about a quarter of the Earth{\textquoteright}s surface.

}, isbn = {0028-0836}, doi = {10.1038/2161276a0}, url = {A solid conducting sphere or cylinder begins to rotate suddenly from rest in an initially uniform magnetic field. An analytic solution for this problem is obtained and it is shown that lines of magnetic force reconnect to form closed loops during the transient phase. The general behaviour of the system is investigated for all conductivities. Numerical examples are given and approximate expressions derived in the limiting cases of large conductivity and time.

}, doi = {10.1098/rspa.1966.0078}, url = {