  • Citation: Gubbins, D. (2025). Information carried by different magnetic observations: A review. Earth Planet. Phys., 9(3), 1–12. DOI: 10.26464/epp2025035
REVIEW   |  SOLID EARTH: GEOELECTROMAGNETICS    Open Access    

Information carried by different magnetic observations: A review

  • Corresponding author:

    David Gubbins, D.Gubbins@leeds.ac.uk

  • Publication History:

    • First Published online: April 02, 2025
    • Accepted article online: April 02, 2025
    • Article accepted: February 18, 2025
    • Article received: January 15, 2025
    Macau Science Satellite-1 (MSS-1) is in a different orbit from other magnetic satellites, designed to focus on the South Atlantic Anomaly. I review the limitations of past surveys that used different magnetic components (e.g. declination only, total intensity only). I also review the effect of measurement location (e.g. satellite altitude, marine data in the oceans).
  • The Macau satellites differ from their predecessors in their orbits: MSS-1 (Macau Science Satellite-1) is in a low-inclination orbit and the planned MSS-2 satellites will be in highly elliptical orbits. This paper reviews the fundamental advantages and disadvantages of the different possible magnetic measurements: the component (declination, intensity, etc.) and the location (satellite, ground, etc.). When planning a survey the choice of component is the "What?" question; the choice of location the "Where?" question. Results from potential theory inform the choice of measurement and data analysis. For example, knowing the vertical component of the magnetic field provides a solution for the full magnetic field everywhere in the potential region. This is the familiar Neumann problem. In reality this ideal dataset is never available. In the past we were restricted to declination data only, then direction only, then total intensity only. There have also been large swathes of Earth’s surface with no measurements at all (MSS-1 is restricted to latitudes below 41°). These incomplete datasets throw up new questions for potential theory, questions that have some intriguing answers. When only declination is known, uniqueness is provided by horizontal intensity measurements on a single line joining the dip-poles. When only directions are involved, uniqueness is provided by a single intensity measurement, at least in principle. Paleomagnetic intensities can help. When only total intensity is known, as was largely the case in the early satellite era, uniqueness is provided by a precise location of the magnetic equator. Holes in the data distribution are a familiar problem in geophysical studies. All magnetic measurements sample, to a greater or lesser extent, the potential field everywhere. There is a trade-off between measurements close to the source, good for small targets and high resolution, and the broader sample of a distant measurement. The sampling of a measurement is given by the appropriate Green’s function of the Laplacian, which determines both the resolution and scope of the measurement. For example, radial and horizontal measurements near the Earth’s surface give a weighted average of the radial component over a patch of the core surface beneath the measurement site about 30° in radius. The patch is smaller when the measurement surface is closer to the source, for example at the ground rather than at satellite altitude. Holes in the data distribution do not correspond to similar holes at the source surface; the price paid is in resolution of the source. I argue that, in the past, we have been too reluctant to take advantage of incomplete and apparently hopeless datasets.

  • Satellites orbit the Earth to monitor the magnetic field and its changes. This helps us understand the geology of magnetised rocks and the fluid motions in the Earth’s iron core where the magnetic field is generated, and aids the search for minerals. Such satellites have been operated, ever since Sputnik, by Russia, the USA, Europe, and now Macau, China. This special issue contains results from the first year of operation of the Macau Science Satellite-1 (MSS-1) and studies preparatory to the imminent launch of two more, MSS-2. They differ from previous satellites in their orbits, which until now have been circular and over the poles. MSS-1 is in a low-latitude orbit, so it spends a lot of time flying through a developing region of low magnetic field (the South Atlantic Anomaly) that is creating problems with communications. The MSS-2 satellites will be in highly elliptical orbits, so they will explore a large volume of space from close to Earth’s surface to well out in the region called the magnetosphere. This paper is a review of the choices of what measurement to make (compass angle, dip, magnetic force) and what each tells us, and of the choices of site (orbit, altitude, ship or aircraft). This is done within the framework of a branch of mathematics called potential theory, which originated in the 18th century but has had some recent additions in connection with the analysis of measurements of the Earth’s magnetic field.

    The Macau Science Satellites differ from previous magnetic satellites in their orbits, which were and are near-circular and polar. MSS-1 is in a low-latitude orbit in order to concentrate measurements on the South Atlantic Anomaly (SAA), which is centered on about 30°S, 50°W. There are other benefits to the orbit, as discussed elsewhere in this Special Issue. The two MSS-2 satellites are planned to fly in highly elliptical orbits to sample a larger volume of the magnetosphere and to pass closer to Earth’s surface at perigee than is possible with a satellite in a circular orbit. This is expected to give finer resolution of the part of the geomagnetic field arising from the Earth’s lithosphere. Otherwise the MSS satellites are very similar to those of the Swarm constellation: similar instruments, star cameras, and configuration.

    When asked to write this introduction to the Special Issue, I thought it appropriate to review the different qualities of the various measurements that can be made in geomagnetism: the measured component of the geomagnetic vector and the location where it is measured. The choice of component and location is determined by the purpose of the measurement. I shall focus on the mapping of the internal geomagnetic field, the part originating in the dynamo operating in the Earth’s core and the part originating in magnetised rocks in the Earth’s crust and uppermost mantle — the lithospheric field. The fascinating study of the external field, the part originating in the Sun and magnetosphere, is only mentioned insofar as it must be separated from the raw data.

    The incompleteness of measurements has, in the past, been a serious limitation. Take for example the very first observations made, which were of declination, the angle between true north and the orientation of the compass needle (originally a piece of lodestone floating on water). Even with a world map of declination, first given by Edmund Halley in about 1700, there remain huge gaps in our knowledge of the other components of the magnetic vector. It is tempting now to claim that satellites give us complete global coverage and exquisite accuracy of all the magnetic components, but our dataset is still incomplete and, more significantly, our interpretation remains limited. We also want to understand phenomena, such as the SAA, over long periods of time that stretch back centuries, to times when only limited observations were available. In this paper I review the fundamental limitations that remain for a finite, incomplete database and hope it will guide some specific applications.

    Geomagnetic measurements are made in regions that are largely free of electric currents, so the magnetic field can be described in terms of a scalar potential. The geomagnetic field, {\boldsymbol{B}} , satisfies both the solenoidal condition,

    \nabla\cdot{\boldsymbol{B}}=0, (1)

    which results from the absence of magnetic monopoles, and

    \nabla\times{\boldsymbol{B}}=0, (2)

    which results from the absence of electric currents. In 1839 Gauss published a method of analysis for the geomagnetic field that we have been using, with some modifications, ever since. He knew that, like the Earth’s gravity, the geomagnetic field could be represented as the gradient of a scalar potential,

    {\boldsymbol{B}}=-\nabla V_M . (3)

    Moreover the potential satisfies Laplace’s equation

    \nabla^2 V_M=0 (4)

    everywhere away from electric currents or other sources of magnetism. This is a great simplification: three components of a vector are replaced by a single scalar function of position that is further restricted by (4).

    There are many ways to solve Laplace’s equation; geomagnetism typically uses an expansion in spherical harmonics for the magnetic field of internal origin:

    V_M(r,\theta,\phi) = a \sum\limits_{n=1}^\infty \sum\limits_{m=0}^n \left(\frac{a}{r}\right)^{n+1} P_n^m(\cos\,{\theta}) (g_n^m \cos\,{m\phi} +h_n^m \sin\,{m\phi}), (5)

    where the scalar coefficients \left\{g_n^m,h_n^m\right\} are called the Gauss or geomagnetic coefficients (now that we study the magnetic fields of other planets using this formalism, I prefer to reserve the term geomagnetic coefficients for the Earth’s magnetic field). The factor a at the front, Earth’s radius standardised at a =6371.2 km, is introduced to give the geomagnetic coefficients units of magnetic induction, usually in nT. Note the term in n=0 is removed from the sum in order to satisfy the solenoidal condition (1). Another sum, similar to that in (5), describes the potential field of external origin. The radial factor is inverted, (r/a)^n , for a potential that is finite at the origin rather than at infinity.

    Components of the magnetic field in spherical coordinates are obtained by differentiating (5):

    \begin{split} &B_r(r,\theta,\phi) = \sum\limits_{n=1}^\infty \sum\limits_{m=0}^n (n+1) \left(\frac{a}{r}\right)^{n+2} P_n^m(\cos\,{\theta}) (g_n^m \cos\,{m\phi}+h_n^m\sin\,{m\phi}) ,\\ &B_\theta (r,\theta,\phi) = -\sum\limits_{n=1}^\infty \sum\limits_{m=0}^n \left(\frac{a}{r}\right)^{n+2} \frac {{\mathrm{d}} {P_n^m(\cos\,{\theta})}}{{\mathrm{d}} {\theta}} (g_n^m \cos\,{m\phi}+h_n^m\sin\,{m\phi}), \\ &B_\phi(r,\theta,\phi) = \sum\limits_{n=1}^\infty \sum\limits_{m=0}^n \left(\frac{a}{r}\right)^{n+2} \frac{m}{\sin\,{\theta}} P_n^m(\cos\,{\theta}) (g_n^m \sin\,{m\phi}-h_n^m\cos\,{m\phi}). \end{split} (6)
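    As a concrete illustration of Equations (5) and (6), the following minimal Python sketch (my own, not from any published code) evaluates B_r from a truncated set of Gauss coefficients. It assumes Schmidt quasi-normalised harmonics, the convention behind published g_n^m, h_n^m values; the function names and the example coefficient are made up for illustration.

```python
# Minimal sketch: evaluate B_r from Equations (5)-(6) for a truncated set
# of Gauss coefficients, assuming Schmidt quasi-normalised harmonics.
import numpy as np
from math import factorial
from scipy.special import lpmv

a = 6371.2  # reference radius, km

def schmidt_pnm(n, m, x):
    """Schmidt quasi-normalised P_n^m(x); scipy's lpmv includes the
    Condon-Shortley phase, which the (-1)**m factor removes."""
    norm = np.sqrt((2.0 if m > 0 else 1.0) * factorial(n - m) / factorial(n + m))
    return (-1) ** m * norm * lpmv(m, n, x)

def B_r(r, theta, phi, g, h):
    """Radial field from Eq. (6): sum of (n+1)(a/r)^(n+2) P_n^m terms.
    g, h are dicts keyed by (n, m), values in nT."""
    total = 0.0
    for (n, m), gnm in g.items():
        hnm = h.get((n, m), 0.0)
        total += ((n + 1) * (a / r) ** (n + 2)
                  * schmidt_pnm(n, m, np.cos(theta))
                  * (gnm * np.cos(m * phi) + hnm * np.sin(m * phi)))
    return total

# Check: an axial dipole g_1^0 = -30000 nT gives B_r = 2 g_1^0 cos(theta)
# at r = a, i.e. -60000 nT at the north geographic pole (downward field).
g = {(1, 0): -30000.0}
print(B_r(a, 0.0, 0.0, g, {}))   # -60000.0
```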

    These equations show that we can determine all 3 scalar components of the magnetic field from the geomagnetic coefficients. Theorems from potential theory show that Laplace’s equation can be solved everywhere in the potential region given appropriate boundary conditions on a closed surface. The relevant one here is the so-called Neumann boundary condition, the vertical gradient of the potential. The better-known Dirichlet condition, where the potential is specified on the boundary, is not relevant because we do not measure potential. This ideal situation is approximated by satellites in polar orbit for many Earth rotations, thus providing dense sampling on a closed surface. The vertical component provides an approximation to the complete solution; the other components provide additional data that can be used to reduce noise and fill gaps. In fact B_\theta alone also provides uniqueness. These results follow from the first two of Equations (6) together with the orthogonality of the spherical harmonics and of their \theta -derivatives. The ideal situation provides a guide as to what to measure, but in practice we have to deal with uncertainties and gaps in the data coverage.

    Although the Neumann boundary condition guarantees a solution when the vertical component is available on the surface, this is not the case when there are sources both inside and outside the sphere. Gauss showed that knowing both vertical and horizontal components allows separation of the magnetic fields associated with each source (Olsen et al., 2010). This gave a rigorous demonstration that the geomagnetic field originates inside the Earth. It can also be used to separate the core and lithospheric fields from the external fields. Unfortunately satellites travel above and through some external sources, so the method is only partially successful. Separation is beyond the scope of this review, except to point out that it requires all components of the magnetic vector.

    Geomagnetism uses a total of 7 components, the ones actually measured. These are the basic 3, North (X) , East (Y) , Down (Z) , and 4 more that are also commonly measured, Declination (D) , Inclination (I) , Horizontal Intensity (H) , and Total Intensity (F) . X,Y,Z are related to the spherical components by {{\boldsymbol{B}}} = (B_r, B_\theta,B_\phi) = (-Z,-X,Y) .

    The relationships between the various components are illustrated in Figure 1 and the equations in the Box.
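    For reference, the standard relations between these components, of the kind collected in such a box, are

    H=(X^{\,2}+Y^{\,2})^{1/2},\qquad F=(X^{\,2}+Y^{\,2}+Z^{\,2})^{1/2},\qquad \tan D=Y/X,\qquad \tan I=Z/H.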

    Figure  1.  Components of the geomagnetic field.

    The initial stage of an interpretation of global magnetic measurements is usually to make a model in terms of geomagnetic coefficients. The model allows calculation of any of the components at any location, although the result retains the limitations and inaccuracies inherent in the model, which in turn contains the limitations and inaccuracies of the original dataset.

    One fundamental result from potential theory is relevant to calculating the magnetic field from a set of geomagnetic coefficients: downward and upward continuation, or using a radius different from that of the points of measurement. It is required for combining data taken at different heights, for example aircraft and ships or satellite and ground level. The downward continuation factor is (a/r)^{n+1} in Equation (5); it is highly dependent on the spherical harmonic degree n . This leads to Hadamard’s instability, where the sum fails to converge at the boundary of the potential region. Strictly speaking, the series (5) converges non-uniformly, which means that convergence depends on the location on the boundary. Imagine a single point source located on the Earth’s surface: the series converges everywhere away from the point source but not at its location.

    In practice, errors intervene and the instability can render the model useless long before the boundary of the potential region, whether ground level or core surface, is reached. The high-degree harmonics, which are the least well determined because they are small-scale, are increased far more than the low-degree harmonics by downward continuation [see Equations (6)]. Their errors also increase, eventually dominating the entire sum. The problem is alleviated by truncating the harmonic series at a point where the noise level remains acceptable. For example, the International Geomagnetic Reference Field (IGRF), which is supposed to represent the "main field", or roughly speaking the field originating in the Earth’s core, is truncated at a break in the spectrum where the lithospheric field (regarded as "noise") dominates (Figure 2). The latest IGRF is truncated at degree n=13 . The IGRF is made for mapping purposes, to provide a simple way to compute, for example, local magnetic deviation in a mobile phone.
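    A short numerical illustration (my own, using the standard radii a = 6371.2 km and c = 3480 km) shows how quickly the continuation factor grows with degree:

```python
# Downward-continuation factor (a/c)^(n+1) from Eq. (5), Earth's surface
# down to the core-mantle boundary; amplification grows rapidly with n.
a, c = 6371.2, 3480.0   # km
for n in (1, 8, 13, 20):
    print(n, round((a / c) ** (n + 1), 1))
# n=1 -> ~3.4, n=8 -> ~230, n=13 -> ~4800, n=20 -> ~330000: small-scale
# terms and their errors are magnified thousands of times at the core.
```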

    Figure  2.  The geomagnetic, or Lowes, spectrum of the geomagnetic field at satellite altitude using MAGSAT data. The n=1 , dipole, term is high, and the core field decreases on a line determined by the geometrical factor (c/a)^n , where c is the core radius. It meets the lithospheric contribution around degree n=15 , which decreases at a different slope, corresponding to satellite altitude, until it meets the level defined by the noise, a combination of instrument error and incomplete geographical coverage, around degree 40. The join is often used as the truncation point to separate the main field from the lithospheric field. IGRF truncation points have been increasing as measurements have become more accurate and complete. Early epochs started at degree 8, a lower degree because of the higher noise levels. The figure shows the energy at degree 8 is exceptionally low, which helped the decision to make the cut there. From Merrill et al. (1996).

    Truncation is justified for mapping purposes but I dislike it for scientific work because it arbitrarily limits the core field, the field generated by the core dynamo, to harmonics up to degree 13, which is manifestly untrue. In time series analysis a sharp cut-off in the frequency domain is not good practice for filtering because it causes oscillations in the time domain, from Gibbs' phenomenon. A similar, even more interesting, effect occurs on a sphere. Resolution in space can be estimated by the representation of a spike, or delta-function, in terms of the limited number of harmonics left by the truncation. A perfect delta function requires all harmonics, but truncation causes oscillations, as in Figure 3. The number of oscillations is determined by the truncation point, but in every case the largest oscillation is 180^\circ away from the spike. Thus on a sphere the problem is exacerbated: any estimate on the surface is an average of the true field everywhere, including a large contribution around the antipodal point.

    Figure  3.  The averaging function, or representation of a delta function, on a sphere represented by spherical harmonic series truncated at degree 4 (left) and 6 (right), plotted as a function of angular distance (in degrees) from the desired location of the delta function. The delta function is at zero on the x-axis. The averaging function is axisymmetric and therefore a function only of the distance from the delta function. Note the oscillations and the exceptionally large lobe at 180^\circ . Like Gibbs' phenomenon, raising the truncation point reduces the rms difference between the spike and its estimate, but does not reduce the amplitude of the overshoot. After Whaler and Gubbins (1981).

    A better approach is to include higher harmonics and taper them gradually to zero, as is done in time series analysis (Whaler and Gubbins, 1981). Shure et al. (1982) derived a formalism that takes advantage of the noise by deriving the model that converges at the smallest radius while still fitting the data. A modification of their approach has been used to derive a series of models of the core field in historical times by Bloxham et al. (1989), discussed further in Section 2. These methods, applied to the core field, use the downward continuation factors to taper the higher harmonics, thus adding the additional information that the magnetic field originates at the core surface and that the intervening mantle can be treated as an insulator.

    Creating a model of the geomagnetic field in practice involves finding the geomagnetic coefficients from measured data. This is a standard problem in inverse theory. The components X,Y,Z are linearly related to the geomagnetic coefficients, giving a relatively simple linear inverse problem. The data, values of X, Y, Z at different locations, are arranged in a data vector {\boldsymbol{d}} of length D and the geomagnetic coefficients in a model vector of length M :

    {\boldsymbol{m}}=(g_1^0,g_1^1,h_1^1,g_2^0,g_2^1,h_2^1, g_2^2,\dots). (7)

    The relationship between {\boldsymbol{d}} and {\boldsymbol{m}} is then

    {\boldsymbol{d}}={\boldsymbol A} {\boldsymbol{m}} + {\boldsymbol{e}}, (8)

    where {\boldsymbol{e}} is a D -length vector of errors, one associated with each datum, and {\boldsymbol A} is a D -by- M matrix derived from the formulae in (6). The other measured components are not linearly related and are generally included by linearising about some starting model to form Equation (8) [see, for example, Gubbins and Bloxham (1985)]. The model is further improved by iteration. The starting model might be something simple, an axial dipole for example, or it might be desirable to stay as close as possible to an established model. Historical models, which are poorly constrained by the data, are usually iterated away from some more recent model.

    Equation (8) is usually solved by least squares, minimising the sum of two terms: the squares of the errors, weighted by the inverse of their covariance matrix {\boldsymbol{C}}_e (usually diagonal, if the data are independent measurements), and a quadratic function of the model coefficients weighted by the inverse of another, "damping", matrix {\boldsymbol {C}}_m . The least squares solution becomes

    {\boldsymbol{m}}= ({\boldsymbol A}^{\mathrm{T}}{\boldsymbol C}^{-1}_e{\boldsymbol A} + {\boldsymbol C}^{-1}_m)^{-1} {\boldsymbol A}^{\mathrm{T}}{\boldsymbol C}^{-1}_e {\boldsymbol{d}} . (9)

    The theory provides a covariance matrix for the model parameters

    {\boldsymbol C}=({\boldsymbol A}^{\mathrm{T}}{\boldsymbol C}^{-1}_e{\boldsymbol A} + {\boldsymbol C}^{-1}_m)^{-1}, (10)

    which is not used much, probably because authors place little faith in the damping matrix {\boldsymbol C}_m . Inverse theory also provides a resolution matrix, {\boldsymbol R} , which relates the true model {\boldsymbol{m}}_T to the estimate:

    {\boldsymbol{m}} = {\boldsymbol R}{\boldsymbol{m}}_T, (11)
    {\boldsymbol R} = {\boldsymbol I}-{\boldsymbol C}{\boldsymbol C}^{-1}_m, (12)

    where {\boldsymbol I} is the unit matrix. Note that the model solution depends on the measurements {\boldsymbol{d}} , their error estimates {\boldsymbol C}_e , their component and location through {\boldsymbol A} , and the damping through {\boldsymbol C}_m . The covariance and resolution matrices, by contrast, do not depend on the actual measurements {\boldsymbol{d}} . The covariances of the model parameters are just a mapping of the data errors and a priori uncertainties. Of these, the resolution matrix is the most useful way to judge the effect of damping and errors on the independence of each model parameter.

    {\boldsymbol R} is a square M\times M matrix but is not invertible (otherwise we could recover the true model). One row of {\boldsymbol R} gives the corresponding element of {\boldsymbol{m}} as a linear combination of all the elements of {\boldsymbol{m}}_T . Ideally {\boldsymbol R} is the unit matrix: each row has a single non-zero entry on the diagonal, and each element of {\boldsymbol{m}} is equal to the corresponding element of the true model. In reality the off-diagonal elements are non-zero and the diagonal elements are less than one, showing the extent to which each estimate is a mix of all the true elements. This lack of resolution arises from the model weighting and data errors; more fundamentally it arises because the measurements are not of the geomagnetic coefficients directly but of the combination that makes up the datum. Further details of inverse theory are given in Gubbins (2004) pp 118f and Chapter 12. An example of {\boldsymbol R} for different geomagnetic datasets is given in Figure 4.
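    To make Equations (8)–(12) concrete, here is a minimal numerical sketch (mine, with made-up numbers; not the authors' code) of the damped least-squares estimate, its covariance, and the resolution matrix, including the trace used later as an effective number of degrees of freedom:

```python
# Damped least squares of Eqs (8)-(12) on a synthetic linear problem.
import numpy as np

rng = np.random.default_rng(0)
D, M = 50, 10                      # number of data and model parameters
A = rng.standard_normal((D, M))    # stand-in for the matrix built from Eq. (6)
m_true = rng.standard_normal(M)
sigma = 0.1
d = A @ m_true + sigma * rng.standard_normal(D)   # Eq. (8)

Ce_inv = np.eye(D) / sigma**2      # inverse data covariance (independent errors)
Cm_inv = np.eye(M)                 # inverse damping matrix; tightening it
                                   # smooths the model but biases it toward zero

C = np.linalg.inv(A.T @ Ce_inv @ A + Cm_inv)      # Eq. (10)
m_hat = C @ (A.T @ Ce_inv @ d)                    # Eq. (9)
R = np.eye(M) - C @ Cm_inv                        # Eq. (12)

print("degrees of freedom (trace of R):", np.trace(R))
```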

    Figure  4.  Resolution matrix for the determination of a field model for epoch 1715 derived by Bloxham et al. (1989). Numbers indicate spherical harmonic degree n . The order of the geomagnetic coefficients is by n then m , as in (7). Note the ramping for each set of coefficients with the same degree, n . This arises from the sensitivity to variations in longitude, which is greatest for harmonics with the highest order m . Only the upper triangle is shown because {\boldsymbol R} is symmetric.

    In the next Section I describe the relationships between magnetic components (the data), the geomagnetic coefficients (model elements), and the magnetic field derived from the model, in the chronological order of the invention of instruments devised to measure them. This is the "What?" question when deciding what instrument to use for a particular purpose. In the following Section I discuss the effect of measurement location, the altitude of a satellite, depth of a marine tow, height of an aircraft, and lateral extent of a ground or marine survey. This is the "Where?" question when designing a survey. The review concludes with a short discussion of the future of geomagnetic surveying, and what we have learnt from the past.

    The historical geomagnetic dataset has provided a usable record of the global geomagnetic field since about AD1600. Before that, the measurements can only be described as single-site. Records in Paris, London, and China go back a little further. The data set evolves over time with the invention of different instruments and measurement methods. This section follows the historical order of declination (compass), inclination (inclinometer), relative intensity (oscillation period of a suspended magnet), and absolute total intensity (Gauss' method and the proton magnetometer).

    All axisymmetric fields have zero declination and therefore cannot be distinguished by declination data alone. (Note this only applies to a potential field: a non-zero east component of an axisymmetric field would have the same value at all longitudes, implying a circular field line and hence an electric current.) Declination depends on the ratio of east to north components, Y/X (see box), and Y has a factor m multiplying the third of Equations (6). Thus neither D nor Y samples the axisymmetric field. Declination defines the horizontal direction of a magnetic field line and is therefore not sensitive to magnetic anomalies ahead of or behind the direction of the compass needle; it is most sensitive to anomalies on either side. For this reason, and because the dominant axial dipole has magnetic field aligned approximately north−south, D contains more information on geomagnetic coefficients with large m for any given n because they vary the most rapidly in longitude. The third of Equations (6) implies non-axisymmetric fields with m\ne 0 could be found uniquely from a perfect Y dataset because of the orthogonality of the spherical harmonics; the formula for D is nonlinear and no equivalent proof is available to my knowledge.

    Just because declination alone does not provide a complete solution for the geomagnetic field does not make the measurements useless. Edmund Halley surveyed declination in the Atlantic Ocean and made a contour map. This might have provided a means for navigators to measure their longitude were secular variation not so rapid as to require new surveys every few years. Declination maps also provide measures of westward drift and more complex changes of the magnetic field, as well as the existence of the magnetic poles.

    Figure 4 shows the resolution matrix for the determination of geomagnetic coefficients for epoch 1715 from Bloxham et al. (1989). It is based largely on Halley’s data with a small number of inclinations. The axial dipole term g_1^0 is fixed at a suitable guess in order to make the result finite (otherwise a minimum norm solution is simply zero). The diagonal elements are large down to degree 5, representing good resolution, although the off-diagonal elements are significant. Beyond that, diagonal elements for the axisymmetric harmonics (m=0) fall significantly but increase with increasing m . This is a result of a dataset dominated by declination: harmonics with large m vary rapidly in longitude, to which declination is most sensitive.

    The trace of the resolution matrix, the sum of its diagonal elements, is often regarded as the number of degrees of freedom, or number of independent elements in the model. The trace of the resolution matrix for the 1715 model in Figure 4 is 54. It was produced by regularising at the core−mantle boundary, since the object was to image the core field. An equivalent model based on a truncated spherical harmonic series at degree 6 would have 48 coefficients, so the 1715 model is similar in its information content to a truncated model of degree 6. The damping of the 1715 model was chosen as a compromise between fitting the data to within its error estimates and a model with a magnetic field that is smooth at the core−mantle boundary. It is much more difficult to justify the choice of a hard truncation point.

    Georg Hartmann was the first to notice the compass needle dipping as well as pointing approximately north; he reported it in a letter to Duke Albert of Prussia in 1544. Unfortunately the letter was only found and published in 1831, by which time Robert Norman had invented the inclinometer (in 1581) and many measurements had been made. The inclinometer is reported as a troublesome instrument, probably because of the sideways force on the pivot. Its main use was in searches for the geomagnetic dip poles. Declination can, in principle, be used to locate the poles, as all contours of constant declination converge there. The pole is also the place where the compass does not work, however, so a precise search using declination would be largely futile. The inclinometer, by contrast, obtains a precise measurement of the dip and therefore of proximity to the pole. James Ross used an inclinometer on his polar expeditions, as have other explorers.

    Halley did not measure inclination in his survey of the Atlantic Ocean but fortunately Louis Feuillée, a French scientist, did. His voyage, from 1707 to 1711, took him from France around Cape Horn to Peru. These measurements, together with those made in London and Paris, contribute to the dataset for the 1715 model described above. Declinations dominate the database of 2636 measurements, of which only 128 are of inclination. However, the contribution from inclination may be greater than its small number might suggest.

    Consider the idealised case of perfect declination information. We separate the magnetic field vector into radial and horizontal components

    {\boldsymbol{B}}=B_r\hat{{\boldsymbol{r}}} + {\boldsymbol{B}}_{\rm h}. (13)

    Declination is the direction of the horizontal field {\boldsymbol{B}}_{\rm h} on the spherical surface r=a . The radial component of the current-free condition \nabla\times{\boldsymbol{B}}=0 can be written as

    \nabla_{\rm h}\times{\boldsymbol{B}}_{\rm h} =0, (14)

    where \nabla_{\rm h} is the horizontal gradient operator in spherical coordinates,

    \nabla_{\rm h} = \left(0;\frac {\partial {}}{\partial {\theta}};\frac{1}{\sin{\theta}}\frac {\partial {}}{\partial {\phi}}\right). (15)

    Let {\boldsymbol{B}}'_{\rm h} be a horizontal vector that is the gradient of a potential V'_M and also has the direction given by the declination. The true horizontal field may then be written as \alpha {\boldsymbol{B}}'_{\rm h} for some scalar function \alpha . Equation (14) applies to this field, so substituting in and using a well-known vector identity gives

    \nabla_{\rm h} \alpha \times {\boldsymbol{B}}'_{\rm h} =0. (16)

    Therefore \nabla_{\rm h} \alpha is parallel to {\boldsymbol{B}}'_{\rm h} and level lines of \alpha are the same as contours of declination. It follows that \alpha is only required at one point on each contour of D : in other words, on any line joining the dip-poles. Only a small number of inclinations may therefore be needed to drastically improve the 1715 model: Louis Feuillée’s voyage of 1707−1711 is a pretty good approximation to the line joining the poles, given the scope of the known world at the time!

    Directions continued to dominate the data available in the 250 years between Robert Norman’s invention of the inclinometer and C. F. Gauss' measurement of absolute intensity. Data were limited mainly to the oceans and the main trading routes. The main improvement during this time was in navigation, with the invention of the chronometer. It is interesting to note here that the main source of error in early satellite measurements was also the navigation.

    Global models with no absolute intensity must be made by fixing one value, usually that of g_1^0 extrapolated back from younger models which do contain measurements of absolute intensity. The extrapolation of dipole intensity is usually taken to be a straight line with the same average slope as later determinations [e.g. Jackson et al. (2000)]. Gubbins et al. (2006) used archeomagnetic intensity determinations and claimed the slope may have been dramatically lower prior to 1830, but this is not confirmed by later work (Suttie et al., 2011). This raises the question of whether just one measurement of intensity, F , is sufficient to provide uniqueness for a model based otherwise on perfect directional data. The answer is very surprising. In two dimensions it is easy to prove, using complex variable theory, that the number of intensity measurements required is one less than the number of dip-poles (places where the magnetic field is vertical) (Proctor and Gubbins, 1990). Proof of the same result in three dimensions was found by Hulot et al. (1997). For the Earth at present, which has just 2 dip-poles, one intensity measurement is enough.

    The problem of finding a model from directions and a single intensity is nonlinear but homogeneous, in the sense that if two different models, {\boldsymbol{B}}_1 and {\boldsymbol{B}}_2 , have the same direction but different intensities, then any linear combination of them, a{\boldsymbol{B}}_1 + b{\boldsymbol{B}}_2 , will also have the same direction. It follows that, having found {\boldsymbol{B}}_1 , there may be another model very close by that satisfies the data. It can be found by linearisation, so it should be possible to map out the whole family of models. Proctor and Gubbins (1990) tried this with a simple axisymmetric model that was known to have more than one solution. They found that a very accurate calculation was needed before a second solution emerged: expansion in spherical harmonics out to degree 65, far higher than those of the simple candidates (maximum degrees 2 and 3).

    The first effort to measure relative horizontal intensity was by timing the period of a magnet suspended horizontally and set to oscillate. The period depends on H (it is inversely proportional to \sqrt{H} ); the measurement allowed comparison of H at different places. Alexander von Humboldt used oscillations of a vertical needle to measure F in his voyages around the Americas. Sometimes magnets were returned to Paris for comparison with a standard, but there were inevitably problems with unknown changes in the magnetic moment of the magnets when they were transported. The measurement is relative because it cannot be related back to the fundamental units of mass, length, and time. Relative horizontal intensity measurements have made little contribution to geomagnetism but they do raise an interesting point in the context of global mapping. They are essentially ratios of intensity at different places, whereas the angles D and I are ratios of different components at the same place, and so contribute similar information to a global model. Measurements of absolute intensity are far more valuable.

    C. F. Gauss devised the first method of measuring absolute intensity in 1832 and shortly afterwards established a global network of observatories, encouraged by Alexander von Humboldt and facilitated by Edward Sabine and Humphrey Lloyd in what is known by historians as the Magnetic Crusade, probably the first scientific endeavour requiring substantial government support. Gauss provided the theory and instrumentation, von Humboldt the influence, Sabine, a soldier/scientist in Wellington’s army, the organisation and assistance of the British Army, and physicist Lloyd the training of recruits to make the measurements. There was an interesting debate as to the merits of absolute versus relative measurements, with physicists Gauss and Lloyd favouring absolute measurements and winning the argument. Respectable global coverage of absolute intensity was achieved by various expeditions throughout the 19th century, notably the Challenger expedition to the Antarctic in 1875.

    The invention of the proton magnetometer produced a major shift in the nature of geomagnetic datasets. It is an absolute and robust instrument capable of being towed behind a ship. It provided the evidence for magnetic stripes on the ocean floor and the primary evidence for plate tectonic motions. It, and its successors the alkali vapor and Overhauser magnetometers, are used in satellites. (The proton magnetometer is less suitable for a satellite because it takes a relatively long time to produce a measurement.) Consequently, datasets from about 1950 until 1989 are dominated by measurements of intensity.

    The earliest satellites measured only total intensity because it was not possible to measure the attitude of the craft with anything like the accuracy required to match that of the magnetometer. Technology gradually improved from 1958 to 1971, as shown in Table 1. A recording device (tape recorder) was needed before complete geographical coverage was achieved; otherwise the data were limited to line-of-sight to a recording station. Position was the dominant source of error at first. These early satellites are reviewed by Cain (1971), who reports on the various international initiatives that helped promote funding for them (the International Geophysical Year, World Magnetic Survey, and later the Years of the Quiet Sun). Of relevance here is his comment "at that time more thought was given to the problem of acquiring the observations than to their use in defining a model of the geomagnetic field". An understandable sentiment, but one wonders what would have happened if Backus' nonuniqueness result (Backus, 1970), discussed below, had been known before planning began.

    Table  1.  The 9 satellites making total intensity measurements from 1958–1971, redrawn from Cain (1971). The OGO satellites, combined as POGO (Polar Orbiting Geophysical Observatories) were specifically designed to monitor the geomagnetic field with a view to establishing a global model.
    Spacecraft | Inclination (°) | Altitude (km) | Duration | Coverage | Error^1 (nT)
    Sputnik 3 | 65 | 440–600 | May–Jun 1958 | USSR | 100
    Vanguard 3 | 33 | 510–3750 | Sep–Dec 1959 | Near ground stations | 10
    Cosmos 26 | 49 | 270–403 | Mar 1964 | Whole orbit^2 | –
    Cosmos 49 | 50 | 261–488 | Oct–Nov 1964 | Whole orbit | 22
    1964-83C | 90 | 1040–1098 | Dec 1964–Jun 1965 | Near ground stations | 22
    OGO 2 | 87 | 413–1510 | Nov 1965–Sep 1967 | Whole orbit | 10
    OGO 4 | 86 | 412–908 | Jul 1967–Jan 1969 | Whole orbit | 10
    OGO 6 | 82 | 397–1098 | Jun 1969–Aug 1970 | Whole orbit | 10
    Cosmos 321 | 71 | 270–403 | Jan–Mar 1970 | Whole orbit | –
    Notes: ^1 Combined error; ^2 "Whole orbit" does not cover the whole Earth but only up to the latitudes determined by the inclination of the satellite.

    The POGO (Polar Orbiting Geophysical Observatory) series of satellites made total intensity measurements in polar orbit down to a perigee of 410 km from 1965–1971. While directional data are not critical to marine and other survey measurements, satellites were to be the main providers of data for main field modelling. This prompted work on the uniqueness and information content of intensity data with little or no directional information. Early main field models based primarily on intensity data looked good and there was optimism that a uniqueness theorem would be forthcoming, but it soon became clear that, although the intensities were fitted well and agreed with other intensity measurements, the scalar components ( X,Y,Z ) did not agree so well. Hurwitz and Knapp (1974) and others performed simple tests using synthetic sets of field model coefficients, extracting only the total intensities, and inverting using the same procedure as for intensity datasets. The scalar components derived from this model could then be compared with the originals. Typical results are shown in Figure 5 from Stern et al. (1980), who used a field model derived from vector data from the later satellite MAGSAT. This error map is similar to those found earlier (Hurwitz and Knapp, 1974; Lowes, 1975; Stern and Bredekamp, 1975).

    Figure  5.  Differences between a main field model based on vector data and one based on only the total intensities F from the same model (Stern et al., 1980). The vertical component is shown. Similar results were found from other synthetic models. The largest errors are focussed on the magnetic equator, not the geographic equator, and are much larger than expected. Reproduced from Stern et al. (1980).

    George Backus had found a counter example to the conjecture of uniqueness of a model based purely on intensity data (apart from a sign) (Backus, 1970). Cain (1971) was unaware of the result but it was known to later authors investigating the large errors in derived scalar components. Backus defined distinct non-zero scalar potentials u and v with |\nabla u | = |\nabla v | on r=a . Defining \phi = u+v and \psi =u-v the requirement becomes

    \nabla \phi \cdot \nabla \psi = (\nabla u)^2 - (\nabla v)^2 = 0 .

    Working with the sum and the difference takes account of the arbitrary sign always present in the solution. Backus then set \phi to be an axial dipole \phi = (a/r)^3 \cos{\theta} and used the recurrence relations of the associated Legendre functions to solve for \psi in the form of recurrence relations for the spherical harmonic coefficients. These relations converge and connect harmonic degrees n+2 and n of the same order m . They are solved by arbitrarily choosing the spherical harmonic coefficient \psi_m^m and using the recurrence relations to determine the higher degrees. Two families of solutions are possible, one with n+m even and one with n+m odd: these are respectively symmetric and antisymmetric about the equator. The term \psi_0^0=0 because there are no monopoles; therefore all coefficients with m=0 are zero. The non-uniqueness is called the Backus effect or ambiguity and the solution in spherical harmonics the Backus series.

    Backus (1970) makes several important points. First, the conjecture holds if the models are restricted to a finite number of harmonics. In practice solutions are restricted to a finite number of harmonics and therefore may appear unique. Secondly, because the problem is homogeneous the second solution can be arbitrarily close to the first. Again, in practice models are found iteratively by linearising about an initial model: they can therefore iterate away from the desired solution. Thirdly, the counter example neither disproves the existence of unique solutions nor exhausts the possible non-unique ones. Finally, Backus (1974) shows that intensity measurements throughout a finite volume of space, rather than just on a surface, can yield a unique solution.

    Lowes (1975) has a somewhat different explanation of the large errors in geomagnetic components. The relationship between geomagnetic coefficients and intensity is non-linear and inversion requires linearisation about some initial field model and subsequent iteration. The total intensity is related to the local scalar components as:

    F=(X^{\,2}+Y^{\,2}+Z^{\,2})^{1/2}. (17)

    A small perturbation in F , \Delta F , is related to changes in the components by

    \Delta F = (X\Delta X+Y\Delta Y+Z\Delta Z)/F, (18)

    which may be written as

    \Delta F = {\boldsymbol{B}}\cdot \Delta {\boldsymbol{B}}/F. (19)

    The values of \Delta F form the "data" and the geomagnetic coefficients for \Delta {\boldsymbol{B}} the "model" for the next iteration of the linearised inversion. It is clear that, at each iteration, the "data" contain no information about the component of the model perpendicular to {\boldsymbol{B}} . It follows that this part of the solution will be poorly constrained by the data. Lowes (1975) calls this the "perpendicular error effect". It is discussed further by Hurwitz and Knapp (1974) for their computer simulated dataset, but they attribute the analysis to the later paper by Lowes.
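    The perpendicular error effect is easy to demonstrate numerically. The following sketch (my own, with an illustrative field vector) perturbs {\boldsymbol{B}} parallel and perpendicular to itself and shows that the linearised \Delta F of Equation (19) vanishes for the perpendicular perturbation:

```python
# Perpendicular error effect, Eqs (17)-(19): a perturbation perpendicular
# to B leaves F unchanged to first order, so intensity-only data cannot
# constrain that part of the model. Field values below are illustrative.
import numpy as np

B = np.array([20000.0, 1000.0, -45000.0])     # nT, a made-up (X, Y, Z)
F = np.linalg.norm(B)

dB_parallel = 1e-3 * B                        # perturbation along B
dB_perp = np.cross(B, [0.0, 0.0, 1.0])        # perpendicular to B
dB_perp *= 1e-3 * F / np.linalg.norm(dB_perp) # scaled to the same size

for dB in (dB_parallel, dB_perp):
    dF_linear = B @ dB / F                    # Eq. (19)
    dF_exact = np.linalg.norm(B + dB) - F
    print(dF_linear, dF_exact)
# The parallel perturbation changes F; the perpendicular one gives
# dF_linear essentially zero and an exact change that is second-order small.
```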

    Lowes (1975) discusses the relationship between the Backus and perpendicular error effects, and in particular why the largest errors are on the equator and in the sectoral harmonics (those with n=m ). The solution for the Backus ambiguities involves solving a recurrence relation starting from a sectoral harmonic. Since successive recursions decrease in size the sectoral terms are likely to be the largest. Also the perpendicular error will be in the Z-component along the magnetic equator, where the main field is horizontal. The perpendicular error and Backus effects are not the same but are clearly related, partly because Backus' counter example has the axial dipole as one solution and the geomagnetic field is dominated by its axial dipole.

    Further work concentrated mainly on how best to augment the intensity data to give a good model. Theoretical work by Khokhlov et al. (1997) proved uniqueness if the dip equator were located precisely. Holme et al. (2008) proposed using the equatorial electrojet to locate the dip equator. Khokhlov et al. (1997) also drew parallels between the Backus F problem and the direction-only problem discussed in Section 2.2. In a later paper Khokhlov et al. (1999) examined the effect of errors in F and in the location of the equatorial electrojet and shed some light on empirical results using synthetic datasets (Barraclough and Nevitt, 1976; Lowes and Martin, 1987; Ultré et al., 1998). In practice IGRFs and other models were produced using all the data available, particularly vector data from ground magnetic observatories. Richard Holme, a referee, thinks more information can be extracted from the intensity measurements from POGO and other sources using the framework of Holme and Bloxham (1996) for dealing with attitude errors. Furthermore, models based on intensity are very sensitive to the location of the magnetic equator, which can be moved by changes in the external field. This suggests a combined study including the external field could lead to better intensity models.

    The interval 1970–1980 saw a distressing fall in magnetic data quality and quantity, with no dedicated satellite missions and even several long-established ground observatories closing. MAGSAT was the first satellite to measure the full vector. It flew in polar orbit from 30 Oct 1979 to 11 June 1980 and carried instruments similar to those on today’s satellites. It vindicated NASA’s dogged pursuit of magnetic observation: it provided a gold standard by which older field models could be judged and the data were used in an astonishing number of scientific papers. Sadly it was followed by yet another decade of degradation in geomagnetic data, but the establishment of the INTERMAGNET network of ground magnetic observatories, the European satellites Oersted, Champ, and Swarm, and now the Macau Science Satellites, means we are in a period of unprecedented geomagnetic observation.

    Every magnetic measurement samples the magnetic field everywhere in the current-free region. This produces a compromise between resolution and noise. Close features are better resolved than distant ones; large features are better resolved by distant measurements than small ones. It is essential that ground surveys are close to the target source, the horizontal distance being comparable to the depth. For airborne, marine, and satellite measurements the altitude is critical. Shipborne measurements of the ocean floor, famous for discovering the magnetic stripes caused by seafloor spreading, are made at the ocean surface, typically 5 km or so above the sea floor. Had the measurements been made closer to the source, the stripes might not have been discovered so quickly: processing of a noisy dataset would have been required to bring out the surprising pattern of stripes. The main advantages of aircraft (including helicopters) are rapid coverage of the ground and access over private property, but their altitude also acts as an intrinsic low-pass filter that can be an advantage in exploration. Satellites, with their great altitude, offer the broadest sampling for each measurement, at the price of the least resolution, and are the most tolerant of gaps in the data.

    Historical datasets have always suffered from patchy geographical coverage. There are very few magnetic observatories in the oceans, and those that do exist are confined to remote oceanic islands (Hawaii for example) that tend to suffer from highly magnetised local environments and induced electric currents in the oceans. During the earlier times of European exploration, however, the measurements were mainly confined to the oceans, leaving huge gaps in coverage over some continental regions. MSS-1 also presents a large hole in the dataset: no measurements poleward of 41°N and 41°S. The trade-off between the high resolution of close measurements and the broad coverage of distant measurements comes into play when interpreting the magnetic field within large gaps in the data.

    The original concept of MSS-1 was to focus on the South Atlantic Anomaly region (Figure 6). This required a low-latitude orbit to pass frequently over the region, abandoning the higher latitudes to be covered by the continuing Swarm constellation. Since all these satellites fly at approximately the same altitude of around 500 km they have approximately the same orbit time. They all produce data that can be used for core and lithospheric field studies at 1 Hz, so they all make the same number of measurements in a year. The difference is that MSS-1 concentrates the measurements into a fraction of the spherical surface while the Swarm data are distributed over more or less the whole sphere (polar orbiters are never truly polar and leave a small hole in coverage over the poles). With an orbit at inclination 41° the area sampled of the orbit surface, radius r_{\rm{o}} , is 4\pi \sin(41^\circ)\, r^2_{\rm {o}} , or about 66% of the full surface, giving an average data density about 50% higher than that of a polar orbiter. Figure 7 shows the data density of MSS-1 compared with Swarm for the past year. The average density of data is clearly greater for MSS-1, and it is highest at the north and south extremes of the orbit, whereas Swarm’s data density is highest near the poles. Implications of this are discussed in another paper in this issue (Williams et al., 2025).
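    The 66% figure follows from integrating the surface area of the band between latitudes \pm 41^\circ :

    \frac{A}{4\pi r_{\rm o}^2}=\frac{1}{4\pi r_{\rm o}^2}\int_{-41^\circ}^{+41^\circ} 2\pi r_{\rm o}^2\cos\lambda \,{\mathrm{d}}\lambda = \sin 41^\circ \approx 0.66,

    so the average data density within the band exceeds that of a polar orbiter by a factor of about 1/0.66\approx 1.5 .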

    Figure  6.  The South Atlantic Anomaly. Shown is total intensity at satellite altitude based on MSS-1 data. The SAA is the deep blue area centered on the coast of South America. The background pattern comes from the basic dipole structure (strongest at the poles) and the characteristic 4-lobe polar structure. From https://eos.org/features/the-herky-jerky-weirdness-of-earths-magnetic-field.
    Figure  7.  A comparison of data coverage in one year of data from MSS-1 (top) and Swarm (bottom). The colour scale is for number of data points within each equal area. The different orbit tracks are clearly visible. The variation in latitude is also clear: in both cases the orbits converge towards their northern and southern limits, giving greater sampling there.

    Holes in a data distribution can be filled in by using different scalar components if the measuring surface is well above the model surface, for example between satellite altitude and the ground, or more dramatically between the Earth’s surface and the core. The vertical field at the model surface, if known completely, determines the magnetic potential uniquely. The vertical field at the measuring surface is most sensitive to the vertical field immediately beneath the measurement point but is also an average over the whole model surface, with a weight that decays with angular distance. The horizontal field, by contrast, is not at all sensitive to the vertical field immediately beneath; its maximum sensitivity is some angular distance away, depending on the ratio of radii of the two surfaces.

    The result relies on the Green’s function for the Neumann problem. Given the radial component of the magnetic field on a spherical surface of radius r' , the magnetic potential at another radius r is given by

    V_M(r)=-\int\int_{S'}G({\boldsymbol{ r}},{\boldsymbol{ r}}')B_r({\boldsymbol{ r}}') {\mathrm{d}}S', (20)

    where G({\boldsymbol{ r}},{\boldsymbol{ r}}') is the Green’s function for Laplace’s equation with Neumann boundary conditions and S' is the boundary. Differentiating (20) with respect to r, \theta, and \phi and setting r equal to the position vector of a measurement site gives expressions for the three components of magnetic field. For components measured at the pole of the spherical co-ordinate system we have

    Z=-B_r=\int\int_{S'}\frac{\partial G}{\partial r} B_r({\boldsymbol{ r}}'){\mathrm{d}}S', (21)
    X=-B_\theta=\int\int_{S'}\frac{1}{r}\frac{\partial G}{\partial\theta} B_r({\boldsymbol{ r}}'){\mathrm{d}}S' , (22)
    Y=0 . (23)

    G is a function only of the angular distance between the points (\theta,\phi) and (\theta ',\phi ') ; its derivatives are specified by

    \left(\frac{\partial G}{\partial r}\right)_{{\boldsymbol{ r}}=\left(a,0,0\right)} =\frac{b^2}{4\pi}\frac{(1-b^2)}{f^{\,3}}, (24)
    \frac{1}{r}\left(\frac{\partial G}{\partial \theta}\right)_{{\boldsymbol{ r}} =\left(a,0,0\right)} =-\sin\theta '\frac{b}{4\pi}\left[\frac{1-2b\mu+3b^2}{f^{\,3}} +\frac{\mu}{f\left(f+b-\mu\right)}-\frac{1}{1-\mu}\right], (25)

    where b=r'/r , f=(1-2b\mu+b^2)^{\frac{1}{2}} , and \mu = \cos \theta ' (Gubbins and Roberts, 1983).

    Equations (21) and (22) show how the vertical and horizontal measurements average B_r on the model surface. Without loss of generality we can place the measurement point at the pole of the coordinate system and plot the derivatives of the Green’s functions as a function of \mu , as in Figure 8 for b=c/a , suitable for modelling the core surface field (radius c ) from measurements at the Earth’s surface (radius a ). These are called "data kernels" in the language of inverse theory. That for Z peaks below the measurement site and decreases with angular distance away, falling to half at \theta = 34^\circ . That for H is zero below the measurement site, increasing to a maximum at 23° before decreasing to half at about 60°. Measurements made so far above the source of the magnetic field therefore sample a considerable area of the source. The price paid is lack of resolution there, unless there is very substantial density of data on the measurement surface.

    Figure  8.  Data kernels for the inversion of B_r on the core surface from measurements of Z and H respectively as a function of horizontal distance between the measurement and model points. It shows how each measurement samples B_r on the core surface. The y-axis is in arbitrary units, the scale is the same for both curves Nz and Nh.
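    The kernels in Figure 8 are straightforward to reproduce from Equations (24) and (25). This sketch (my own; variable names assumed) evaluates them for b = c/a and locates their main features:

```python
# Data kernels of Eqs (24)-(25): sampling of B_r at the core surface
# (radius c) by Z and H measured at the Earth's surface (radius a).
import numpy as np

b = 3480.0 / 6371.2                                # b = c/a

theta = np.radians(np.linspace(0.5, 179.5, 1791))  # avoid theta = 0, where
mu = np.cos(theta)                                 # the 1/(1-mu) term diverges
f = np.sqrt(1.0 - 2.0 * b * mu + b**2)

Nz = b**2 / (4 * np.pi) * (1 - b**2) / f**3                   # Eq. (24)
Nh = -np.sin(theta) * b / (4 * np.pi) * (
    (1 - 2 * b * mu + 3 * b**2) / f**3
    + mu / (f * (f + b - mu))
    - 1 / (1 - mu))                                           # Eq. (25)

print("Nh peak (deg):", np.degrees(theta[np.argmax(np.abs(Nh))]))
print("Nz falls to half its peak by (deg):",
      np.degrees(theta[np.abs(Nz) <= 0.5 * np.abs(Nz).max()][0]))
# The text quotes a maximum near 23 deg for the H kernel and a half-point
# near 34 deg for the Z kernel.
```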

    While the core or main field does not require fine resolution, the lithospheric field is small scale and requires the resolution offered by surveys close to the ground. Most exploration surveys for minerals are very small in scale and tied to some reference such as the IGRF. Unfortunately almost all of them cover an area that is smaller than the resolution obtained for the main field, making combination of adjacent surveys difficult or even arbitrary. The gap in the spectrum between ground and satellite measurements is a major problem that could be solved in the future by improved resolution.

    The lithospheric field sets the limit of resolution for the core field, which is generally confined to spherical harmonic degree 14 or less (Figure 2). This is not true of the secular variation (SV) because the crustal field does not change with time, except in response to the effect of the ambient field on induced magnetisation. We can now see SV out to degree 18–20. At this scale the energy in the weak lithospheric field becomes comparable to that of the core field, which is attenuated by upward continuation through thousands of kilometers of mantle. The two sources cannot be separated by length scale alone: better models of the core and lithospheric fields are needed. Flying satellites at low altitude would not improve the core field directly but it would help with the interpretation of the lithospheric field, thereby perhaps removing its effect from the intermediate wavelengths and improving the core field indirectly. Satellite altitude is limited by atmospheric drag; the satellites run out of fuel quickly below about 400 km. There is also a problem with buffeting of the spacecraft by the thicker atmosphere. The vector satellite Champ was taken down to around 250 km at the end of its life, before burning up, in order to obtain high resolution data for the lithospheric field (Olsen et al., 2017).

    Future plans are for satellites in highly elliptical orbits. These would spend a small amount of time at a low perigee, 200 km or less, to avoid heavy fuel consumption. Olsen (2023) carried out a synthetic study to estimate the variance of models derived from a satellite flying with perigee 140 km. He concluded it could produce reliable geomagnetic coefficients out to degree n=170 , much further than the n=100 achievable from a circular orbit at altitude 350 km. Jiang Y et al. (2023) carried out simulations of datasets to be gathered from the proposed MSS-2 satellites flying in highly elliptical orbits ( 200\times 5300 km) together with the near-circular orbit of MSS-1 and showed the variance reduction would be 285 times at n=200 and 1300 times at n=250 . The future looks very promising for geomagnetic surveying.

    The MSS-1 satellites have, so far, been extraordinarily successful. Hopes are high for MSS-2 and its higher resolution for the lithospheric field. We stand on the threshold of an exciting decade of geomagnetic exploration. It is tempting to assume future models will be substantially better than previous ones: the instruments are well tested and the analysis methods well established. Things can, however, go wrong. Geomagnetic surveying still relies on a certain amount of luck: both Magsat and MSS were fortuitous. Magsat was cheap, could be launched on a Scout rocket, and was ready to go during a lull in space activity (R. A. Langel, personal communication). Maintenance of Swarm depends on ESA’s willingness to continue funding. A Macau satellite was suggested at a conference where Professor Keke Zhang was ready with a proposal to investigate the SAA. There is clearly no guarantee that such opportunities will arise at the appropriate moment in a decade’s time.

    On the other hand, we have learned that good information can come from quite poor datasets. In my view the community has tended to downplay the opportunities available from incomplete data. It was a surprise when the historical data, from AD 1700 until the satellite era, were found to produce global models showing features at the core surface similar to those from Magsat. It should not have been a surprise: a hint of the four-lobe structure, for example, appears in Halley’s chart. With the benefit of hindsight, the reaction to the Backus ambiguity seems overplayed: intensity-only models are quite good when combined with some vector data from ground stations and, in particular, the location of the dip-equator. Some authors, myself included, think it worth flying some low-altitude satellites to measure total intensity in combination with the present constellation of vector satellites. Intensity measurements are very accurate and robust, and may tolerate the buffeting at low altitude (Olsen, 2023).

    Better data may offer new opportunities for interpretation. The longer records of the core field will produce better secular variation models for predictions into the future and help in theoretical modelling of core dynamics. Better models of the lithospheric field may also improve the intermediate wavelengths of the core field, again advancing understanding of core dynamics. If the gap in the spectra of satellite and near-ground data can be closed, much new work will be possible combining satellite data with the enormous wealth of other survey data.

    This review contains only information based on already published data.

    I thank Simon Williams and Yi Jiang for their assistance in producing Figures 6 and 7.

    Backus, G. E. (1970). Non-uniqueness of the external geomagnetic field determined by surface intensity measurements. J. Geophys. Res., 75(31), 6339–6341. https://doi.org/10.1029/JA075i031p06339
    Backus, G. E. (1974). Determination of the external geomagnetic field from intensity measurements. Geophys. Res. Lett., 1(1), 21. https://doi.org/10.1029/GL001i001p00021
    Barraclough, D. R., and Nevitt, C. E. (1976). The effect of observational errors on geomagnetic field models based solely on total-intensity measurements. Phys. Earth Planet. Inter., 13(2), 123–131. https://doi.org/10.1016/0031-9201(76)90077-7
    Bloxham, J., Gubbins, D., and Jackson, A. (1989). Geomagnetic secular variation. Phil. Trans. Roy. Soc. London A: Math. Phys. Sci., 329(1606), 415–502. https://doi.org/10.1098/rsta.1989.0087
    Cain, J. C. (1971). Geomagnetic models from satellite surveys. Rev. Geophys.: Space Phys., 9(2), 259–273. https://doi.org/10.1029/RG009i002p00259
    Gubbins, D., and Roberts, N. (1983). Use of the frozen flux approximation in the interpretation of archaeomagnetic and palaeomagnetic data. Geophys. J. Int., 73(3), 675–687. https://doi.org/10.1111/j.1365-246X.1983.tb03339.x
    Gubbins, D., and Bloxham, J. (1985). Geomagnetic field analysis-III. Magnetic fields on the core—mantle boundary. Geophys. J. Int., 80(3), 695–713. https://doi.org/10.1111/j.1365-246X.1985.tb05119.x
    Gubbins, D. (2004). Time Series Analysis and Inverse Theory for Geophysicists. Cambridge: Cambridge University Press.
    Gubbins, D., Jones, A. L., and Finlay, C. C. (2006). Fall in Earth’s magnetic field is erratic. Science, 312(5775), 900–902. https://doi.org/10.1126/science.1124855
    Holme, R., and Bloxham, J. (1996). The treatment of attitude errors in satellite geomagnetic data. Phys. Earth Planet. Inter., 98(3-4), 221–233. https://doi.org/10.1016/S0031-9201(96)03189-5
    Holme, R., James, M. A., and Lühr, H. (2005). Magnetic field modelling from scalar-only data: Resolving the Backus effect with the equatorial electrojet. Earth Planet. Space, 57(12), 1203–1209. https://doi.org/10.1186/BF03351905
    Hulot, G., Khokhlov, A., and Le Mouël, J. L. (1997). Uniqueness of mainly dipolar magnetic fields recovered from directional data. Geophys. J. Int., 129(2), 347–354. https://doi.org/10.1111/j.1365-246X.1997.tb01587.x
    Hurwitz, L., and Knapp, D. G. (1974). Inherent vector discrepancies in geomagnetic main field models based on scalar F. J. Geophys. Res., 79(20), 3009–3013. https://doi.org/10.1029/JB079i020p03009
    Jackson, A., Jonkers, A. R. T., and Walker, M. R. (2000). Four centuries of geomagnetic secular variation from historical records. Phil. Trans. Roy. Soc. London A: Math. Phys. Eng. Sci., 358(1768), 957–990. https://doi.org/10.1098/rsta.2000.0569
    Jiang, Y., Olsen, N., Ou, J. M., and Yan, Q. (2023). Simulation for MSS-2 low-perigee elliptical orbit satellites: an example of lithospheric magnetic field modelling. Earth Planet. Phys., 7(1), 151–160. https://doi.org/10.26464/epp2023021
    Khokhlov, A., Hulot, G., and Le Mouël, J. L. (1997). On the Backus effect-I. Geophys. J. Int., 130(3), 701–703. https://doi.org/10.1111/j.1365-246X.1997.tb01864.x
    Khokhlov, A., Hulot, G., and Le Mouël, J. L. (1999). On the Backus effect-II. Geophys. J. Int., 137(3), 816–820. https://doi.org/10.1046/j.1365-246x.1999.00843.x
    Lowes, F. J. (1975). Vector errors in spherical harmonic analysis of scalar data. Geophys. J. Roy. Astr. Soc., 42(2), 637–651. https://doi.org/10.1111/j.1365-246X.1975.tb05884.x
    Lowes, F. J., and Martin, J. E. (1987). Optimum use of satellite intensity and vector data in modelling the main geomagnetic field. Phys. Earth Planet. Inter., 48(3-4), 183–192. https://doi.org/10.1016/0031-9201(87)90143-9
    Merrill, R. T., McElhinny, M. W., and McFadden, P. L. (1996). The Magnetic Field of the Earth: Paleomagnetism, The Core, and The Deep Mantle (2nd ed). San Diego: Academic Press.
    Olsen, N., Glassmeier, K. H., and Jia, X. (2010). Separation of the magnetic field into external and internal parts. Space Sci. Rev., 152(1), 135–157. https://doi.org/10.1007/s11214-009-9563-0
    Olsen, N., Ravat, D., Finlay, C. C., and Kother, L. K. (2017). LCS-1: A high-resolution global model of the lithospheric magnetic field derived from CHAMP and Swarm satellite observations. Geophys. J. Int., 211(3), 1461–1477. https://doi.org/10.1093/gji/ggx381
    Olsen, N. (2023). Modelling Earth’s lithospheric magnetic field using satellites in low-perigee elliptical orbits. Geophys. J. Int., 232(3), 2035–2048. https://doi.org/10.1093/gji/ggac422
    Proctor, M. R. E., and Gubbins, D. (1990). Analysis of geomagnetic directional data. Geophys. J. Int., 100(1), 69–77. https://doi.org/10.1111/j.1365-246X.1990.tb04568.x
    Shure, L., Parker, R. L., and Backus, G. E. (1982). Harmonic splines for geomagnetic modelling. Phys. Earth Planet. Inter., 28(3), 215–229. https://doi.org/10.1016/0031-9201(82)90003-6
    Stern, D. P., and Bredekamp, J. H. (1975). Error enhancement in geomagnetic models derived from scalar data. J. Geophys. Res., 80(13), 1776–1782. https://doi.org/10.1029/JA080i013p01776
    Stern, D. P., Langel, R. A., and Mead, G. D. (1980). Backus effect observed by Magsat. Geophys. Res. Lett., 7(11), 941–944. https://doi.org/10.1029/GL007i011p00941
    Suttie, N., Holme, R., Hill, M. J., and Shaw, J. (2011). Consistent treatment of errors in archaeointensity implies rapid decay of the dipole prior to 1840. Earth Planet. Sci. Lett., 304(1-2), 13–21. https://doi.org/10.1016/j.epsl.2011.02.010
    Ultré-Guérard, P., Hamoudi, M., and Hulot, G. (1998). Reducing the Backus effect given some knowledge of the dip-equator. Geophys. Res. Lett., 25(16), 3201–3204. https://doi.org/10.1029/98GL02211
    Whaler, K. A., and Gubbins, D. (1981). Spherical harmonic analysis of the geomagnetic field: an example of a linear inverse problem. Geophys. J. Int., 65(3), 645–693. https://doi.org/10.1111/j.1365-246X.1981.tb04877.x
    Williams, S., Gubbins, D., Livermore, P. W., and Jiang, Y. (2025). Evaluation of the lithospheric magnetic field mapped by the first year of MSS-1 data. Earth Planet. Phys., 9(3), 1–8. https://doi.org/10.26464/epp2025033