A method for the near real-time production of quasi-definitive magnetic observatory data
Magnetic observatory data are widely used in the derivation of time-varying magnetic field models, often in combination with satellite magnetic data when available. Traditionally the definitive observatory results are used, the availability of which can often lag that of the satellite data by months or even years. The recently defined quasi-definitive observatory data type has been introduced to meet the need for observatory data suitable for use in field modeling in a more rapid time frame, and for producing Level 2 products planned for the upcoming European Space Agency Swarm mission. A method for producing quasi-definitive data is presented and the essential steps described. To evaluate the method, provisional data published on a next day basis since 2000 are tested against definitive data at five INTERMAGNET observatories. The means and standard deviations of the differences between the candidate quasi-definitive and definitive data are within the accuracy of 5 nT set by INTERMAGNET. Since the tested data were published online on a next day basis, they also easily meet the INTERMAGNET requirement of availability within three months. These results demonstrate that prompt production of quasi-definitive data is possible for observatories that already perform to the standards set by INTERMAGNET.
Key words: Geomagnetic observatory, quasi-definitive data, observatory baselines, data processing methods
1. Introduction
Researching and modeling the Earth’s magnetic field more often than not requires accurate separation of the various field sources which, when combined, result in the observed magnetic field. Good quality measurements are essential for this. For effective separation of internal (core and lithospheric) and external (ionospheric and magnetospheric current) sources it has long been recognised that continuous, absolute measurements of the magnetic vector at single locations, sustained over tens or even hundreds of years, are vital. Such locations are known as magnetic observatories and around 170 are in operation around the world today (see for example Rasson et al., 2011). Of course, high quality global models require better global coverage than observatories alone can provide, and measurements of the magnetic field from low-Earth-orbiting satellites at altitudes below 1000 km have radically changed the way in which models are produced and what they reveal about Earth processes.
High-precision satellite missions flown over the last decade (CHAMP, Ørsted, SAC-C) have enabled the production of improved core field models (e.g. IGRF-11, Finlay et al., 2010); models with higher spherical degree (e.g. Maus et al., 2006; Olsen et al., 2006a; Thomson et al., 2010) that include more of the internal signal from the lithosphere, which in turn enables better separation between internal and external sources; as well as comprehensive models that strive to model all sources (e.g. Sabaka et al., 2004). Nonetheless, the production of all such global models requires ground-based observatory data to complement the satellite data, whether used in building the models, in validation, in helping to back out the external field sources, or all three.
With the launch of the Swarm constellation of satellites (Friis-Christensen et al., 2006), a new era in the study of the Earth’s magnetic field will begin. The Swarm mission will enable the separation of sources on a global scale better than ever before (e.g. Olsen et al., 2006b), especially the longer wavelength sources from the lithosphere. Absolute values of the magnetic field from observatories will be an essential component of the mission, both in the validation of the satellite data and in combination with the satellite data for the derivation of Level 2 Swarm products. The importance of observatory data for Swarm is highlighted in Lesur et al. (2006) and Macmillan and Olsen (2013). In the latter, the current practice of using hourly mean values selected from the holdings at the World Data Centre for Geomagnetism, Edinburgh is discussed as well as the potential for future requirements for data with a higher temporal resolution such as one-minute and one-second means. Macmillan and Olsen effectively support the case for data, corrected to near absolute level, to be made available more rapidly than definitive data currently are.
In light of the generally slow availability of definitive data, the initiative has been taken, first by INTERMAGNET, an organisation described in Kerridge (2001), and subsequently by the International Association of Geomagnetism and Aeronomy (IAGA), to establish a new data type, quasi-definitive (QD) data, and to encourage observatory operators to produce it. The main driving force has been the need for ‘ground truth’ data for Swarm research activities as well as for Swarm Level 2 products. A number of other scientific activities may also benefit, such as the opportunity to derive a more rapidly available version of the Dst index, which is commonly used in solar-terrestrial research and in space weather applications.
At the INTERMAGNET meeting in Beijing, 2007, the problem of the delay time for the publication of definitive observatory data—after the calendar year ends and in some cases several months later—was discussed and the concept of “quasi-real-time baselines” was first introduced. Further discussions took place in Golden, Colorado, 2008, on the possibilities of producing QD data to support specific applications such as Swarm.
IAGA approved resolution number 5 at its 11th Scientific Assembly, Sopron, Hungary, 2009, which states “IAGA, recognizing the importance of prompt baseline-corrected observatory data for the production of geomagnetic indices and geomagnetic models such as the IGRF, noting that several individual users and groups of users, such as the Mission Advisory Group of the upcoming ESA Swarm satellite mission, have expressed their interest in and need for such data, encourages magnetic observatories to produce baseline-corrected quasi-definitive data shortly after their acquisition.” At the INTERMAGNET meeting that followed, also in Sopron, a tentative definition of QD data was coined as “... data corrected using temporary baselines shortly after their acquisition and very near to being the final data of the observatory”. Observatory data types, formats, products and metadata are summarised in Reay et al. (2011).
To accommodate the new data type, two changes to the INTERMAGNET standards were proposed:
- in the metadata data type field, the valid values would be extended to include “Quasi Definitive”; and
- in the file name convention, the new data type would be denoted by the letter “q” as a valid code for the data type.
The QD data type is defined by INTERMAGNET as data that are:
- corrected using temporary baselines;
- made available less than three months after their acquisition; and
- such that the difference between the quasi-definitive and definitive (X, Y, Z) monthly means is less than 5 nT for every month of the year.
Initial doubts within the observatory community over whether QD data could be produced within the required time frame were quickly dispelled by two different approaches demonstrated by groups from the Institut de Physique du Globe de Paris (IPGP) (Chulliat et al., 2009) and the British Geological Survey (BGS) (Baillie et al., 2009). Peltier and Chulliat (2010) confirmed that the IPGP method achieved accuracies well within the standard set by INTERMAGNET: they found that the means and standard deviations of the differences between simulated QD and definitive data during 2008 were within 0.3 nT for nine observatories.
Over the last year many other groups operating observatories have also made the changes to their procedures to enable the derivation of QD data (e.g. Matzka, 2013). Macmillan and Olsen (2013) reported that the submission of QD type data to INTERMAGNET has been made for 44 (out of 125) observatories in 2012. Although most solar terrestrial research and space weather applications do not necessarily require absolute values of observatory data, which is the most challenging part of producing QD data, there is no doubt that the efforts across the world to develop systems to provide higher quality and more rapidly available observatory data will be of great value to all users.
For more than a decade BGS have operated and developed a programme in geomagnetic observatory instrumentation, data acquisition and processing to enable the production of near absolute data from its observatories in near real-time. These data, although not labeled as such at the time of production, can to all intents and purposes retrospectively be regarded as QD data. One of the intentions of the work described in this paper is to test this statement. For clarity and simplicity we use the label “QD data” when referring to the historical data set of retrospectively labeled QD data, or candidate QD data, that have been used in the analysis.
The paper details the processes involved in enabling the production of QD data from the BGS observatories and the evaluation of these data to verify that they meet the standard and quality set. The analysis that has been carried out is described and the results are presented. General problems faced and observatory specific difficulties for the timely production of QD data are also discussed.
2. Observatory Instruments and Measurements
At each of the UK observatories three identical systems, each with two different instruments, record the magnetic field direction and magnitude. At ASC and PST there is a single system with vector and scalar instruments. The ability to derive QD data in near real-time relies on the standard of the instruments used and quality of the raw measurements.
Observatory data collection and transmission is controlled by a Geomagnetic Data Acquisition System (GDAS) (Turbitt and Flower, 2004). GDAS systems were installed at the UK observatories in 2002, becoming operational from January 2003. GDAS incorporates a Danish Meteorological Institute (DMI), now Technical University of Denmark (DTU), FGE tri-axial linear-core fluxgate magnetometer (operated in variometer mode), a Gem Systems GSM-90 Overhauser-effect proton precession magnetometer (PPM), a GPS-referenced time source and a PC running BGS proprietary data acquisition software under the QNX operating system. QNX is a real-time operating system that behaves in a way similar to the UNIX operating system, but is particularly useful for embedded systems. The data loggers are connected via internet and telephone networks as well as satellite for reliable communication of data to Edinburgh. The connections are bi-directional, allowing remote monitoring and administration of the GDAS systems.
The FGE fluxgate magnetometers have been tuned to have a dynamic range of ±4000 nT; compensating fields are therefore used to zero the magnetometer output on set-up. The orthogonal sensors are oriented to measure variations in the horizontal (H) and vertical (Z) components of the field. The third sensor is oriented perpendicular to these in an eastward sense, measuring variations proportional to changes in declination (D). Measurements are made at a rate of 1 Hz to a resolution of 0.2 nT. One-minute values are derived at the data processing stage by applying a 61-point cosine filter.
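The filtering step can be sketched as follows. The exact coefficients of the 61-point cosine filter are not given in the text, so the window below is an assumed cosine taper, and `cosine_filter_weights` and `one_minute_means` are illustrative names rather than part of the GDAS software:

```python
import numpy as np

def cosine_filter_weights(n_points=61):
    # Symmetric cosine-taper weights, normalised to unit sum.
    # One plausible form of a cosine window; the exact BGS
    # coefficients are not specified in the paper.
    k = np.arange(n_points) - n_points // 2
    w = np.cos(np.pi * k / (n_points + 1))
    return w / w.sum()

def one_minute_means(one_second_data):
    # one_second_data: 1 Hz samples (nT). One value per minute is
    # produced by filtering the 61 samples centred on each minute mark.
    w = cosine_filter_weights(61)
    out = []
    for centre in range(60, len(one_second_data) - 60, 60):
        window = one_second_data[centre - 30:centre + 31]
        out.append(float(np.dot(w, window)))
    return out
```

An analogous, shorter (7-point) window applies to the 0.1 Hz PPM samples when deriving their one-minute values.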
The DTU FGE fluxgate was chosen due to its proven long term stability, on which the production of accurate and timely QD data relies. To reduce the influence of temperature variation and pier tilt, the three sensors are mounted in a single marble cube held in a gimbaled mount, as described in the FGE technical manual (DMI, 2006). The magnetometer is also located on a stable concrete pier in a temperature-controlled chamber.
The scaling constants of the fluxgate magnetometer are determined at regular intervals (approximately every 4–6 months) by applying a known current through the compensation coils of each sensor. The digitiser is also routinely calibrated against a known voltage. The current and voltage sources are traceable to National Accreditation of Measurement and Sampling (NAMAS) standards. The coil-constants of the compensation coils were provided by the manufacturer (DTU).
Absolute total field intensity (F) measurements are recorded from the PPM to 0.1 nT resolution (the GSM-90 is capable of measuring to a resolution of 0.01 nT), at a rate of 0.1 Hz. In this case one-minute values are derived by applying a 7-point cosine filter. Since the GSM-90 PPM has a low temperature coefficient there is no need for temperature control and the instrument can be sited at the nearest convenient location. The internal frequency counter of the PPM is routinely calibrated against a NAMAS frequency standard. The difference in the total field intensity between the GDAS PPM and the absolute pillar is also measured regularly using a second PPM in order that the GDAS PPM data can be corrected to the absolute pillar.
At each observatory manual absolute measurements of the direction of the magnetic field are made from a single standard pillar located in an absolute hut except in the case of Ascension Island, where the pillar is not covered. Measurements of D and inclination (I) are made using a fluxgate-theodolite. This consists of either a Zeiss or Wild theodolite with a Bartington MAG 01H fluxgate magnetometer attached. At each observatory absolute values of all geomagnetic elements are referred to this single standard absolute pillar location.
Two absolute observations are made per week at the UK observatories and two per month are made at the international stations, with additional observations being made during annual service visits. The accuracy of the absolute measurements is fundamental to the production of QD data.
Systematic errors associated with each individual instrument exist due to non-perfect alignment (collimation errors) and magnetometer offset. See for example Kerridge (1988) and Jankowski and Sucksdorff (1996). These errors are eliminated by the measurement process. However, associated parameters, which should remain reasonably constant, are calculated for each observation and plotted over time to provide a valuable aid to the quality control process.
3. A Method for Producing Quasi-Definitive Data
The method, which has been adapted and developed over many years by BGS, is best described in two separate parts: one involves the processes in place for the detection and correction of erroneous values in the raw variometer measurements, and the other is the process of deriving and fitting the baselines.
3.1 Daily data processing and quality control procedures for variometer data
Where an observatory has more than one system installed, such as at LER, ESK and HAD, comparison plots between systems for each component are used to identify any corrupt data. Quality control plots and data products are automatically updated in near real-time or can be regenerated manually as required, and are available to view on an internal website.
During the day and on a next day basis these quality control plots are carefully analysed by the duty processor. Any errors identified in the variometer data are either removed, or, in the case of the UK where backup systems are running, replaced with unaffected data from the most appropriate system. For real-time products, the data processing software will carry out the latter automatically using configuration files that can be manually adjusted. Common observatory data quality control practices are discussed in Reda et al. (2011).
Every morning during normal working days any required adjustments to the variometer data for the previous day (or three days following the weekend) are completed. The original reported data are retained and a separate adjusted day file is created. It is important to note that the one-minute and one-second data are stored separately from the daily baseline values, i.e. as variometer data, and are tied to the baseline values by the software, prior to publishing. Any corrections to data are logged in a diary system detailing the times, type of correction and information on the cause, if known.
3.2 Processing of absolute observation measurements and derivation of baseline values
Absolute observations are recorded and processed using the BGS proprietary Java program ‘GDASView’ (Turbitt and Shanahan, 2012). All of the relevant information associated with each observation can either be entered directly or read in from a previously saved version. The software also reads in the variometer and PPM data for the same date and time and outputs absolute observations in D, I, F, H and Z, the associated collimation errors, and the differences between the absolute and variometer values, known as spot baseline values. Note that what we refer to as the measured D and H variations are not exactly in the standard geomagnetic reference frame; rather, they are in a cylindrical reference frame defined by the exact orientation of the sensors. The calculation of the D and H spot baselines therefore includes both of these measured variations to ensure the sensor reference frame is accounted for. This processing is carried out as soon as possible after the manual measurements are made and received in the Edinburgh office via email. Spot baselines account for instrument offsets and the correction to the absolute pillar.
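The derivation of spot baseline values can be illustrated with a simplified sketch. Unlike GDASView, this ignores the cylindrical sensor reference-frame correction described above, assumes the variometer D reading is already in angular units, and uses hypothetical function and variable names:

```python
import math

def spot_baselines(abs_D_deg, abs_I_deg, abs_F_nT, var_H, var_D, var_Z):
    # Convert an absolute (D, I, F) observation into H and Z, then
    # subtract the simultaneous variometer readings to give the spot
    # baseline values (H0, D0, Z0). A simplified sketch: the sensor
    # reference-frame cross-coupling handled by GDASView is omitted.
    H_abs = abs_F_nT * math.cos(math.radians(abs_I_deg))
    Z_abs = abs_F_nT * math.sin(math.radians(abs_I_deg))
    return {
        "H0": H_abs - var_H,
        "D0": abs_D_deg - var_D,  # var_D assumed in degrees here
        "Z0": Z_abs - var_Z,
    }
```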
Presenting the results in the form of Fig. 3 combines the information required to decide whether the daily baseline values need to be updated. When new observations are available this plot is assessed to make sure the current baselines (and effectively the baselines predicted for the next few weeks) are as accurate as they can be and well within the QD standard. The plot is regenerated after any adjustments to the baseline have been made and is used to visually assess the quality of the polynomial fit of the baseline to the data. At the time shown, the spot baseline values (spots) were available up to the end of September, whereas the daily baselines (lines) were predicted into the future. Note that this plot has been retrospectively created for illustration purposes in this paper and the actual baselines shown may not have been exactly as they were at the time, although they would have been very close.
Adjustments are made to the piecewise polynomials as required and the baseline updated. Baseline data are stored in year files with one value for each component H, D and Z per day. All previous versions of the baseline file are saved using a simple version control system. Following any revisions to the values baseline plots of the type shown in Fig. 3 are regenerated, and the constancy of the F difference trace is used to decide if another iteration of the process is required.
Daily baseline values are created for the full year, which by default will include a projection into the future based on the baselines on the day of computation. The extrapolation is usually constant (order 0), although account can be taken of any current increasing or decreasing linear trends (order 1). Where such trends exist it is even more important to repeat the process with new observations as soon as they are available, thus reducing the number of days of predicted baseline values.
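The fit and extrapolation steps above can be sketched as a single segment of the piecewise fit, assuming a simple polynomial over day numbers. The function names are illustrative; the actual BGS fitting is carried out in spreadsheets, not in this code:

```python
import numpy as np

def fit_baseline(days, spots, degree=2):
    # Fit a low-order polynomial to spot baseline values
    # (day-of-year versus nT): one segment of a piecewise fit.
    return np.polynomial.Polynomial.fit(days, spots, degree)

def daily_baseline(poly, last_obs_day, day, trend_order=0):
    # Within the observed interval, evaluate the fitted polynomial.
    # Beyond the last observation, extrapolate with a constant
    # (order 0) or continue the current linear trend (order 1).
    if day <= last_obs_day:
        return poly(day)
    if trend_order == 0:
        return poly(last_obs_day)
    slope = poly.deriv()(last_obs_day)
    return poly(last_obs_day) + slope * (day - last_obs_day)
```

The choice between order-0 and order-1 extrapolation mirrors the decision described in the text: a constant projection by default, with a linear trend only where one is clearly present in the recent spot values.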
3.3 Production and delivery of quasi-definitive data
Automated data processing software combines the daily extrapolated baseline values of H, D and Z, derived from the baseline functions, with the H, D and Z variometer data. One-minute data are delivered to the Edinburgh INTERMAGNET GIN in near real-time and on a next day basis. IAGA-2002 type ‘v’ variometer data (INTERMAGNET type “R”, reported) are delivered in real-time, and IAGA-2002 type ‘p’ provisional data (INTERMAGNET type “A”, adjusted), which have baselines applied but are not necessarily fully quality controlled, are delivered next day, shortly after UT midnight. Once the full procedures in Sections 3.1 and 3.2 have been completed, the QD data are prepared and also delivered to the GIN by running the data processing software in manual mode on a next (working) day basis.
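The tying of variometer data to the baselines can be sketched as follows. This omits the sensor reference-frame correction of Section 3.2, and the function and parameter names are hypothetical:

```python
def to_quasi_definitive(var_H, var_D, var_Z, h0, d0, z0):
    # Combine one-minute variometer variations with the day's
    # (extrapolated) baseline values h0, d0, z0 to produce
    # baseline-corrected, quasi-definitive H, D and Z series.
    H = [h0 + v for v in var_H]
    D = [d0 + v for v in var_D]
    Z = [z0 + v for v in var_Z]
    return H, D, Z
```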
Later, if baselines are revised, QD data may be resubmitted to the GIN within the 3-month window. We have included an extra header line “Data file created on” to keep a record of when the QD data are produced. This information is currently lost when data are extracted from the GIN (although it is retained by BGS), so we would encourage INTERMAGNET to give consideration to time stamping or version control of data. For some applications, such as Swarm Level 2 data products, a clear audit trail may be required and having information on the version or creation date of QD data may be a requirement.
4. Evaluation of Quasi-Definitive Data Accuracy
A statistical evaluation has been carried out using data from 2000 to 2011 (the last year for which definitive data are available at the time of the analysis) for the five INTERMAGNET observatories, as shown in Fig. 1.
Although publishing data as type QD is a relatively recent activity, as previously discussed, it is suggested that the data published as type “provisional” by BGS over several years have met the criteria now established for QD data. For more than a decade hourly mean values from LER, ESK and HAD, derived from the provisional, baseline-corrected, one-minute values, have been published online at http://www.geomag.bgs.ac.uk/data_service/data/obs_data/hourly_means.html on a next day basis. In the case of ASC and PST the hourly mean values needed to be computed from provisional data submitted to the INTERMAGNET GIN, also on a next day basis. These hourly values are used here to test the hypothesis that the provisional data of the time can be classed as QD and that the method developed at BGS for the production of QD data is suitable to meet the criteria established.
The definitive one-minute values, as published on an annual basis, are used to compute definitive hourly mean values that are then compared against the candidate QD hourly means. The hourly differences in the North (X), East (Y) and Vertical (Z) components were computed from 2000 to 2011 for LER, ESK and HAD, from 2004 to 2011 for ASC and from 2005 to 2011 for PST.
Mean (μ) and standard deviation (σ) of the differences in the X, Y and Z hourly mean values (QD—definitive) at five IMOs for all years analysed.
Mean (μ) and standard deviation (σ) of the differences in the X, Y and Z hourly mean values (QD—definitive) at five IMOs for 2011.
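The statistics reported in Tables 1 and 2, and the INTERMAGNET monthly-mean criterion, can be computed along the following lines. This is a sketch with illustrative names, not the software used for the paper’s analysis:

```python
import statistics

def qd_error_stats(qd_hourly, def_hourly):
    # Mean and standard deviation of the (QD - definitive) hourly
    # mean differences for one component, per Tables 1 and 2.
    diffs = [q - d for q, d in zip(qd_hourly, def_hourly)]
    return statistics.mean(diffs), statistics.stdev(diffs)

def meets_qd_standard(qd_monthly, def_monthly, limit_nT=5.0):
    # INTERMAGNET criterion: |QD - definitive| monthly mean
    # difference less than 5 nT for every month of the year.
    return all(abs(q - d) < limit_nT
               for q, d in zip(qd_monthly, def_monthly))
```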
5. Discussion of Results
This evaluation of the candidate QD data has shown that they are clearly within 5 nT of the definitive values most of the time. Better results are obtained for the vertical (Z) component than for the two horizontal components (X and Y) at all five observatories, and the poorest results are obtained for the Y component, reflecting the difficulties involved in the accurate measurement of D and the fitting of the D baseline. The values in Table 1 also highlight that the QD data from HAD have been closer to definitive than those from any of the other observatories throughout the analysed period, with the PST results being generally the poorest. The means and standard deviations for all five IMOs are within 5 nT, although the spread of the differences at ASC and PST, as seen in both Figs. 5 and 6, is greater than at the UK observatories. Comparing Figs. 5 and 6 it is clear that there is a reduction in spread for all observatories except ASC in 2011 (Fig. 6) compared with all years (Fig. 5), and the standard deviation values in Table 2 compared to those in Table 1 also demonstrate this improvement. Figures 5 and 6 also provide evidence of bi-modal error distributions in some cases and non-Gaussian distributions in most.
At ASC and PST the instrument baselines have been more prone to variations because these observatories are in more challenging operating environments, without BGS staff on site to address problems as they arise. The smaller number of absolute observations at these remote sites makes it more difficult to account for drifts, such as those that might be caused by temperature changes or by site difference (artificial or otherwise) changes. It is not always possible to fit new data and revise the baselines every month at these observatories. The observation quality can sometimes fall outside the criterion, and the increased uncertainty in the values can result in the decision not to use the new measurement. The next observation is then required before any trends can be evaluated with any degree of confidence. This delay means that predicted baselines at ASC and PST are in use further into the future than at the UK observatories.
In the case of ASC, 2010 and 2011 were years when delays to fitting the baseline were more frequent for a variety of reasons, and this is highlighted in Fig. 7. Operation of PST has been the most challenging of the five BGS IMOs over the years and this is reflected in the results overall and in particular prior to 2009. The improvement since then has been largely due to improvements in the quality of the absolute observations made, although observatory operations at this site are still not without problems.
Figure 7 highlights a clear drop in the accuracy of QD data at LER during 2008 and to a lesser extent in 2009.
This was entirely due to the discovery that heaters located in the absolute hut contained magnetic material. The heaters were removed from the hut on 25th March 2008, but it was much later that the problem was fully understood. Dealing with this artificial change at the observatory absolute pillar was left until the production of the final definitive data for 2008. The decision was made to introduce a discontinuity in the final observatory results between 31st December 2007 and 1st January 2008, the effect of which was to create a step at that point and to require all measurements since then to be reprocessed. This reprocessing was carried out in May 2009, with the consequence that much larger than usual changes to the provisional baselines throughout 2008 and up to the end of May 2009 were required. In the case of the Y component this amounted to an offset of greater than 5 nT being applied retrospectively, which explains the bi-modal distribution in the LER Y errors and the overall poor result for LER in 2008 and into 2009.
One disadvantage of the baseline derivation method is that it has four separate stages using three different sets of software: the first is a Java program to derive spot absolute values; the second is a FORTRAN program to plot the relevant parameters; the third is the use of Excel spreadsheets to fit piecewise polynomials; and finally the FORTRAN program is used again to re-plot the combined data sets and enable final assessment on the quality of the fit. A more streamlined approach is possible and development is being carried out that will make the process less labour intensive. Nonetheless the results presented here are not affected. The principle behind the chosen method that enables data of QD standard to be produced in near real-time remains and the methodology as described in this paper will continue for the foreseeable future.
6. Concluding Remarks
The ability to maintain baselines to QD specification is fundamentally a sampling problem: the variometer baseline must not contain signal of the order of the required QD accuracy at periods less than twice the interval between absolute measurements. An observatory therefore requires stable variometers, good control of the measurement, and regular, precise absolute measurements. The less stable the variometers, or where there are environmental problems, the more frequent the absolute measurements must be.
The method developed by BGS was initially driven by the real-time demand from users for time-varying data that were close to the absolute level. Although not labeled QD at the time, it has been shown that the method in place enabled the derivation of data products that were close enough to definitive to meet the current QD accuracy standard most of the time. Hourly mean values published on a next day basis were within ±5 nT of the definitive values published over a year later, close to 100% of the time in the majority of years analysed, apart from a few exceptional cases (e.g. LER 2008 and PST prior to 2009).
In a similar study by Peltier and Chulliat (2010) the error in QD data was found to be <0.3 nT, an order of magnitude smaller than that found in the present study (worst case 3.8 nT). This difference is in part explained by the differences between the methods used at the two institutes as well as by the differences in the analyses carried out. The IPGP method is a monthly process, which concentrates on obtaining the most accurate results for the recent past. The BGS method, although similar, also attempts to produce next day QD data using predicted baseline values. The analysis carried out for this paper has been on these predicted QD data as opposed to QD data with a 3-month delay, which in the future can also be analysed for accuracy. Both methods clearly meet the QD data definition set by INTERMAGNET and each has strengths that will benefit specific users of the data.
We hope that these results provide encouragement to operators of observatories around the world, who have not yet started producing and publishing QD data.
Acknowledgements
The authors would like to thank Tom Shanahan, Tony Swan, Colin Pringle, Stephen Tredwin, Nigel Bishop and staff of the UK Met Office and Cable and Wireless based at the BGS observatories, for all the measurements made and their dedication to the operation of the observatories. We also thank BGS colleagues Alan Thomson, Simon Flower and Susan Macmillan for their help at the internal BGS review stage and for their encouragement and general constructive comments on the paper. This paper is published with permission of the Executive Director, British Geological Survey (Natural Environment Research Council).
References
- Baillie, O., E. Clarke, S. Flower, S. Reay, and C. Turbitt, Reporting quasi-definitive observatory data in near real-time, presentation at the 11th IAGA Scientific Assembly, Sopron, 2009 (unpublished).
- Chulliat, A., A. Peltier, F. Truong, and D. Fouassier, Proposal for a new observatory data product: Quasi-definitive data, presentation at the 11th IAGA Scientific Assembly, Sopron, 2009 (unpublished).
- Danish Meteorological Institute, Fluxgate Magnetometer Model FGE Manual, Danish Meteorological Institute Technical Report 96-4, ISSN 0906-897X, Copenhagen, 2006.
- Finlay, C. C., S. Maus, C. D. Beggan, T. N. Bondar, A. Chambodut, T. A. Chernova, A. Chulliat, V. P. Golovkov, B. Hamilton, M. Hamoudi, R. Holme, G. Hulot, W. Kuang, B. Langlais, V. Lesur, F. J. Lowes, H. Lühr, S. Macmillan, M. Mandea, S. McLean, C. Manoj, M. Menvielle, I. Michaelis, N. Olsen, J. Rauberg, M. Rother, T. J. Sabaka, A. Tangborn, L. Tøffner-Clausen, E. Thébault, A. W. P. Thomson, I. Wardinski, Z. Wei, and T. I. Zvereva, International Geomagnetic Reference Field: The eleventh generation, Geophys. J. Int., 183, 1216–1230, 2010.
- Jankowski, J. and C. Sucksdorff, Guide for Magnetic Measurements and Observatory Practice, International Association of Geomagnetism and Aeronomy, Warsaw, 1996.
- Kerridge, D. J., Theory of the fluxgate theodolite, British Geological Survey Technical Report, WM/88/14, 1988.
- Kerridge, D. J., INTERMAGNET: Worldwide near real-time geomagnetic observatory data, Proc. ESA Space Weather Workshop, ESTEC, The Netherlands, http://www.esa-spaceweather.net/spweather/workshops/SPW_W3/PROCEEDINGS_W3/ESTEC_Intermagnet.pdf (accessed 26-02-13), 2001.
- Matzka, J., Preparation of Quasi-Definitive (QD) data for the observatories Narsarsuaq, Qeqertarsuaq and Tristan da Cunha, in Proceedings of the XVth IAGA Workshop on Geomagnetic Observatory Instruments, Data Acquisition and Processing, Real Instituto y Observatorio de la Armada en San Fernando, edited by P. Hejda, A. Chulliat, and M. Catalán, Boletín ROA No. 3/2013, pp. 50–53, 2013.
- Olsen, N., R. Haagmans, T. J. Sabaka, A. Kuvshinov, S. Maus, M. E. Purucker, M. Rother, V. Lesur, and M. Mandea, The Swarm End-to-End mission simulator study: A demonstration of separating the various contributions to Earth’s magnetic field using synthetic data, Earth Planets Space, 58, 359–370, 2006b.
- Rasson, J. L., H. Toh, and D. Yang, The Global Geomagnetic Observatory Network, in Geomagnetic Observations and Models, IAGA Special Sopron Book Series 5, edited by M. Mandea and M. Korte, pp. 1–25, doi:10.1007/978-90-481-9858-0.7, 2011.
- Reay, S. J., D. C. Herzog, S. Alex, E. Kharin, S. McLean, M. Nosé, and N. Sergeyeva, Magnetic observatory data and metadata: Types and availability, in Geomagnetic Observations and Models, IAGA Special Sopron Book Series 5, edited by M. Mandea and M. Korte, pp. 149–181, doi:10.1007/978-90-481-9858-0.7, 2011.
- Reda, J., D. Fouassier, A. Isac, H.-J. Linthe, J. Matzka, and C. W. Turbitt, Improvements in geomagnetic observatory data quality, in Geomagnetic Observations and Models, IAGA Special Sopron Book Series 5, edited by M. Mandea and M. Korte, pp. 127–148, doi:10.1007/978-90-481-9858-0.7, 2011.
- St-Louis, B. J. (ed.), INTERMAGNET Technical Reference Manual, Version 4.5, INTERMAGNET, http://www.intermagnet.org/publications/im_manual.pdf (accessed 26-02-13), 2011.
- Turbitt, C. W. and S. M. Flower, GDAS User’s Manual V0.2, British Geological Survey, Edinburgh, 2004 (unpublished).
- Turbitt, C. W. and T. J. G. Shanahan, GDASView 4.4: Observatory Data Processing Software, presentation at the OIC Workshop on Geomagnetic Observatories and their Applications, Islamabad, Pakistan, April 2012 (unpublished).