1 Introduction

Very Long Baseline Interferometry (VLBI) is a versatile measurement technique in which a network of radio telescopes simultaneously observes signals emitted from distant radio sources such as quasars. Today, VLBI is used in various fields for different applications, including astronomy (Mantovani and Kus 2005; Lister et al. 1996), astrometry (Charlot et al. 2020) and geodesy (Schuh and Böhm 2013). For example, within the astronomical community, VLBI is used to generate images of cosmic radio sources, the astrometric community utilizes VLBI to measure their positions, and the geodetic community uses VLBI to measure the orientation of Earth in space as well as telescope positions, their movements, and geophysical signals.

Due to its measurement principle (Sovers et al. 1998), it is necessary to proactively organize VLBI observations among the participating telescopes. This is done by providing an observing plan, the so-called schedule. The process of generating a schedule is often called scheduling. Depending on the application, VLBI scheduling has different requirements.

Geodetic VLBI scheduling aims to generate a sequence of scans that leads to an evenly distributed station sky coverage, meaning observations with different azimuth and elevation angles, over short periods (Petrov et al. 2009; Schartner and Böhm 2019). This allows for an optimal determination of tropospheric delays, one of the major error sources in geodetic VLBI (Schuh and Böhm 2013). Therefore, geodetic VLBI scheduling can be seen as station-centric scheduling. A geodetic network typically consists of ten to twenty globally distributed telescopes. Consequently, a source is in general not visible to the whole network simultaneously. To generate optimal sky coverage, the network is frequently divided into smaller subnetworks so that multiple sources are observed simultaneously (Vandenberg 1999). This process is called subnetting. The observation duration required to achieve a certain targeted signal-to-noise ratio (SNR) is typically calculated per baseline (a pair of telescopes) and observed frequency band, resulting in different recording start and stop times per station, since the telescopes have different sensitivities and slew rates (Gipson 2016). To compensate for the different integration times and the typically very inhomogeneous telescope slew rates, fillin-modes are frequently used, in which a subset of the available stations can observe additional scans during long idle times (Gipson 2010).

In contrast, the goal of astronomical VLBI scheduling is to generate a sequence of scans that leads to an evenly distributed sampling of the UV plane for every source, represented by a set of two-dimensional coordinates of the baseline vectors projected on the plane perpendicular to the direction of a given source (Krásná and Petrov 2021). Therefore, astronomical VLBI scheduling can be seen as source-centric scheduling. For astronomical VLBI sessions, all stations typically observe together in every scan and have the same integration time per scan. This yields a maximum number of independent closure quantities (i.e., closure phase as a sum of the phase observables over a loop of three baselines and closure amplitude as a combination of four amplitude observables of a quadrangle), which are essential for imaging. Concepts such as subnetting are not favored by the astronomical community since they limit the UV coverage of the observed scans. Similarly, fillin-modes are not utilized. The required integration time is either fixed to some predefined values or calculated per network for each source, meaning that all telescopes typically start and stop recording at the same time.

Finally, astrometric VLBI scheduling can be classified into two categories: relative astrometry and absolute astrometry. Within the astronomical community, astrometry often refers to the phase referencing technique (Reid and Honma 2014). During phase referencing, the observations alternate between a target and a calibration source. Simply put, the relative position of the target w.r.t. the calibrator source is of interest. From a scheduling point of view, phase referencing experiments are therefore comparably straightforward to plan. In contrast, absolute astrometry aims at the absolute positioning of the observed radio sources, in particular the determination of the celestial reference frame (Charlot et al. 2020). For this purpose, astrometric VLBI scheduling is often combined with geodetic scheduling since some of the requirements are identical. However, in astrometry, source selection plays a more important role compared to geodetic VLBI scheduling. For example, to derive a celestial reference frame, a large number of VLBI experiments is necessary, and it must be ensured that the target sources are observed regularly. In the remainder of this manuscript, the term astrometric VLBI scheduling refers to the second category, the determination of the absolute positions of radio sources.

Due to the different requirements during scheduling, VLBI observations generated for one application will not yield equivalent performance for a different application. For instance, Krásná and Petrov (2021) investigated the use of astronomical VLBI experiments observed by the Very Long Baseline Array (VLBA) for geodetic applications. Within their investigation, they found that the precision of geodetic parameters obtained from these astronomical VLBI sessions is a factor of 1.3 to 1.8 worse compared to their geodetic counterparts. They studied the origin of the discrepancies and concluded that they can be explained by the differences in scheduling approaches between astronomical and geodetic VLBI.

2 Motivation

Within the geodetic community, a new network of next-generation radio telescopes, the so-called VLBI Global Observing System (VGOS) (Petrachenko et al. 2012; Niell et al. 2018), is currently being developed to meet increased requirements for the Global Geodetic Observing System (GGOS) (Plag and Pearlman 2009; Beutler and Rummel 2012). This development is supervised by the International VLBI Service for Geodesy and Astrometry (IVS) (Nothnagel et al. 2017).

During the design review for the new VGOS telescopes, Monte Carlo simulations were conducted and tropospheric delays were identified as the major error source in geodetic VLBI (Nilsson and Haas 2010; Pany et al. 2011). Therefore, the resulting recommended VGOS telescope specifications consisted of smaller (\(\approx {13}{\,\textrm{m}}\) diameter) and fast slewing (\(\approx {720}{^\circ }\)/min azimuth, \(\approx {360}{^\circ }\)/min elevation) telescopes (Petrachenko et al. 2008), combined with an improved broadband observing mode (four bands of dual linear polarized recording with a total of \({8}-{16}{\,\textrm{Gbps}}\)) to compensate for the sensitivity loss due to the small diameters of VGOS telescopes (Petrachenko et al. 2009).

Based on the new, fast slewing telescopes and the resulting improved estimation of tropospheric delays, there is now strong evidence that source structure effects can be responsible for the majority of the remaining measurement error budget (Anderson and Xu 2018; Xu et al. 2021b). The thermal noise level of VGOS group delays is shown to be less than 2 picoseconds (ps), but the systematic errors due to source structure are estimated to be on the order of 20 ps (Xu et al. 2021a). Therefore, the full potential of VGOS cannot be achieved without understanding, interpreting, modeling and monitoring the structure of the radio sources, which typically can have an angular size of up to several milliarcseconds (mas). Consequently, imaging of geodetic VGOS observations and deriving corrections for source structure effects will be an important part of future VGOS activities to provide geodetic parameters with the highest precision. To allow for proper imaging of VGOS observations, adjustments in scheduling will be necessary. However, these adjustments must not significantly compromise the geodetic performance of the session.

Several studies already investigated optimal VLBI scheduling for VGOS sessions. An early work by Sun et al. (2014) investigated different scheduling approaches for VGOS where up to four sources are observed simultaneously. Lovell et al. (2014) investigated the use of so-called dynamic scheduling, where the schedule is generated on the fly based on feedback from real-time observations. Schartner and Böhm (2020) investigated geodetic VLBI scheduling based on a brute-force approach of testing various optimization criteria based on large-scale Monte Carlo simulations. However, in most of the preceding studies, source structure effects are not properly taken into account and scheduling is not necessarily optimized for imaging purposes. With the development of a new VLBI scheduling software called VieSched++ (Schartner and Böhm 2019), which has already been successfully used for VGOS sessions (Haas et al. 2021; Petrachenko et al. 2022), new possibilities of combining geodetic scheduling with a source-centric scheduling approach, suitable for imaging, arise.

The main objective of this work is to propose a source-centric scheduling approach that produces good geodetic performance while allowing for proper source imaging. Based on simulations, it is demonstrated that this method does not compromise the precision of the estimated geodetic parameters while allowing a significantly higher number of sources to be imaged and providing improved astrometric results compared to today’s scheduling. Section 3 describes the scheduling approach and discusses the geodetic, astrometric and astronomical schedule evaluation (Sects. 3.2–3.5). Section 4 summarizes the session performance for two example networks based on the IVS VGOS sessions VO1203 (seven stations) and VO1119 (nine stations). Special emphasis is placed on a comparison of different thresholds for the minimum allowed number of stations per scan, which is one of the main differences between astronomical and geodetic scheduling. Section 5 explains how the improvements have been achieved by providing scheduling statistics (Sect. 5.1), a detailed discussion of the astronomical simulation results (Sects. 5.2 and 5.3), and investigations of the source distribution (Sect. 5.4), different thresholds for the target number of scans (Sect. 5.5) and potential future VGOS networks (Sect. 5.6). Finally, Sect. 6 concludes the work and Sect. 7 provides an outlook.

3 Method

In this work, a source-centric scheduling approach is investigated and simulations are conducted to assess the expected precision of the estimated geodetic parameters. Furthermore, comparisons with a standard geodetic schedule, represented by the submitted IVS schedules, are made. The investigation is based on the conditions of two IVS VGOS sessions, namely VO1203 (2021-07-22) and VO1119 (2021-04-29). The source-centric schedules are generated based on the same station network, source list and telescope characteristics as the actual schedules that were observed, allowing for a direct comparison of the different scheduling approaches. The source lists contain about one hundred potential sources that can be used for scheduling. The station network of session VO1203 consists of seven stations, while that of VO1119 consists of nine stations. Furthermore, the proposed source-centric scheduling approach is tested for a potential near-future and a far-future VGOS network (see Sect. 5.6). Figure 1 depicts the different station networks.

Fig. 1

Station network: GGAO12M (Gs), ISHIOKA (Is), KOKEE12M (K2), MACGO12M (Mg), ONSA13NE (Oe), ONSA13SW (Ow), RAEGYEB (Yj), WESTFORD (Wf), WETTZ13S (Ws). Red downward-pointing triangles mark the 7-station network (VO1203) while red upward-pointing triangles mark the additional two telescopes in the 9-station network (VO1119). Future VGOS telescopes, investigated in Sect. 5.6, are highlighted in blue. The three blue upward-pointing triangles mark the additional southern hemisphere telescopes that are expected to be operational soon, while blue downward-pointing triangles depict another 18 additional telescopes, representing a potential 29-station VGOS network

3.1 Scheduling approach

The source-centric schedules are generated using VieSched++ (Schartner and Böhm 2019). In the first step, 120-second long calibration scans were scheduled every two hours where the first calibration scan was offset by 30 min from the session start and the last calibration scan was offset by 30 min from the session end. The calibrator scans were selected automatically in a way to include a maximum number of stations while also providing high SNR per baseline. They are intended as fringe finders but can also be used for calibration purposes, e.g., for band-pass calibration.
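As an illustration, the following Python sketch lays out such calibration epochs for a 24-hour session; the exact placement logic within VieSched++ may differ, and the session times are examples only.

```python
from datetime import datetime, timedelta

def calibration_epochs(session_start, session_end,
                       offset=timedelta(minutes=30),
                       spacing=timedelta(hours=2)):
    """Nominal start epochs of the 120-s calibration scans: every two hours,
    with the first and last scan offset by 30 min from the session edges."""
    epochs = []
    t = session_start + offset
    last = session_end - offset
    while t <= last:
        epochs.append(t)
        t += spacing
    if epochs and epochs[-1] < last:
        epochs.append(last)   # enforce a calibrator 30 min before the session end
    return epochs

# example 24-h session (start time is illustrative only)
for epoch in calibration_epochs(datetime(2021, 7, 22, 18, 0),
                                datetime(2021, 7, 23, 18, 0)):
    print(epoch)
```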

After the calibration scans were fixed, the main scheduling process starts to fill the times between the calibrator scans. To this end, a target number of scans per source \(n_{target}\) was defined as well as a minimum number of scans per source \(n_{\text {min}}\) and a minimum number of stations per scan \(n_{minsta}\). The main scheduling process followed the standard VGOS approach of using a fixed 30-second integration time with a minimum 30-second gap (120-second gap in case of calibrator scans) between two scans.

Based on the given network and the session start and end time, the duration t in which a source is visible to at least \(n_{minsta}\) stations was calculated. Here, an elevation cutoff of \({5}^{\circ }\) was assumed and the local horizon mask was applied as defined in the scheduling catalog files (Vandenberg 1997). Figure 2 depicts the visible areas of the sky per telescope in right ascension (ra) and declination (de) as well as the visible duration as a function of declination and number of stations.

Fig. 2

Sky visibility of the nine operational northern VGOS telescopes (top). The colored areas highlight the part of the sky that is visible to the corresponding telescope. Source visibility for the 7-station network (bottom left) and the 9-station network (bottom right) as a function of declination and \(n_{minsta}\)

Next, the minimum time between two scans to the same source \(\varDelta t_{\text {min}}\) was calculated for each source using:

$$\begin{aligned} \varDelta t_{\text {min}} = \max \left\{ \frac{t}{n_{target}},\; {20}\,{\text {min}} \right\} \end{aligned}$$
(1)

A minimum of \({20}\,{\text {min}}\) was selected, corresponding to the minimum used in today’s VGOS scheduling, to ensure that \(\varDelta t_{\text {min}}\) does not become too short.
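A minimal sketch of Eq. (1), assuming the visible duration t is given in minutes:

```python
def min_revisit_time_min(t_visible_min, n_target=20, floor_min=20.0):
    """Eq. (1): minimum time between two scans to the same source, where
    t_visible_min is the duration (in minutes) during which the source is
    visible to at least n_minsta stations."""
    return max(t_visible_min / n_target, floor_min)

print(min_revisit_time_min(14 * 60))   # source visible for 14 h -> 42 min
print(min_revisit_time_min(5 * 60))    # source visible for 5 h  -> 20 min (floor)
```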

Based on the calculated minimum time between two scans to the same source, the main scheduling algorithm generates the schedule scan by scan based on a variety of optimization criteria, similar to standard geodetic scheduling. The selected optimization criteria were the number of observations per scan, the duration of a scan (including slew, idle and preob time), the station idle time, the station sky coverage and the number of independent closure phases and amplitudes per scan (see Schartner and Böhm (2019) for more details on the geodetic scheduling and Schartner and Böhm (2020) for more details on the optimization approach). However, after a source was scheduled for the first time, its weight and thus its likelihood of being scheduled again in the remaining part of the session were increased. This increases the likelihood that the source is scheduled every \(\varDelta t_{\text {min}}\), resulting in \(n_{target}\) equally spaced scans. This approach leads to the best possible distribution of scans per source, and due to Earth’s rotation, it naturally optimizes the UV coverage and therefore the potential of the schedule for astronomical imaging.
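The following sketch illustrates the idea of the weighted scan selection described above; the criterion names, weight values and the size of the weight boost are purely illustrative and do not reproduce the actual VieSched++ scoring.

```python
def scan_score(scan, weights, source_weight):
    """Illustrative weighted sum over the optimization criteria named in the
    text; long scan durations and idle times reduce the score."""
    score = (weights["n_obs"]    * scan["n_obs"]
             - weights["duration"] * scan["duration_s"]
             - weights["idle"]     * scan["idle_s"]
             + weights["sky_cov"]  * scan["sky_coverage_gain"]
             + weights["closures"] * scan["n_closures"])
    # sources that were already scheduled once get an increased weight,
    # so that they keep being revisited roughly every dt_min
    return score * source_weight.get(scan["source"], 1.0)

weights = {"n_obs": 1.0, "duration": 0.01, "idle": 0.01,
           "sky_cov": 1.0, "closures": 0.5}          # illustrative weights
source_weight = {}                                    # updated during scheduling
candidate = {"source": "1803+784", "n_obs": 21, "duration_s": 90,
             "idle_s": 20, "sky_coverage_gain": 0.6, "n_closures": 29}
print(scan_score(candidate, weights, source_weight))  # before the first scan
source_weight["1803+784"] = 2.0                       # weight boost after first scan
print(scan_score(candidate, weights, source_weight))
```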

However, the \(\varDelta t_{min}\) and \(n_{minsta}\) restrictions pose a significant constraint on the scheduling process, leading to some station idle time. To compensate for this effect, a specially designed fillin-mode was used after the main scheduling process finished. The main purpose of the fillin-mode scans was to densify the observations for improved tropospheric parameter estimation while not impacting the main scheduling phase. In contrast to standard geodetic fillin-modes (Gipson 2010), which are executed during the main schedule generation every time a scan is added to the schedule, this new fillin-mode is executed as a dedicated process after the main scheduling phase is finished and is therefore called fillin-mode a posteriori. It investigates the station idle times between every two scheduled scans and will potentially squeeze in an additional scan given that enough time is available. The fillin-mode a posteriori scans were allowed to be scheduled with a minimum of two stations instead of the predefined \(n_{minsta}\), and \(\varDelta t_{\text {min}}\) was set to five minutes, allowing for more flexibility in squeezing in such scans. For example, in the case of a nine-station network and \(n_{minsta} = 5\), it might happen that a source is scheduled that is visible to only five stations. The remaining four stations cannot participate in a simultaneous (subnetting) scan due to the \(n_{minsta}\) constraint and are thus idling. However, during the fillin-mode a posteriori, the remaining four stations can be scheduled in an additional scan if enough idle time is available.
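A simplified sketch of the fillin-mode a posteriori logic is shown below; it ignores slew and preob times as well as the per-source five-minute revisit limit, and the station codes and candidate scans are illustrative only.

```python
def fillin_a_posteriori(idle_gaps, candidate_scans, min_sta=2):
    """Illustrative a-posteriori fill-in: after the main schedule is fixed,
    inspect each idle gap and squeeze in an extra scan if enough idle
    stations and time are available."""
    extra_scans = []
    for idle_stations, gap_seconds in idle_gaps:
        for scan in candidate_scans:
            usable = idle_stations & scan["stations"]
            if len(usable) >= min_sta and scan["duration_s"] <= gap_seconds:
                extra_scans.append({"source": scan["source"],
                                    "stations": usable,
                                    "duration_s": scan["duration_s"]})
                break
    return extra_scans

gaps = [({"K2", "Oe", "Ow", "Ws"}, 240)]   # four idling stations for 4 min
cands = [{"source": "0059+581", "stations": {"K2", "Oe", "Ws"}, "duration_s": 60}]
print(fillin_a_posteriori(gaps, cands))
```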

Finally, the number of scans per source was compared with \(n_{\text {min}}\). In case some sources are observed in fewer than \(n_{\text {min}}\) scans, it may mean that the source list is too large and needs to be adjusted. Therefore, some sources with few scans were removed from the source list and the whole scheduling process was repeated. This iterative process was repeated a maximum of 30 times, whereby only \({30}{\%}\) of the sources that do not meet the \(n_{\text {min}}\) criterion were removed per iteration, allowing for a more gentle source list adjustment. The gentle source reduction was performed since it might happen that in the first iteration, which considers the full source list, too many sources did not meet the \(n_{\text {min}}\) criterion, and thus, too many sources would otherwise be removed. If fewer than five sources did not satisfy the \(n_{\text {min}}\) criterion, the schedule was considered to be final in this step and could be simulated and further analyzed. In practice, three to four iterations were sufficient to properly adjust the source list based on this approach.
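The iterative source-list adjustment can be summarized as in the following sketch; the callback scans_per_source, standing in for a complete (re)scheduling run, is hypothetical, and which 30% of the failing sources are dropped is an assumption.

```python
def reduce_source_list(sources, scans_per_source, n_min=10,
                       max_iter=30, drop_fraction=0.3, tolerance=5):
    """Iteratively shrink the source list: per iteration, drop only 30 % of
    the sources that fail the n_min criterion; stop once fewer than five
    sources fail or after 30 iterations."""
    for _ in range(max_iter):
        counts = scans_per_source(sources)          # one (re)scheduling run
        failing = sorted((s for s in sources if counts.get(s, 0) < n_min),
                         key=lambda s: counts.get(s, 0))
        if len(failing) < tolerance:
            return sources                           # schedule considered final
        # drop the worst-performing sources first (assumption; the text
        # only states that 30 % of the failing sources are removed)
        n_drop = max(1, int(drop_fraction * len(failing)))
        sources = [s for s in sources if s not in set(failing[:n_drop])]
    return sources
```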

Figure 3 summarizes the scheduling process.

Fig. 3

Scheduling flowchart

The main parameters defining the scheduling approach are the minimum number of stations per scan \(n_{minsta}\) and, to a lesser extent, \(n_{target}\). Therefore, the scheduling approach was performed with \(n_{minsta}\) between two and seven, and the corresponding schedules are named “a2”–“a7” in Sect. 4. For every \(n_{minsta}\), a total of 71 different combinations of optimization criteria were investigated by using the so-called multi-scheduling tool. More information regarding the multi-scheduling tool and schedule optimization criteria can be found in Schartner et al. (2020). All of these 71 generated schedules were further simulated and analyzed, as described in Sects. 3.2 and 3.3, to assess their geodetic and astrometric performance. The version with the best geodetic and astrometric performance was selected as the final schedule. This version was then additionally analyzed based on astronomical quantities. Investigations of different \(n_{target}\) are later discussed in Sect. 5.5.

3.2 Geodetic and astrometric simulation

The simulations used to quantify the geodetic and astrometric session performance are based on Monte Carlo simulations of the observed-minus-computed delays.

The simulations consist of three parts: a troposphere model, a clock model and measurement error, similar to previous studies (Petrachenko et al. 2009; Pany et al. 2011; Schartner et al. 2020). Tropospheric delays were modeled by taking temporal and spatial correlation into account as proposed by Nilsson et al. (2007). For the calculation, a tropospheric refractive index structure constant \(C_n\) of \({1.8}\times 10^{-7}\,\textrm{m}^{-1/3}\) with a scale height of 2000 m was assumed at all stations, while the wind velocity was set to 8 m/s toward the east. Clock drifts were simulated as random walk and integrated random walk processes assuming an Allan standard deviation of \({1}\times 10^{-14}\) after 50 min. Finally, the measurement error was simulated as white noise with an amplitude of 4 ps.
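A minimal sketch of the clock and measurement-noise components (in picoseconds) is given below; the per-epoch step sizes are placeholders, as the study tunes them to an Allan standard deviation of 1e-14 at 50 min, and the turbulence-based troposphere of Nilsson et al. (2007) is not sketched here.

```python
import numpy as np

def simulate_delay_noise(n_epochs, wn_ps=4.0, rw_step_ps=0.5,
                         irw_step_ps=0.005, seed=0):
    """Random-walk plus integrated-random-walk clock and white measurement
    noise, all in picoseconds; step sizes are illustrative placeholders."""
    rng = np.random.default_rng(seed)
    rw = np.cumsum(rng.normal(0.0, rw_step_ps, n_epochs))            # random walk
    irw = np.cumsum(np.cumsum(rng.normal(0.0, irw_step_ps, n_epochs)))  # integrated RW
    white = rng.normal(0.0, wn_ps, n_epochs)                          # 4 ps white noise
    return rw + irw + white    # total simulated delay error per epoch

print(simulate_delay_noise(10))
```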

3.3 Geodetic and astrometric analysis

The geodetic and astrometric analysis is performed based on least-squares adjustment. Within the analysis of the individual simulation runs, all five Earth Orientation Parameters (EOPs) (Petit and Luzum 2010), namely two polar motion parameters (XPO and YPO), two nutation parameters (NUTX and NUTY) and dUT1, were estimated as constant offsets. The clock behavior was parameterized with a quadratic term, as well as piecewise linear offsets (PWO) every \({60}{\,\text {min}}\) with a relative constraint of 1.3 cm. This was done for all stations except for the station with the highest number of observations, which was selected to provide the reference clock. Additionally, for all stations, the zenith wet delays were estimated every 30 min as PWO with relative constraints of 1.5 cm, while north and east gradients were estimated as PWO every 180 min with relative constraints of 0.05 cm. Furthermore, station coordinates were estimated for all stations, and the datum definition was realized using no-net-rotation and no-net-translation over all stations. Finally, source coordinates were estimated for sources observed in at least three scans and with at least five observations. The no-net-rotation datum definition was based on the position coordinates of the sources with at least 25 observations.
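As an illustration of how relative constraints between consecutive piecewise linear offsets can be realized as pseudo-observations in a least-squares adjustment, a minimal sketch follows; it is a generic formulation, not the parametrization code of the actual analysis software.

```python
import numpy as np

def pwo_constraint_equations(n_pwo, sigma_rel_cm):
    """Relative constraints between consecutive piecewise linear offsets (PWO):
    pseudo-observations x[i+1] - x[i] = 0 with the stated standard deviation
    (e.g., 1.5 cm for zenith wet delays every 30 min, 0.05 cm for gradients
    every 180 min)."""
    H = np.zeros((n_pwo - 1, n_pwo))
    for i in range(n_pwo - 1):
        H[i, i], H[i, i + 1] = -1.0, 1.0
    P = np.eye(n_pwo - 1) / sigma_rel_cm**2   # weight matrix of the constraints
    return H, P

H, P = pwo_constraint_equations(n_pwo=49, sigma_rel_cm=1.5)  # 24 h / 30 min + 1
print(H.shape, P.shape)
```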

Based on the analysis results of 1000 simulation runs, repeatability values for each estimated parameter were calculated as the standard deviation over the estimated parameter values. These repeatability values represent the expected precision and are further used to compare the geodetic and astrometric session performance between different schedules.

3.4 Astronomical simulations

The astronomical performance was evaluated using two independent approaches based on source maps constructed from simulated visibilities. First, these maps were evaluated based on the so-called normalized root mean square error (NRMSE; Chael et al. 2018; Xu et al. 2021b), which is the normalized difference of the flux densities over every pixel between two images. Given two images A and B, with M pixels, the NRMSE of image A relative to B (model image) can be computed based on

$$\begin{aligned} \text {NRMSE} = \frac{\sqrt{\sum _{i=1}^M(A_i-B_i)^2}}{\sqrt{\sum _{i=1}^M(B_i)^2}} \end{aligned}$$
(2)

with i denoting the ith pixel. Second, the maps were evaluated based on their dynamic range, which is defined as the ratio of the peak brightness to the noise level.
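Equation (2) translates directly into code; a minimal sketch:

```python
import numpy as np

def nrmse(image_a, image_b):
    """Eq. (2): normalized root mean square error of image A relative to the
    model image B (both given as 2-D pixel arrays of equal shape)."""
    a = np.asarray(image_a, dtype=float).ravel()
    b = np.asarray(image_b, dtype=float).ravel()
    return np.sqrt(np.sum((a - b) ** 2)) / np.sqrt(np.sum(b ** 2))
```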

For the evaluation based on the NRMSE, an imaging pipeline based on the EHT-imaging library (Chael et al. 2016, 2018) and difmap (Shepherd 1997) was developed. Here, the visibility data were simulated at 8.6 GHz for each schedule. The superresolution image of source 1803+784 on 2020-02-20, from the Monitoring Of Jets in Active galactic nuclei with VLBA Experiments (MOJAVE; Lister et al. 1996), was used as a reference to calculate the phase and amplitude of each observation. Therefore, all the radio sources in the schedules are assumed to have the same structure. The selected reference image shows an extended jet in right ascension spreading up to \({\approx 1.3}\) mas away from the core, which has \({\approx 75}\)% of the total flux density. This moderate structure may be considered representative of the typical structure of geodetic radio sources. The simulated visibility data were used to construct images in difmap, and the constructed images were evaluated based on the MOJAVE image as the ground truth model. Moreover, another set of reference visibilities was simulated for each source in a session by assuming that the network observes only this source for the entire session. This dataset provides the best possible UV coverage for each source and therefore represents the best possible imaging capability for the given network. The corresponding beamsize, which is source- and network-dependent, was calculated based on those reference visibilities and is further utilized during the astronomical analysis as described in Sect. 3.5.

For investigating the dynamic range, the simulations were performed using a dedicated processing pipeline that was originally built when VGOS was designed (Petrachenko et al. 2009). A full description of the initial version of this pipeline is available in Collioud and Charlot (2012). The pipeline takes as inputs the session schedule, a structure model of the source and a frequency setup. The current VGOS frequency setup, which includes four frequency bands (A, B, C and D) with central frequency values of 3.224, 5.464, 6.584 and \({10.424}{\,\textrm{GHz}}\), and a bandwidth of \({448}{\,\textrm{MHz}}\) per band, was utilized for the simulations. The structure model has a total flux density of \({0.33}{\,\textrm{Jy}}\) with a typical core-jet morphology represented by a point-like core component plus five approximately aligned Gaussian components extending up to \({5}\,\text {mas}\) from the core and with a flux density of \({0.13}{\,\textrm{Jy}}\) in total. Simulated visibilities were generated with a realistic random noise level, corresponding to an SNR of 10. From these data, each source in the schedule was then automatically imaged using difmap in a standard way. The parameters of the imaging procedure are identical for all VGOS bands, with the exception of the pixel size, which is set according to the resolution of each band. By examining the noise level in these images, the imaging quality can be assessed, as described in the following section.

3.5 Astronomical analysis

The astronomical analysis is performed using two different and independent processing pipelines. Based on these two approaches, the number of sources with sufficient imaging potential is counted and used to compare the different scheduling approaches.

The astronomical analysis based on the NRMSE approach was performed using the following procedure. To obtain realistic NRMSE results, two tasks were executed before deriving the NRMSE value: (1) the MOJAVE image was blurred with the beamsize obtained from the reference visibilities with the best possible UV coverage of the source, to produce a model image for each source individually, and (2) each of the constructed images had to be aligned to the corresponding model image based on the maximum correlation between them.

The NRMSE value of the constructed image was calculated with respect to the corresponding model image. Note that the NRMSE metric derived in this way evaluates the deviation of the constructed image with respect to the model image derived from the best possible UV coverage that could be provided by the observing network. The NRMSE can have a value in the range of 0–1. The smaller the NRMSE value, the better the constructed image agrees with the reference image. For simplicity, an empirical criterion of NRMSE \(< 0.3\) was chosen in this study to count the number of scheduled sources with sufficient imaging potential.
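A minimal sketch of the image alignment step, using an FFT-based cross-correlation and integer-pixel shifts; the actual pipeline may align at sub-pixel level.

```python
import numpy as np

def align_to_model(image, model):
    """Shift the constructed image (cyclically, by whole pixels) so that its
    cross-correlation with the model image is maximal."""
    xcorr = np.fft.ifft2(np.fft.fft2(model) * np.conj(np.fft.fft2(image))).real
    dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    return np.roll(np.roll(image, dy, axis=0), dx, axis=1)
```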

Additionally, the images were evaluated based on their dynamic range which is an indicator of the image quality, i.e., the higher the value, the better the imaging quality. As mentioned above, the dynamic range was computed as the ratio of the peak brightness of the image to the noise level. The latter was estimated by computing the RMS of the flux density in a box with a width of 1/5th of the image size located outside the source structure at 1/10th of the image size from the top left corner. For this investigation, an average dynamic range of 3000 per band was selected as a threshold for counting the number of sources with sufficient imaging potential. This level was defined based on our experience in achieving sufficiently accurate source structure corrections in geodetic VLBI analysis.
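The noise-box estimate and the resulting dynamic range can be sketched as follows (square images assumed):

```python
import numpy as np

def noise_rms(image):
    """RMS of the flux density in a box with a width of 1/5 of the image size,
    placed at 1/10 of the image size from the top-left corner, i.e., outside
    the (centered) source structure."""
    n = image.shape[0]
    offset, width = n // 10, n // 5
    box = image[offset:offset + width, offset:offset + width]
    return np.sqrt(np.mean(box ** 2))

def dynamic_range(image):
    """Ratio of the peak brightness to the noise level."""
    return np.max(image) / noise_rms(image)
```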

Finally, the imaging capability was also quantified based on the combined number of independent closure phases and closure amplitudes. For one scan, it is given by

$$\begin{aligned} \# {\text {closures}} = \frac{{(N - 1)(N - 2) + N(N - 3)}}{2} \end{aligned}$$
(3)

with N representing the number of stations participating in the scan. Closure phases carry information on source structure from \(N(N-1)/2\) phase observables reduced by phase calibrations on \((N-1)\) baselines to a reference station, i.e., \((N-1)(N-2)/2\) independent closure phases, and closure amplitudes are equivalent to \(N(N-1)/2\) amplitude observables reduced by N antenna gains, i.e., \(N(N-3)/2\) independent closure amplitudes (Thompson et al. 2017). This equation shows that an increase in the number of the observing stations in each scan leads to a quadratic growth in the number of independent closure quantities and thus a significant improvement in the imaging capability.
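Equation (3) as a small helper, returning zero for two-station scans, which form no closures:

```python
def n_closures(n_stations):
    """Eq. (3): combined number of independent closure phases and closure
    amplitudes for a scan with n_stations participating stations."""
    n = n_stations
    return ((n - 1) * (n - 2) + n * (n - 3)) // 2 if n >= 3 else 0

print([n_closures(n) for n in range(2, 10)])   # [0, 1, 5, 11, 19, 29, 41, 55]
```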

4 Results

Figure 4 depicts the session performance w.r.t. geodetic, astrometric and astronomical performance. For the generation of the source-centric schedules, the target number of scans per source (\(n_{target}\)) was set to 20, while the minimum number of scans per source (\(n_{min}\)) was set to 10.

Fig. 4

Session performance. Top charts: VO1203, bottom charts: VO1119. Left charts: EOP precision, middle charts: average station coordinate precision and standard deviation between the telescopes. Right charts, blue bars: spread of source position precision. The horizontal lines mark the minimum, maximum and median precision, while the shaded blue areas depict the distribution among the sources. Green wide bars: astronomical performance based on NRMSE metric. Red bars: average astronomical performance based on the dynamic range metric per band as well as minimum and maximum between bands

The simulation results from the (observed) geodetic scheduling (g) are compared with the proposed source-centric scheduling approach (a). As mentioned earlier, the source-centric scheduling approach is tested utilizing \(n_{minsta}\) between two and seven (a2–a7).

The top charts depict the VO1203 results, while the bottom charts depict the VO1119 results. The left charts represent the simulated precision of the EOPs, derived as the repeatability value of the 1000 simulation runs. The middle charts depict the simulated precision of the station coordinates based on the Euclidean norm of the coordinate precision vector. Here, the average value of the participating stations is listed, as well as their standard deviation. The right charts represent the astrometric results of the source coordinate precision in blue. Here, the mean source coordinate precision, defined as the Euclidean norm of the coordinate precision vector where the right ascension precision is scaled based on source declination, is highlighted, as well as the minimum and maximum precision among all sources. The blue areas depict the distribution of sources within the two extreme values. Furthermore, the right charts also depict the astronomical results. In green, the number of sources with a NRMSE \(<0.3\) is shown. In red, the number of sources with a dynamic range \(>3000\) is depicted. Here, the marker corresponds to the number of sources reaching a dynamic range \(> 3000\), averaged over the four bands. As later discussed in Sect. 5.3, the dynamic range differs between the four VGOS bands. Therefore, the whiskers represent the band with the minimum number of sources reaching a dynamic range of \(>3000\) as well as the band with the maximum.

4.1 VO1203

For session VO1203, the source-centric scheduling approaches a2 and a3 significantly improved the precision of the geodetic parameters. On average, EOP precision is improved by 23% for a2, 18% for a3 and 8% for a4. The station coordinate precision is improved by 15% for a2 and 10% for a3. For a2, the maximum absolute improvement is achieved for stations KOKEE12M and WETTZ13S with 0.9 mm each, while the maximum relative improvement is achieved for KOKEE12M with 30%. For a3, the results are very similar. In absolute terms, the improvement is again highest for WETTZ13S (0.8 mm), followed by KOKEE12M (0.7 mm), while the highest relative improvement is achieved for KOKEE12M with 25%. Therefore, it can be concluded that solutions a2 and a3 lead to almost identical geodetic performance while the simulated precision of the EOPs and station coordinates deteriorates with higher \(n_{minsta}\).

Based on the astrometric analysis, it can be seen that the average source coordinate precision improved with the source-centric scheduling approach compared to the geodetic scheduling solution. Compared to the g-schedule, the improvement of the average source coordinate precision is 42% for a2, 54% for a3 and 71% for a4. Moreover, the source performance is much more homogeneous and the maximum repeatability error is decreased significantly. However, note that the total number of observed sources, and thus of estimated source positions, is simultaneously reduced with higher \(n_{minsta}\), as discussed later in the context of Fig. 5. Finally, the astronomical results reveal that compared to the geodetic scheduling, twice as many sources show an NRMSE of \(<0.3\) using the source-centric scheduling approach. Similarly, based on the dynamic range criterion, it is revealed that especially for a2–a4, twice as many sources reach the required threshold. With higher \(n_{minsta}\) (e.g., a5–a7), the number of sources passing the dynamic range requirement decreases due to the stronger constraints in scheduling. Based on the two astronomical evaluations, it can be concluded that the imaging potential is significantly increased by the proposed source-centric scheduling approach.

4.2 VO1119

For session VO1119, the source-centric scheduling approaches a2–a4 outperform the geodetic solution in the executed simulations. The improvements are comparable to those for VO1203, namely for EOP an improvement of 23% for a2, 19% for a3, 15% for a4 and 5% for a5. Furthermore, note that the nine-station network solution VO1119 drastically outperforms the seven-station network solution VO1203 w.r.t. the simulated EOP precision. On average, dUT1 is improved by a factor of \(\approx 3.5\), polar motion by a factor of \(\approx 2.6\) and nutation by a factor of \(\approx 1.8\). This is mostly explained by the inclusion of station Ishioka (Is) in the network (see Fig. 1). The station coordinate precision is improved by 18%, 16%, 12% and 3%, respectively. The highest absolute improvement for a2–a4 w.r.t. the individual station precision can be seen for KOKEE12M with 1.9, 1.6 and 1.1 mm, and WETTZ13S with 1.0, 0.8 and 0.8 mm, respectively. In relative terms, the improvement is highest for station KOKEE12M with 29, 24 and 17% for a2–a4. The astrometric performance also behaves similarly to VO1203, with improvements of \(> {50}{\%}\) for the average simulated source coordinate precision.

A similar improvement w.r.t. the astronomical performance as for VO1203 can be seen for VO1119, with the number of sources reaching the threshold classifying proper imaging potential almost doubling. Compared to the VO1203 results, the number of sources reaching a dynamic range \(> 3000\) does not decrease as rapidly with \(n_{minsta}\), which is explained by the \(n_{minsta}\) constraint being less severe in the case of a larger network.

5 Discussion

5.1 Session statistics

Fig. 5

Session statistics. Top charts: VO1203, bottom charts: VO1119. Left charts: number of scans color-coded by the number of stations participating in each individual scan. Middle charts: number of observed sources color-coded by number of observations. Right charts: number of observed sources color-coded by the combined number of independent closure phases and closure amplitudes (Eq. 3). The bottom charts share the color-code of the top charts

Figure 5 lists some basic statistics of sessions VO1203 and VO1119. Compared to the geodetic scheduling (g), the number of scans was slightly increased in VO1203 for a2 and a3, and in VO1119 for a2–a4, while the number of scans decreased with higher \(n_{minsta}\). Note that due to the fillin-mode a posteriori (see Sect. 3.1), which allows scans to be added to the schedule ignoring the \(n_{minsta}\) constraint, even the aX version of a schedule can contain scans with fewer than X stations, where X is the value of \(n_{minsta}\).

Based on the number of observed sources, it is evident that schedules with a low \(n_{minsta}\) observe a high number of different sources, while this number drops with higher \(n_{minsta}\) values. Compared to the geodetic scheduling, the number of sources with \(< 33\) observations is significantly decreased from \(\approx {50}{\%}\) to below \({15}{\%}\) for a2 and below \({5}{\%}\) for the remaining versions. The same holds for the combined number of independent closure phases and closure amplitudes (#closures) (Eq. 3). At the other end, the source-centric scheduling no longer contains individual sources scheduled with more than one thousand observations, and the number of sources with more than one thousand #closures is significantly reduced. Therefore, the source-centric scheduling approach results in a more even and homogeneous distribution of scans among sources and thus a more efficient scheduling for imaging purposes.

When looking at the number of sources grouped by the individual number of observations or #closures, it can be seen that more sources tend to have a higher value with increasing \(n_{minsta}\), especially for a2–a6. This means that by increasing the minimum number of stations per scan, the resulting schedule focuses on fewer sources.

Based on these statistics and the results depicted in Fig. 4, one can conclude that using a minimum number of stations per scan of three is sufficient for the seven-station network of VO1203 while the number could be increased to four for the nine-station network without losing significant geodetic parameter precision. It is important to note that scans of only two stations cannot form closure phases and therefore cannot contribute to imaging, while those of three stations cannot form closure amplitudes, in which case the technique of closure imaging, based on these quantities, cannot be applied (Xu et al. 2021b).

5.2 Astronomical performance based on NRMSE

Figure 6 depicts the astronomical results based on the NRMSE metric per source and schedule individually.

Fig. 6

NRMSE per schedule, color-coded by source declination. The red dashed line depicts the 0.3 NRMSE threshold used to evaluate images as having an adequate quality. For each pair of numbers shown in the bottom edges, the left number indicates how many sources stay within the NRMSE threshold, while the right number lists the total number of sources that are scheduled. Top: VO1203; bottom: VO1119

Sources with no imaging possibility (e.g., sources with only two-station scans) are assigned the maximum NRMSE value of 1.0.

Compared to the geodetic schedule g, the source-centric scheduling approach increases the number of sources reaching the NRMSE requirement drastically, by up to a factor of two for some \(n_{minsta}\) settings. For VO1203 a4–a6, all sources have an NRMSE below 0.3, while this is also true for most sources in VO1119 a5–a7. In the source-centric schedules, most sources that do not reach the NRMSE criterion are sources at low declination, demonstrating the intrinsic difficulty of achieving good imaging for low-declination sources with a northern hemisphere network. In the geodetic schedule, however, some sources at northern declinations also have an NRMSE of \(> 0.3\).

Although for VO1119 the schedule a5 can be considered the best performing version based on the NRMSE metric, a4 is superior in other domains, such as the geodetic performance and imaging performance based on the dynamic range criterion, see Fig. 4. A general conclusion from the NRMSE investigation is that the imaging capability increases with \(n_{minsta}\).

Note that the VO1203 results tend to have lower NRMSE values than the VO1119 results, which is not unexpected. As the network grows and new, longer baselines are introduced (such as those to station Ishioka in VO1119), the corresponding beam size of the ideal visibilities decreases, meaning that the model image used for comparing the imaging quality of the schedules will have a higher angular resolution. In other words, the bigger and more global the network, the stricter the constraint applied to the calculation of the NRMSE (due to the higher angular resolution of the model image). Therefore, the NRMSE metric calculated as described in Sect. 3.5 should be used to compare the imaging potential of schedules for a given network but not between networks.

5.3 Astronomical performance based on dynamic range

Figure 7 provides a more detailed insight into the results from the dynamic range investigation.

Fig. 7

Dynamic range per VGOS band for VO1203 (top) and VO1119 (bottom). The geodetic schedule is depicted on the left, while one selected source-centric schedule is depicted on the right, a3 for VO1203 and a4 for VO1119. The numbers at the top indicate how many sources could be imaged (top left), how many provided a dynamic range \(> 3000\) (bottom left, red) and how many sources were scheduled in total (right). The source declination is color-coded

Here, the simulation results are depicted per band and source individually, once for the (observed) geodetic reference schedule (g) and for one selected source-centric schedule (a3 for VO1203 and a4 for VO1119). Some statistics are reported on the top edges of the charts for each scenario as a combination of three numbers: how many sources are included in the schedule (right), how many sources could be imaged (top left) and how many sources reached the dynamic range threshold of 3000 (bottom left, red).

Comparing the geodetic and the source-centric results, almost twice as many sources fulfill the dynamic range threshold. Furthermore, it can be seen that for the source-centric schedules, the distribution of the dynamic range results is a lot narrower. This indicates a more equal performance of the individual sources that can be explained by the better distribution of scans among sources. Additionally, note that for the geodetic schedule, a significant portion of the sources could not be imaged properly based on the provided observations.

Based on the dynamic range criterion, again, a clear declination-based trend can be seen. In particular, sources located in the south suffer from a lower dynamic range due to the imbalanced station network.

Furthermore, a clear frequency dependence can be seen, especially for the geodetic schedule. The dynamic range tends to decrease on average when the frequency increases from A to D band. This effect is to be expected to some extent because the peak brightness varies from band to band. Since the beam is smaller at the higher frequency bands, the source is more resolved and the peak brightness thus becomes smaller. As a result, the dynamic range decreases. In addition, the UV coverage differs from band to band due to the identical bandwidth in all bands, thus representing a higher fraction of the observing frequency in band A than in band D. In other words, the UV coverage is more spread in band A than in band D, leading on average to a better image reconstruction. This effect is mitigated for the source-centric schedules, probably because the UV coverage is in general better for these schedules.

5.4 Scan distribution

Figure 8 depicts the source distribution for the geodetic VO1203 schedule (top left), the a3 VO1203 schedule (top right), the geodetic VO1119 schedule (bottom left) and the a4 VO1119 schedule (bottom right).

Fig. 8

Source distribution of the geodetic VO1203 schedule (top left), the a3 VO1203 schedule (top right), the geodetic VO1119 schedule (bottom left) and the a4 VO1119 schedule (bottom right). The marker size corresponds to the number of scans per source while the number of independent closure phases and closure amplitudes is color-coded. Green outlines mark sources with an NRMSE \(< 0.3\) while markers with a white and red center represent sources with a dynamic range \(> 3000\)

Here, the number of scans per source is represented through the marker size, while the #closures are color-coded. Sources with adequate imaging quality, represented through an NRMSE \(< 0.3\), are marked with green edges. Similarly, sources with a dynamic range of \(> 3000\) throughout all bands are represented by a white and red center.

Comparing the marker sizes between the geodetic schedules (left) and the source-centric schedules (right), it can be seen that the scan distribution is more homogeneous for the latter. For the source-centric schedules, apart from a declination trend, almost every source is observed with the same number of scans.

The same holds for the number of combined independent closure amplitudes and phases per source, represented by the marker color. In the case of the geodetic scheduling, #closures can become as large as 2700 for VO1119, while \(\approx {25}{\%}\) of the sources are scheduled with zero #closures. Furthermore, a clear north–south gradient is present. This gradient can easily be explained by the reduced number of stations that can simultaneously observe the same source per scan, as depicted in Fig. 2.

Based on the NRMSE requirement, all northern hemisphere sources are imaged properly with the source-centric scheduling approach a3 for VO1203, while \({71}{\%}\) of the southern hemisphere sources reach the required threshold. The numbers for VO1119 a4 are \({84}{\%}\) for northern and \({32}{\%}\) for southern hemisphere sources. In contrast, based on the geodetic schedules, the distribution of sources meeting the requirement is a lot less homogeneous. For VO1203 g, only \({50}{\%}\) of the northern hemisphere and only \({32}{\%}\) of the southern hemisphere sources have an NRMSE below 0.3. The numbers for VO1119 g are \({48}{\%}\) and \({19}{\%}\), respectively.

Based on the dynamic range requirement, for VO1203 a3, \({78}{\%}\) of the northern hemisphere sources reach the required threshold, while the number drops to \({38}{\%}\) for the southern hemisphere sources. The numbers for VO1119 a4 are \({100}{\%}\) for northern and \({40}{\%}\) for southern hemisphere sources. Based on the geodetic schedules, the distribution of sources meeting the requirement is, again, a lot less homogeneous. For VO1203 g, only \({50}{\%}\) of the northern hemisphere and only \({13}{\%}\) of the southern hemisphere sources have a dynamic range above 3000. The numbers for VO1119 g are \({57}{\%}\) and \({11}{\%}\), respectively.

Additionally, note that the source-centric scheduling approach no longer includes sources with a declination \(< {-30}{^\circ }\). This is due to the \(n_{minsta}\) requirement and the resulting limited visibility of these low-declination sources with enough stations as depicted in Fig. 2.

5.5 Target number of scans

In the previous example, the target number of scans \(n_{target}\) was set to 20. In this section, different \(n_{target}\) values between 15 and 55 for the VO1119 network are explored to further test the scheduling approach. The \(n_{min}\) value is set to \(\left\lfloor {0.8 \cdot n_{target}}\right\rfloor \). Thus, within this analysis, every source should be observed for an almost equal amount of time, which further challenges the scheduling approach and, in particular, the scan selection algorithm.

Figure 9 depicts the investigation results.

Fig. 9

Statistics and simulation results for various \(n_{target}\) thresholds (abscissa) for the VO1119 network. Top charts: statistics. Middle charts: simulation results. Bottom charts: source distribution examples for \(n_{target}=30\) (left) and 50 (right). Detailed plot descriptions can be found in Figs. 5 (top), 4 (mid) and 8 (bottom)

Based on this analysis, one can see that the total number of observed scans decreases only slightly with increasing \(n_{target}\). Additionally, the number of stations per scan also stays relatively constant. This is a strong indication that the scheduling approach and the source selection algorithm still provide reasonable schedules even though strong scheduling constraints are applied. When looking at the number of observed sources, it is obvious that this number decreases with higher \(n_{target}\) thresholds. Furthermore, one can see that the number of scans and observations per source increases with higher \(n_{target}\) values. Both of these effects are expected and demonstrate that the scheduling approach works as intended and can handle different \(n_{target}\) thresholds.

Based on the simulated geodetic parameter precision, no strong differences can be seen. The simulated EOP precision mostly stayed at the same level; only polar motion in the y-direction (YPO) shows a small negative trend and nutation in the y-direction a small positive trend. The station coordinates also show a slight negative trend with increasing \(n_{target}\), although it is not significant. Overall, the differences are comparatively small. Based on this investigation, it can be concluded that it is possible to achieve high precision of geodetic parameters either by distributing observations among many sources or by focusing observations on a small subset of sources, as long as the source distribution is homogeneous and thus a proper sky coverage can be achieved at each station. However, note that no source-based systematic errors have been simulated, which might change the simulation results in favor of observing more sources. In case of significant source structure effects, distributing observations among many sources might be advantageous.

The astrometric results show an improvement in the average source coordinate precision, explained by the higher number of scans per source. The astronomical results reveal that in this setup, the number of sources meeting the NRMSE and dynamic range criteria decreases with higher \(n_{target}\) values. However, this is, again, mostly explained by the lower total number of observed sources. In Fig. 9, two examples for \(n_{target}=30\) and \(n_{target}=50\) are depicted. In both examples, almost all sources meet the astronomical requirement. Only some low-declination sources have insufficient imaging potential. Furthermore, these plots highlight that the source selection algorithm manages to distribute the observed sources reasonably well, even for high \(n_{target}\) values, especially w.r.t. declination. This is especially important for geodetic parameter estimation since this allows for a good station sky coverage and thus a decent estimation of tropospheric parameters, explaining why no degradation of the geodetic parameters is present.

5.6 Future VGOS networks

Next, the source-centric scheduling approach is tested for two future VGOS networks. First, the VO1119 network is extended by three southern hemisphere stations, the two AuScope telescopes (Lovell et al. 2013) Katherine and Hobart (Australia), as well as a VGOS telescope at Hartebeesthoek (South Africa), leading to a total of 12 stations. Second, the network is further extended by several planned or already constructed stations based on Behrend et al. (2022). The telescope locations are depicted in Fig. 1. For the full-network investigations, sites with twin telescopes are represented by only one station. This leads to a total of 29 stations.

For stations that are not yet built, and for which no antenna characteristics are thus available, standard VGOS parameters were assumed, in particular \({720}{^\circ /\textrm{min}}\) slew rate in azimuth and \({360}{^\circ /\text {min}}\) slew rate in elevation, a total azimuth cable wrap of 540\(^\circ \) and a local elevation cutoff of 5\(^\circ \).
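For illustration, the assumed slew characteristics translate into slew times as in the following sketch; acceleration, cable-wrap handling (540° total wrap) and settling times are neglected or simplified.

```python
def slew_time_s(d_az_deg, d_el_deg, az_rate=720.0, el_rate=360.0, settle_s=0.0):
    """Slew duration for the assumed standard VGOS antenna parameters
    (720 deg/min in azimuth, 360 deg/min in elevation); both axes move in
    parallel, so the slower axis dominates."""
    t_az = abs(d_az_deg) / (az_rate / 60.0)   # rates converted to deg/s
    t_el = abs(d_el_deg) / (el_rate / 60.0)
    return max(t_az, t_el) + settle_s

print(slew_time_s(90.0, 30.0))   # 90 deg in azimuth at 12 deg/s -> 7.5 s
```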

Figure 10 depicts the simulation results as well as two examples of the resulting source distribution.

Fig. 10

Simulation results for various \(n_{minsta}\) thresholds (abscissa) for future VGOS network. Top charts: VO1119 network + three southern hemisphere stations. Middle charts: full 29-station VGOS network. The corresponding station networks are depicted in Fig. 1. Bottom charts: source distribution examples for the southern telescope network solution a3 (left) and the full-network solution a6 (right). Detailed plot descriptions are provided in Figs. 4 (top, mid) and 8 (bottom)

The top charts depict the results from the VO1119 network including the three southern hemisphere stations. Here, the a2 and a3 solutions (\(n_{minsta}=2\) and 3) perform significantly better w.r.t. the EOP parameter precision compared to higher \(n_{minsta}\) versions. This can be explained by the fact that only three southern hemisphere telescopes have been added. When aiming for scans with four or more stations, the southern stations cannot observe southern-declination sources by themselves; as a consequence, they have a worse sky coverage and the session a poorer source distribution in general. Therefore, an \(n_{minsta}\) of three can be seen as a good choice. The resulting source distribution is depicted in Fig. 10 bottom left. This case highlights that it is not possible to derive a simple rule regarding the recommended minimum number of stations per scan based on the number of stations within the network alone. Additionally, the network geometry and in particular the station distribution have to be considered as well.

Compared to the VO1119 results (Fig. 4), the precision of the polar motion and nutation parameters improved by a factor of two to three by adding the three southern hemisphere stations, which can be explained by the new long north–south baselines. This highlights the importance of southern hemisphere VGOS stations for geodesy. In contrast, dUT1 was only improved by approximately \({15}{\%}\). However, the average simulated station coordinate precision was slightly degraded (by \(\approx {10}{\%}\)). In particular, the coordinates of the southern hemisphere stations can only be determined with lower precision than those of the northern stations, since their number of scans and sky coverage are lower, leading to a reduced average performance. Regarding the astrometric results, the improvement is a factor of two for a2–a4, which gradually decreases with higher \(n_{minsta}\) to only \({5}{\%}\) for a7. Again, the special geometry of the network and the additional north–south baselines improve the source coordinate determination, especially for the declination parameter.

The astronomical results further indicate that for this 12-station network, \(n_{minsta}\) cannot be increased too much. Interestingly, the gap between the NRMSE and the dynamic range results is larger than in the previous results. In particular, based on the NRMSE metric, only very few sources meet the target of 0.3. The reason for this lies in the newly added very long baselines between southern and northern stations without many intermediate baselines. Therefore, the scheduled observations were not able to recover the fine structure at the highest angular resolution, which is determined by the longest baselines. Comparing the images to the model images, which were obtained by blurring the ground-truth image with the beamsizes from a schedule with full UV tracks, results in slightly higher NRMSE values than the chosen criterion for many sources; e.g., for the depicted case of a3, 19 sources have an NRMSE \(< 0.3\), while already 50 sources have an NRMSE of \(< 0.35\). This suggests that the constructed images based on the network of these 12 stations can have a high dynamic range but tend to lose the fine structure at the scale of about 0.6 mas (equivalent to the angular resolution at 8.6 GHz provided by this network) in the model image.

For the fully extended 29-station VGOS network (Fig. 10, middle charts), the geodetic parameter precision was further improved by a factor of 1.5–2.0 in the case of EOP, while the station coordinates were improved by 10–\({30}{\%}\) w.r.t. the 12-station solution, depending on which of the a2–a7 solutions is considered. Here, the a2–a7 solutions perform equally well due to the larger VGOS network. Additionally, based on the astronomical investigations, almost all sources meet the required thresholds, with the exception of the deep-southern sources. The study was further extended with higher \(n_{minsta}\) values. Values of up to \(n_{minsta} = 9\) do not show a degradation of the geodetic parameters.

6 Conclusions

Within this work, a source-centric scheduling approach is presented. The algorithm is designed to address the growing importance of proper source structure imaging for VGOS. The key features are a better distribution of scans among sources and thus a more homogeneous source distribution, resulting in improved quality of astronomical and astrometric analysis, while not degrading the geodetic performance. In fact, it is demonstrated that this scheduling approach also improves the precision of the estimated geodetic parameters compared to the currently observed VGOS sessions.

Based on two investigated VGOS networks, one with seven and one with nine stations, simulations reveal that the geodetic precision can be improved by \(\approx {15}{\%}\), while the astrometric results, represented through the mean source coordinate precision, can be improved by \(\approx {50}{\%}\) and are significantly more homogeneous among sources. Furthermore, almost twice the number of sources meet the defined astronomical criteria for providing good imaging quality based on two independent analysis approaches, one based on NRMSE and one based on the image dynamic range.

In addition, different values for the minimum number of stations participating in a scan \(n_{minsta}\) and different thresholds for the target number of scans per source \(n_{target}\) were explored. Although raising these two requirements poses a significant constraint on the scheduling process, the proposed algorithm still manages to generate high-quality results. Only the \(n_{minsta}\) value should not be raised too much. An \(n_{minsta}\) value of three works well for the seven-station network, while the number can be increased to four for the nine-station network. Increasing \(n_{target}\) had a positive impact on the astrometric performance, at the cost of fewer observed sources, while the precision of the estimated geodetic parameters did not significantly change, indicating that the algorithm manages to keep an adequate station sky coverage even when focusing on fewer sources. For building a VGOS celestial frame, an appropriate trade-off should be found between observing a large number of sources per session, which allows for faster construction of the frame at the cost of astrometric precision and imaging quality, and observing fewer sources, which allows for increased astrometric and imaging performance but requires more observing time.

Finally, the proposed source-centric scheduling approach was further tested with two potential future VGOS networks. Based on one test utilizing a 12-station network with three southern hemisphere stations, the EOP precision is improved by a factor of two, highlighting the importance of southern hemisphere VGOS stations. Note that in this case, although a 12-station network was utilized, \(n_{minsta}\) should be kept at three due to the network geometry. Another factor of two in EOP precision is gained by utilizing a full VGOS network of 29 stations. With the full VGOS network, almost all sources meet the requirement of good imaging potential. In summary, both tests confirm that the proposed source-centric scheduling approach generalizes well for future VGOS networks.

7 Outlook

Within this work, two current and two future VGOS networks were investigated. Based on the results, it is already evident that some scheduling parameters are network-dependent, e.g., the minimum number of stations per scan, especially when adding southern hemisphere telescopes, which produces a more imbalanced network distribution. Therefore, further investigations of optimal scheduling parameters need to be conducted for different VGOS networks as soon as new stations are operational and extend the existing network. In particular, the use of evolutionary algorithms for scheduling parameter tuning (Schartner et al. 2021) has proved to be a promising approach for this task.

Furthermore, the source selection algorithm needs to be adjusted to properly cycle through a given source list, e.g., the complete list of ICRF3 defining sources (Charlot et al. 2020), to ensure that the scan distribution among sources is homogeneous not only within one session but also over various sessions spanning several years. This would especially strengthen future celestial reference frame solutions.

Finally, the current Monte Carlo simulations providing the geodetic and astrometric performance only account for tropospheric delays, clock drifts and white noise. The impact of source structure is not reflected in the geodetic parameter precision. One interesting future study could investigate the proper inclusion of source structure effects within these simulations.