1 Introduction

Hydrological models have become essential tools over recent decades for assessing and managing water resources (Zeinalie et al. 2021). Some models can simulate different kinds of scenarios (e.g., climate change, water demand, management, or extreme conditions), providing water authorities with valuable data (Gassman et al. 2014). Complex hydrological models require comprehensive calibration processes, which might reduce their applicability for non-expert users. This has led water managers to employ models that simulate management actions accurately but usually lack a detailed simulation of hydrological processes. Models focused on management actions can be extremely useful for water management and planning purposes (Andreu et al. 1996; Koch and Grünewald 2009; Sieber and Purkey 2007). However, if these models are based on simple hydrological models with coarse spatial resolutions (e.g., CEDEX 2020), their use under changing conditions, for example due to climate change, might lead to faulty decisions (Kirchner 2006).

On the other hand, widely used catchment-scale hydrological models with an accurate representation of environmental processes (e.g., SWAT, TOPMODEL, MIKE SHE) might not be able to simulate the numerous actions implemented for water resources management. This limits their usefulness and reliability for water resources managers, since most river basins are not in a natural regime (Grill et al. 2015; Haddeland et al. 2014; Sofi et al. 2020). Some authors have coupled hydrological models with others that allow incorporating management actions (Dash et al. 2022; Doulgeris et al. 2015; Gaiser et al. 2008; Kolokytha and Malamatari 2020). However, coupling models requires extensive user expertise and thus might not always be a viable option.

Some large-scale hydrological models simulate hydrological processes and human actions on water resources simultaneously (e.g., Alcamo et al. 2003; Burek et al. 2020). This scale might be useful for global assessments, but it is limited for water management purposes at smaller scales (e.g., the catchment scale) due to coarse resolution and the simplification of hydrological and management processes. To the best of our knowledge, there is no open-source model capable of simulating both sets of processes simultaneously in appropriate detail, i.e., with a realistic simulation of hydrological processes, a flexible configuration of management actions, and a detailed spatio-temporal scale. Bridging this gap would facilitate integrated planning of water resources under different scenarios (Silva-Hidalgo et al. 2009).

Among catchment-scale hydrological models, the Soil and Water Assessment Tool (SWAT) is one of the most widely used (Arnold et al. 2012; Fu et al. 2019; Gassman et al. 2014). SWAT has always been able to simulate some management actions (i.e., reservoir releases, water transfers, and irrigation), albeit in a simplistic and limited manner (Neitsch et al. 2009). Thus, it was usually coupled with other software (e.g., Ashraf et al. 2017; Ayele et al. 2022; Dash et al. 2022; Phung et al. 2022; Zhang et al. 2023). A completely restructured version of the model, SWAT+, was released some years ago and incorporates new capabilities (Bieger et al. 2017). The flexibility of the model in simulating management actions has been noticeably improved in SWAT+.

Management actions in SWAT+ are configured through decision tables (Arnold et al. 2018), which evaluate input conditions to trigger actions. The simulation of water transfers requires a new water allocation module developed for SWAT+, which controls the movement of water (both for irrigation and for water transfers) between SWAT+ objects. This module has the potential to significantly improve SWAT+ capabilities. Despite the relevance of these novel SWAT+ features for water management, studies implementing them (Chawanda et al. 2020; Tigabu et al. 2024) and evaluating their implementation (Wu et al. 2020) are scarce.

Incorporating water management might increase the reliability of SWAT+ models when simulating highly managed river basins. This hypothesis was tested in the Upper Tagus River basin (UTRB), the most populated river basin in the Iberian Peninsula. The complex management of this catchment, with more than 60 reservoirs, numerous water transfers, and significant water demands, was assessed through a comprehensive compilation and analysis of data and documents. Climate change is linked to diminishing water resources in the Spanish portion of the Tagus River basin: streamflow at its outlet was 20% lower between 1980 and 2018 than between 1940 and 1980, coinciding with a 12% reduction in precipitation (CHT 2023). A reliable hydrological model, capable of simulating both hydrological and management processes, is required to evaluate and address the future management of water resources in this basin.

This study pioneers the use and analysis of the new SWAT+ water management functions in a case as complex as the UTRB. A SWAT+ model of this basin was previously calibrated in areas under natural regime to ensure that the starting point of the management implementation was an accurate simulation of hydrological processes. Decision tables and the water allocation module were used to simulate different management actions (irrigation, and water transfers outside the basin, between reservoirs, and for human consumption), and release rules were defined for more than 30 reservoirs with different purposes. The objectives were to explore and explain the functioning of these new features, to analyse the implementation of the management actions, and to assess whether incorporating this management (based on actual operations) improves the simulation of streamflow at the basin outlet. Including the simulation of water management is paramount to improving model reliability, and this work aims to serve as a reference for future SWAT+ water management studies.

2 Methods

2.1 Study Area

The Tagus River is the longest river (1,092 km) on the Iberian Peninsula, and its basin is the third largest (83,678 km2) in this region. Its upper and middle reaches are located in Spain and its lower reaches in Portugal (Fig. 1). It is the most populated river basin on the Iberian Peninsula, with more than 11 million inhabitants and important cities such as Madrid and Lisbon. This study focuses on the Upper Tagus River basin (UTRB), which is entirely located in Spain and densely populated (around 70% of the basin population lives in this region). The Tagus River Basin Management Plan (TRBMP), prepared by the Tagus River Basin Authority (Confederación Hidrográfica del Tajo in Spanish, hereafter CHT), regulates the management of water resources in the Spanish part of the basin (CHT 2023). This management is performed at the exploitation-system scale (Fig. 1); exploitation systems are areas whose water demand can be met with their own water resources, mostly coinciding with the drainage areas of the main Tagus River tributaries.

Fig. 1

Location of the UTRB and exploitation systems, and subbasins, reservoirs, irrigated HRUs, and channels defined in the model

The UTRB accounts for about 40% of the Tagus River Basin, but it is highly relevant for water resources. It supplies a significant portion of the basin’s renewable water resources and satisfies most of the water demands, including the Tagus Segura Water Transfer (TSWT), which on average diverts 330 cubic hectometres (hm3, i.e., 10^6 m3) per year to the south-east of Spain to supply agricultural demands and drinking water for 2.6 million inhabitants (San-Martín et al. 2020). The outlet of the UTRB is located at the streamflow gauging station in Talavera de la Reina (Fig. 1).

Three mountain ranges surround the UTRB: the Central System in the north, the Iberian System in the east, and the Mountains of Toledo in the southwest. The elevation of the basin varies considerably (Fig. 1), reaching 2,350 m.a.s.l. in the highest peaks of the Central System, 1,900 m.a.s.l. in the Iberian System, and 1,600 m.a.s.l. in the Mountains of Toledo. The outlet of the basin is at 360 m.a.s.l. Steep slopes can be found mostly in the headwater regions, whereas the lower parts of the catchment are flatter.

The UTRB is characterized by a continental Mediterranean climate, which varies depending on factors such as altitude, proximity to the ocean, and latitude. The mean precipitation for the period 1980–2010 was 540 mm, with values higher than 1,200 mm in the Central System and values below 400 mm in the driest areas in the south. The mean temperature also varied across the UTRB, with averages of 13 °C in the east, 17 °C in the west, and 8 °C to 10 °C in the Central System. Climate change effects on this basin are already noticeable (especially since the 1980s), with an increase in temperature and a reduction of precipitation and streamflow (CHT 2023; San-Martín et al. 2020).

The geology of the basin is highly heterogeneous, with igneous and metamorphic rocks in the north and southwest, a predominance of calcareous rocks in the east, Tertiary and Quaternary detrital materials in the central section, and areas with evaporitic rocks in the southern part of the basin (IGM 1972–2003). Different soil types can be found in the UTRB; calcaric cambisols (43%) are dominant followed by other types such as dystric regosols (15%), vertic luvisols (9%), calcaric fluvisols (8%) and humic cambisols (7%) (FAO et al. 2012).

Covering 49% and 43% of the area, respectively, natural vegetation and agricultural land dominate the UTRB landscape (Ministerio de Fomento 2018). Half of the natural vegetation areas consist of forests (evergreen, deciduous, and mixed), while other kinds of vegetation (pastures, shrublands, mixed vegetation, etc.) cover the other half. Cereals (wheat and barley) are the dominant crops, followed by other crops such as sunflower and legumes (MAPA 2020). Irrigated crops were estimated to cover around 5% of the agricultural land (EEA 2012), with corn as the predominant irrigated crop. Urban and industrial areas cover around 4% of the UTRB, and other uses such as water bodies and barren land are minor.

A detailed evaluation of the water resources infrastructure and management in the UTRB is presented in Appendix A. Further information about the hydrological and geological characteristics of the UTRB can be found in previous studies (Sánchez-Gómez et al. 2024a, 2024b).

2.2 Previous Work: Construction and Calibration of the Model

The work that preceded this study involved the set-up and calibration of a detailed SWAT+ model for the UTRB. The model set-up included an accurate implementation of the agricultural land uses and their management. This was followed by a zonal sensitivity analysis and soft calibration in geological regions (Sánchez-Gómez et al. 2024b), and concluded with a multi-spatial and multi-criteria hard calibration and validation. A more detailed description is beyond the scope of this manuscript, but the comprehensive calibration performed ensures that the starting point of the work presented here is a model that accurately simulates the hydrological processes and the streamflow in areas of the UTRB under natural regime. The model performance in simulating the inflow of reservoirs with catchments under natural regime is presented in Table 1, using several commonly used performance metrics (Moriasi et al. 2015): Nash–Sutcliffe Efficiency (NSE), Coefficient of Determination (R2), Percent Bias (PBIAS), Root Mean Square Error (RMSE), and Kling–Gupta Efficiency (KGE) (Gupta et al. 2009).

Table 1 Statistical performance of monthly reservoir inflows for the simulated period (2010–2018). Only reservoirs with catchments in a natural regime were considered

A good performance was achieved for most of the reservoirs, especially for the most important ones. This is particularly satisfactory considering that a large and heterogeneous catchment is simulated with a single model set-up (as opposed to individual models for each reservoir catchment, which would not be operational for the purpose of this study). Thus, as mentioned above, the calibrated model is a good starting point for simulating the management of water resources.
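For reference, the metrics used throughout this study can be computed from paired observed and simulated series as in the minimal base-R sketch below. Standard formulations are used (Moriasi et al. 2015; Gupta et al. 2009); the sign convention adopted for PBIAS (positive for overestimation) is an assumption that matches the usage in Sect. 3.4.

```r
# Minimal sketch (base R) of the performance metrics used in this study.
# obs and sim are paired numeric vectors (e.g., monthly inflows).
gof_metrics <- function(obs, sim) {
  stopifnot(length(obs) == length(sim))
  nse   <- 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)   # Nash-Sutcliffe Efficiency
  r2    <- cor(obs, sim)^2                                     # coefficient of determination
  pbias <- 100 * sum(sim - obs) / sum(obs)                     # percent bias (+ = overestimation, assumed convention)
  rmse  <- sqrt(mean((obs - sim)^2))                           # root mean square error
  r     <- cor(obs, sim)                                       # Kling-Gupta Efficiency components (Gupta et al. 2009)
  alpha <- sd(sim) / sd(obs)
  beta  <- mean(sim) / mean(obs)
  kge   <- 1 - sqrt((r - 1)^2 + (alpha - 1)^2 + (beta - 1)^2)
  c(NSE = nse, R2 = r2, PBIAS = pbias, RMSE = rmse, KGE = kge)
}
```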

2.3 Simulating Water Resources Management in the Upper Tagus River Basin

Water management actions in SWAT+ can be simulated through decision tables and the water allocation module. Decision tables allow defining flexible management schedules based on conditions (Arnold et al. 2018). There are three kinds of decision tables in SWAT+: one for land management (planting, fertilizing, etc.), one for reservoir releases, and one for water transfers (in the lum.dtl, res_rel.dtl, and flo_con.dtl files, respectively). The water allocation module serves to define source and demand objects and to configure water allocations among them, with applications such as irrigation or water transfers. The configuration of management actions in the UTRB was based on the compilation and analysis of management data presented in Appendix A. Given the novelty of these SWAT+ features, a description and examples of their functioning are presented in Appendix B.

2.3.1 Implementation of Irrigation

The total water demand for irrigation in the UTRB and its monthly variation were analysed (Appendix A.2.) and estimated at 422 hm3 per year on average. Irrigation occurs mainly from April to October, and the demand is highest during summer, especially in July and August. Rivers were found to be the most common source of water for irrigation (80% of the demand is supplied from this source, Appendix A.2.). The water allocation module was configured to irrigate HRUs with irrigated crops (AGRR) using channels as sources.

A custom irrigation decision table was created for this basin (Fig. 2), using the month of the year, the water stress (w_stress), and the number of days since the previous irrigation application (days_irr) as conditions. Water stress represents the proportion of plant growth occurring on a given day relative to the potential growth if water were not a limiting factor. A value of 0 indicates no growth due to water stress, while a value of 1 indicates no inhibition. The created decision table (Fig. 2) triggers irrigation between April and October when w_stress drops below 0.9 and during the rest of the year if w_stress drops below 0.6. From the different irrigation types predefined in the model (irr.ops file), the sprinkler_med option was selected.

Fig. 2

Decision table created in the lum.dtl file to configure irrigation in the UTRB and flow chart of its functioning. Different colours indicate the decision table quadrants: blue for conditions, orange for alternatives, green for outcomes, and yellow for actions
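For illustration, the trigger logic encoded in the decision table of Fig. 2 can be summarised in the base-R sketch below. This is not the lum.dtl syntax: the thresholds follow the description above (0.9 from April to October, 0.6 during the rest of the year), and the minimum gap between applications (min_gap_days) is a hypothetical stand-in for the days_irr condition.

```r
# Illustrative sketch of the irrigation trigger logic in Fig. 2 (not lum.dtl syntax)
should_irrigate <- function(month, w_stress, days_since_irr, min_gap_days = 1) {
  # days_since_irr mirrors the days_irr condition; min_gap_days is hypothetical
  if (days_since_irr < min_gap_days) return(FALSE)
  threshold <- if (month >= 4 && month <= 10) 0.9 else 0.6   # Apr-Oct vs. rest of the year
  w_stress < threshold                                        # trigger when stress falls below the threshold
}

should_irrigate(month = 7, w_stress = 0.85, days_since_irr = 5)   # TRUE: summer day under water stress
```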

The water allocation file (see Appendix B.1.) was configured to define demand and source objects and to specify the irrigation amount. The demand objects (irrigated HRUs) and source objects (closest channels) were obtained from the HRU shapefile created during model construction. In the water_allocation.wro file, one table was created for each source (which is necessary when the source is a channel) with all its demand objects, using an R script (provided as Supplementary material). In several cases, source channels with small drainage areas had insufficient water to meet the water demand of the HRUs. Consequently, a nearby larger channel was defined as the source object instead of the closest channel, as is done in practice. Finally, 118 water allocation tables were constructed to irrigate the 282 AGRR HRUs. The first table of the example water_allocation.wro file included in Appendix B.1. (Figure 10) illustrates one of the water allocation tables used for irrigation. In these tables, the name of the irrigation decision table created (irr_crt) was introduced in the WITHDR column. The amount of irrigation applied (mm per application) and the irrigation decision table were adjusted to values resulting in a simulated demand similar to the estimated demand.
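The supplementary R script is not reproduced here, but the grouping step it performs (one water allocation table per source channel, listing all the irrigated HRUs served by that channel) could be sketched as follows; the column names and values are hypothetical.

```r
# Minimal sketch: group irrigated HRUs by their assigned source channel
# (hypothetical column names and values; the supplementary script also writes
#  the resulting tables in the water_allocation.wro format)
hru_src <- data.frame(
  hru_id     = c(101, 102, 205, 310),
  channel_id = c(12, 12, 37, 37)     # closest channel, or a larger one reassigned manually
)
alloc_tables <- split(hru_src, hru_src$channel_id)   # one table per source channel
length(alloc_tables)                                 # number of allocation tables (118 in the UTRB)
```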

2.3.2 Implementation of Water Transfers

The simulated water transfers can be classified into three types: water transfers that extract water from the modelled basin (TSWT, Alberche irrigation canal, and two power plants that use water for cooling), water transfers that divert water from one reservoir to another, and water transfers for human and industrial consumption, which transfer water from reservoirs to Water Purification Plants (WPPs). Most of the water that is transferred for human consumption is eventually discharged back into the stream network through Wastewater Treatment Plants (WWTPs). Daily records of water transferred were obtained (CEDEX 2021) and analysed (see Appendix A.4.). Figure 3 presents a scheme of the different water transfers implemented in the model, classified according to their type and transferred volumes.

Fig. 3

Scheme of the different water transfers and diversions modelled in the UTRB

The water allocation file and flow condition decision tables were used to simulate water transfers. A different water allocation table was created for each type of transfer, and individual water allocation tables were created for each of the six water transfers whose source was a channel, resulting in nine tables. In these tables, the source, receiving object, and amount transferred (or decision table used to calculate the demand) for each water transfer were specified (see Appendix B).

For each water transfer, the reservoir actually used as a source was defined as the source object in the water allocation tables. For small reservoirs not included in the model set-up, a channel was used as the source instead.

The definition of demand objects varied among water transfer types. For water transfers between reservoirs (five in total, amounting to 120 hm3/year), the reservoirs actually receiving the water were defined as demand objects. Four water diversions outside the basin were modelled: two are water transfers (TSWT and Alberche irrigation canal), and two are power plants that evaporate large volumes of water, amounting to 459 hm3 per year in total. Arbitrary objects not included in the model were used as their demand objects (e.g., for the TSWT, an arbitrary object called tswt was defined). An example water allocation table similar to the one used for water transfers outside the basin is presented in Fig. 10 (see Appendix B).

The water transferred for human consumption amounts to 600 hm3 per year according to the available data (CEDEX 2021). Since the current version of SWAT+ does not yet support simulating the full chain of water management actions (withdrawal from sources, transport, purification in WPPs, distribution to urban centres, treatment in WWTPs, and release to the environment), the simulation of water transfers for human consumption was limited to the initial and final steps: withdrawing the diverted amounts from the sources and releasing them at WWTP locations. Seventeen release points were defined based on the national authorized discharge census dataset (MITECO 2022). These points were defined considering the distribution of the WWTPs and ensuring that every exploitation system had at least one. The transferred volume (543 hm3/year) was distributed proportionally among these 17 points based on the calculated discharge volume and their location relative to the sources (Appendix A.5.).
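The proportional split among release points can be illustrated with the sketch below; the discharge volumes per point are hypothetical, and in the actual configuration the location of the points relative to the sources was also considered.

```r
# Minimal sketch: distribute the annual transferred volume among release points
# in proportion to their authorised discharge volumes (hypothetical values)
total_transfer_hm3 <- 543
discharge_hm3 <- c(point_01 = 120, point_02 = 65, point_03 = 30)   # per release point
allocated_hm3 <- total_transfer_hm3 * discharge_hm3 / sum(discharge_hm3)
round(allocated_hm3, 1)
```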

Once the source and demand objects were defined in the water allocation tables, the conditions of the water transfers were defined using flow condition decision tables (flo_con.dtl file). These tables were configured to transfer, for each two-month period, the average volume of water observed for the period 2000–2020 (CEDEX 2021). For the two power plants, a daily volume was used instead of a decision table due to the lack of accurate data. For the 20 water transfers modelled through decision tables, 36 flow condition decision tables were created. The reason is that, for human consumption water transfers, the total volume of water transferred was distributed among the 17 release points, which led to some water transfers sending water to more than one release point and vice versa. In the water allocation tables, the created flo_con decision tables were assigned to their respective source and demand objects.
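The two-monthly rates assigned in the flo_con decision tables were derived from the daily transfer records; a minimal sketch of this aggregation (with placeholder values and illustrative column names) is shown below.

```r
# Minimal sketch: average transferred volume per two-month block from daily
# records (placeholder values; column names are illustrative)
dates   <- seq(as.Date("2000-01-01"), as.Date("2020-12-31"), by = "day")
records <- data.frame(date = dates, volume_m3 = runif(length(dates), 0, 2e5))
records$block <- (as.integer(format(records$date, "%m")) - 1) %/% 2 + 1   # 1 = Jan-Feb, ..., 6 = Nov-Dec
aggregate(volume_m3 ~ block, data = records, FUN = mean)   # mean daily volume per two-month block
```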

2.3.3 Implementation of Reservoirs

Reservoirs in the UTRB that fulfilled the criteria explained in Appendix A.3. were implemented in the model using a reservoir shapefile during the watershed delineation. The default reservoir volumes were changed to realistic values following the recommendations explained in Appendix B.2. The initial reservoir volume was also configured: the observed volume (as fraction of the total volume) was extracted for the first day of the warm-up period for each reservoir, and eight different initial conditions ranging from 0.2 to 0.9 were created and assigned to the reservoirs.
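How the observed start-of-warm-up storage fractions could be mapped onto the eight predefined initial conditions is sketched below; the nearest-class assignment rule and the reservoir values are assumptions for illustration only.

```r
# Illustrative sketch: assign each reservoir the closest of the eight initial
# conditions (fractions of total volume); values and the nearest-class rule are assumptions
classes      <- seq(0.2, 0.9, by = 0.1)                            # eight initial conditions
obs_fraction <- c(res_01 = 0.34, res_02 = 0.78, res_03 = 0.55)     # observed on day 1 of the warm-up (hypothetical)
assigned     <- classes[sapply(obs_fraction, function(x) which.min(abs(classes - x)))]
setNames(assigned, names(obs_fraction))
```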

Reservoir outflows and storage were analysed using observed data (CEDEX 2021) and simulated using reservoir release decision tables. Five different reservoir groups were defined according to their storage and release patterns (see Appendix A.3.), and three different types of release decision tables were created: one for Group 1 (reservoirs with little storage variation), one for Groups 2 to 4 (which have different release and storage patterns), and one for reservoirs with no data (Group 5). All the release decision tables were built to work under three conditions: normal functioning, i.e., when the reservoir storage is within the normal range (below an emergency threshold and above a water scarcity threshold); an emergency scenario, i.e., when the reservoir volume is above the emergency threshold; and a water scarcity scenario, i.e., when the reservoir volume is below the water scarcity threshold. The emergency and water scarcity thresholds were defined for each reservoir after exploring the distribution of daily storage records grouped by month, and were adapted in some cases to match the observed records (final values are presented in Table 9, Appendix C).
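The threshold exploration can be sketched as follows: percentiles of the daily storage fraction, grouped by month, provide candidate emergency and scarcity thresholds. The percentile choices and placeholder data shown are illustrative and are not the values finally adopted (Table 9, Appendix C).

```r
# Minimal sketch: candidate thresholds from the monthly distribution of daily
# storage fractions (placeholder data; percentile choices are illustrative)
dates        <- seq(as.Date("2000-01-01"), as.Date("2019-12-31"), by = "day")
storage_frac <- runif(length(dates), 0.3, 0.95)    # daily storage / total capacity (placeholder)
month        <- format(dates, "%m")
emergency    <- tapply(storage_frac, month, quantile, probs = 0.95)   # upper end of the monthly distribution
scarcity     <- tapply(storage_frac, month, quantile, probs = 0.10)   # lower end of the monthly distribution
round(rbind(emergency, scarcity), 2)
```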

The water scarcity and emergency scenarios were similar for all three types of release tables. For the water scarcity scenario, reservoirs were configured to release the quarterly environmental flows established by the TRBMP (CHT 2023) for each reservoir. For the emergency scenario, the excess volume (the volume of water above a certain threshold) was released over a few days. The number of days over which to release the excess volume was initially set to three and then adapted to improve the simulation (final values are presented in Table 9, Appendix C).

Under normal conditions, there were differences between the three release table types. For Group 1, the aim was to maintain a constantly high volume of water, so under normal conditions the release tables were configured to release the greater of two values: the inflow to the reservoir or the quarterly environmental flow. The emergency and water scarcity volume thresholds for these reservoirs are very high (e.g., 0.96 for emergency and above 0.8 for water scarcity, Table 9, Appendix C), so the range of normal volumes is narrow. However, these values are justified by the records (the 25th percentile of the daily volume is always above 75% of the storage capacity).

There were three reservoirs without release and storage data. For one of them, La Tosca Reservoir, a release table similar to the one used for Group 1 was used. Since this reservoir does not currently serve a specific purpose and is situated upstream of other reservoirs, releasing the entire inflow was deemed an appropriate solution to avoid affecting the operation of the downstream reservoirs. For the remaining reservoirs without data (Guajaraz and Finisterre Reservoirs), an emergency release was configured when the volume exceeds 90% of the total capacity, and the release under normal conditions was configured to match the environmental flow. The current release of these two reservoirs is extremely low, even though the environmental flows should be released according to the TRBMP. For the remaining reservoir groups (2, 3, and 4), a flexible normal scenario was created, releasing a two-monthly rate calculated as the median of the recorded values for the period 2000–2019 (Appendix C). Although for some reservoirs three or four different release rates would have been enough to capture their monthly variation, establishing two-monthly rates allowed a similar table to be used for all these groups and better accounted for seasonal variations. A detailed example of release tables can be found in Appendix B.2.
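A condensed base-R sketch of the three-scenario logic used for Groups 2 to 4 is given below; the actual rules are res_rel.dtl decision tables, and the thresholds, two-monthly rates, and number of release days are reservoir-specific (Table 9, Appendix C). For Group 1, the normal-condition release would instead be the greater of the inflow and the environmental flow.

```r
# Illustrative sketch of the three-scenario release logic (Groups 2-4); the
# actual rules are implemented as res_rel.dtl decision tables in SWAT+
daily_release <- function(volume, capacity, emerg_frac, scarcity_frac,
                          env_flow, bimonth_rate, release_days = 3) {
  # volume and capacity in hm3; env_flow and bimonth_rate as daily rates (units illustrative)
  frac <- volume / capacity
  if (frac > emerg_frac) {
    (volume - emerg_frac * capacity) / release_days   # emergency: evacuate the excess over a few days
  } else if (frac < scarcity_frac) {
    env_flow                                          # water scarcity: fall back to the environmental flow
  } else {
    bimonth_rate                                      # normal: two-monthly median of observed releases
  }
}
```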

2.4 Evaluation of the Management Application

Once all water management actions were implemented in the model, it was run for the period 2010–2019 using the parameter set identified as best during model calibration. To evaluate the implementation of the management actions, the following output files were used: water_allo_day.txt, which contains the outputs for water transfers and irrigation, and reservoir_day.txt, which contains the simulated reservoir storages and outflows. Variables extracted and analysed for reservoirs were inflow, storage and outflow. Variables extracted and analysed for the water allocation were the type of demand, type and number of the demand and source objects, and the daily demand and withdrawal in m3/day.
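A hedged sketch of the post-processing of water_allo_day.txt is shown below; the number of header lines and the column names are assumptions and should be adjusted to the actual layout of the SWAT+ version used.

```r
# Hedged sketch: read a SWAT+ daily water allocation output file and aggregate
# withdrawals to annual totals. The skip value and column names (yr, withdrawal)
# are assumptions; check the actual file layout of the SWAT+ version used.
allo <- read.table("water_allo_day.txt", header = TRUE, skip = 1,
                   stringsAsFactors = FALSE)
annual_hm3 <- aggregate(withdrawal ~ yr, data = allo,
                        FUN = function(x) sum(x) / 1e6)   # m3/day summed over the year, converted to hm3
annual_hm3
```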

Reservoir outputs (outflow and storage) were compared with observed records, assessing the simulation performance at monthly scale using different metrics (NSE, R2, PBIAS and RMSE) and creating graphs of simulated vs. observed daily data. Daily water allocation outputs were evaluated to determine the irrigation demand, withdrawal, and unmet demand in each HRU (average annual) and in the entire basin (annual values). The relationship of irrigation demand and supply with other variables was explored using linear regression and Pearson correlation coefficients (PCCs). The simulation of water transfers was compared with the observed average annual values.
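The correlation and regression analysis can be reproduced with base R, as in the sketch below (the annual values shown are hypothetical).

```r
# Minimal sketch: Pearson correlation and linear regression between annual
# irrigation demand and climate drivers (hypothetical values)
annual <- data.frame(
  demand_hm3 = c(395, 430, 410, 380, 425),
  precip_mm  = c(610, 470, 520, 650, 480),
  pet_mm     = c(1080, 1180, 1120, 1040, 1160)
)
cor(annual$demand_hm3, annual$precip_mm)         # Pearson correlation coefficient (PCC)
summary(lm(demand_hm3 ~ pet_mm, data = annual))  # linear regression against potential ET
```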

Finally, the impact of the management implementation on the performance of the calibrated model was evaluated. Two scenarios were compared: one in which all the configured management files were active, and another in which none of the management files were used (no water allocation, no water management decision tables) and reservoirs were set to become operational after the simulation period (thus remaining inactive during the simulation), i.e., simulating the watershed in natural regime. Daily channel outputs were extracted for both scenarios and compared to the records of the Talavera gauging station (CEDEX 2021), located at the outlet of the UTRB. Widely used performance metrics for hydrological modelling (NSE, R2, PBIAS and RMSE; Moriasi et al. 2015) and the hydrographs were used for this comparison.

3 Results

3.1 Irrigation Implementation

The definition of source and demand objects resulted in 118 channels used as sources and 282 irrigated HRUs, and irrigation outputs were extracted at daily scale for every irrigated HRU and aggregated to annual values at basin scale. Table 2 presents the average simulated demand and withdrawal (in hm3/year) for each year and for the entire simulation period, the percentage of withdrawal relative to the demand, and the average precipitation and potential evapotranspiration in the basin. The relationships among these variables were explored using scatter plots and linear regressions, extracting PCC values.

Table 2 Average and annual values of precipitation, demand, withdrawal and percentage of the demand supplied for the simulated period in the UTRB

An average annual demand of 409 hm3 was simulated, which is 97% of the estimated value (422 hm3, Appendix A.2.). The irrigation decision table (particularly the days_irr parameter) and the water allocation tables (particularly the amount of water per application) were adjusted to achieve this value. This process involved balancing an accurate demand, realistic irrigation timing, and a realistic irrigation amount that meets most of the demand. The correlation of actual evapotranspiration with precipitation is very high (PCC of 0.86), because it ultimately depends on water availability, so potential evapotranspiration was compared with water demand instead. The irrigation demand was found to be strongly influenced by precipitation (PCC of −0.85, Table 2), and consequently by actual evapotranspiration. Demand in years with more precipitation is generally lower than during drier years. Demand increases with potential evapotranspiration, which was the most explanatory variable for water demand (PCC of 0.92) among those evaluated. As water demand increases, the withdrawal of water also increases (PCC of 0.96), but it is limited by the availability of water in the sources. Thus, the percentage of the demand met is generally lower (PCC of −0.83) in years with higher demand; it ranges from 68% in the driest year with the highest demand to 79% in the wettest year with the lowest demand. Surface water resources supply around 80% of the irrigation demand in the UTRB (Appendix A.2.), while the remaining 20% comes from groundwater. Accordingly, implementing groundwater sources (i.e., aquifers) as additional sources of irrigation water in the model, especially in areas where the percentage of the demand met is low, would increase the withdrawal and the demand met, and therefore lead to simulated irrigation amounts closer to the estimated ones.

The demand and withdrawal of water for irrigation in the UTRB not only vary between years, but also seasonally. Irrigation is necessary from March to October (CHT 2023), when the crops are growing, precipitation is low, and air temperatures are high. Due to these factors, the demand for irrigation water is highest during summer months (from June to September). Consequently, an irrigation decision table was developed to reflect this seasonal variation (see Sect. 2.3.1.). Figure 4 displays the average monthly values of simulated demand and water supply and the estimated demand in the UTRB (CHT 2023).

Fig. 4

Average estimated demand and simulated demand and withdrawal for the simulated period (hm3/month)

The model's representation of the demand variation aligned well with the estimated data (Fig. 4). Since the decision table was created to irrigate from April to October, some demands observed in March were not simulated, and the simulated demand in April was almost negligible. On the other hand, the simulated demand at the end of the summer, particularly in September, was noticeably higher (80%) than the estimated demand (Fig. 4). This might be due to some of the irrigated crops in the UTRB being planted and harvested earlier than the corn implemented in the model, but also to potential irrigation restrictions that are not considered in the irrigation decision table. Despite the higher demand, the simulated withdrawal in September is very close to the estimated demand (Fig. 4). Slight modifications to the decision table (e.g., including the month of March and reducing the water stress threshold to increase irrigation) or scheduling an earlier harvest for part of the irrigated crops may improve the fit between simulated and estimated demand. The seasonal variation of the withdrawal is similar to that of the simulated demand and can therefore be considered appropriate. Increasing this variable (e.g., by including groundwater sources) to meet the entire demand would improve its simulation.

The spatial distribution of these variables at HRU scale was explored. Figure 5 shows the average annual values of demand (in mm/year) and the percentage of the demand met for each irrigated HRU. The HRUs with the highest demand are located in the central part of the basin, where the Jarama, Tajuña, and Tagus Headwaters exploitation systems converge. Irrigated lands have historically spread over the river terraces in this area due to the availability of water and fertile soils. However, warm and dry conditions characterize this area, and the simulated demand of some HRUs exceeds 1,500 mm. Some HRUs with a high water demand can also be found in the north of the basin, due to the arid conditions in the upper part of the Henares exploitation system.

Fig. 5

Spatial distribution of the demand (mm/year) and percentage of the demand met at HRU scale

Examining the percentage of demand supplied (Fig. 5), it appears that the average 27% of unmet demand in the basin (Table 2) is caused by a few HRUs where only a small fraction or none of the demand is met. The most significant case is an HRU near the outlet of the Tajuña exploitation system. This HRU has the highest demand, totalling 14 hm3/year; because its water source often cannot meet this demand, only 7% of its demand is supplied. Other HRUs where a significant portion of the demand is unmet are located in the upper part of the Henares exploitation system and near the UTRB outlet (Fig. 5). Overall, an accurate simulation of irrigation in the UTRB (in terms of the amount and timing of demand and water supply) was achieved with the SWAT+ water allocation module and decision tables, although small adjustments to the sources used for irrigation could further improve it.

3.2 Water Transfers

Twenty-two water transfers were implemented in the model: four transfer water outside the basin, five from one reservoir to another, and thirteen extract water from reservoirs (in some cases channels) and release it at the defined release points. To control the amount of water transferred, 36 decision tables were created in the flo_con.dtl file. Table 3 presents the observed demand and the simulated demand and withdrawal for each source of water transfers in the UTRB (some sources supply more than one water transfer).

Table 3 Simulated water transfers in the UTRB: source, type, observed transferred amount, simulated demand, and percentage of the demand met. In the Source column, R and C stand for reservoir and channel, respectively

The SWAT+ decision tables implemented to trigger the demands successfully reproduced the observed volume of water transferred. The water demanded for water transfers in the UTRB amounts to 1,174 hm3 per year on average: 455 hm3 are diverted outside the basin, 120 hm3 from one reservoir to another, and 599 hm3 are transferred for human consumption. Given that the average annual streamflow recorded at the Talavera streamflow gauging station at the UTRB outlet was 1,322 hm3 from 2013 to 2020, accurately simulating these complex water transfers is crucial for a realistic representation of hydrological processes in this basin.

On average, 90% of the simulated demand was met, with an average annual total volume of 1,053 hm3. The entire demand was met for 11 of the 22 water transfers simulated, and more than 90% of the demand was met for 15 of the 20 sources used to meet the demands (Table 3). Many of the water demands that were not fully met had channels as sources. In reality, small reservoirs supply the water for these transfers; however, these reservoirs were not implemented in the model, and since the channels implemented as sources instead store less water, they are unable to meet the demands, especially during the dry season. The only case in which the demand is far from being fully met even though the source is a reservoir is the TSWT (Table 3). The issue arises from the large water demand (358 hm3, Table 3) relative to the small size of the Bolarque Reservoir (31 hm3), which serves as the source for the TSWT. This reservoir is located downstream of two larger reservoirs, Entrepeñas and Buendía, which together have a total capacity of 2,540 hm3. Although 79% of the TSWT demand is met, approximately 80 hm3 are not diverted outside the basin (Table 3), which could lead to an overestimation of streamflow in the Tagus River. A more complex water allocation table for this water transfer, including the Entrepeñas and Buendía reservoirs as sources, could improve the simulation. Nonetheless, SWAT+ demonstrated its capability to simulate a complex system of water transfers.

3.3 Reservoirs

Daily reservoir outputs were extracted and compared with records for the simulation period. Performance metrics and hydrographs of observed vs. simulated inflow, outflow, and storage were compared. The simulation of reservoirs is not only influenced by the release decision tables, but also by model calibration (which determines the inflow) and by water transfers (which affect reservoir storage). The release tables (Sect. 2.3.3.) were modified for some reservoirs to improve their simulation: the emergency and scarcity thresholds were adapted to achieve a better representation of storage, and the magnitude of emergency releases was adjusted. In addition, a new action (extra release) was added for some reservoirs whose storage was overestimated due to a lower release than observed; these differences between observed and simulated outflows might be due to the period used for calculating the median observed rate (2000–2019). The new action, used for Reservoirs 2, 20, 21, 24, and 26, adds an extra release to the simulated release. It was configured similarly to the emergency release, but triggered at a lower volume and with a larger delay. Figure 6 presents the hydrographs of outflow and storage and the performance for the simulated period for three different reservoirs, one for each group from 2 to 4. Reservoir 28 is the source of a relevant water transfer (88 hm3/year, Table 3), while Reservoirs 2 and 15 do not have water transfers. Performance metrics for all the reservoirs can be found in Appendix C.

Fig. 6

Observed vs. simulated outflow and storage for Reservoirs 2, 15, and 28 (from reservoir groups 2, 3, and 4, respectively) and monthly performance. The performance was calculated for the entire simulated period (2010–2019), while the hydrographs are shown for a shorter period for better visualisation

The reservoirs shown were selected because they belong to different groups and exhibit noticeably different outflow patterns (Fig. 6). For Reservoir 28, a small amount of water (below 0.5 m3/s) is released on most days, but on some days the observed outflow can reach 40 m3/s. The typically low outflow is due to the reservoir diverting most of its water to WPPs, with larger releases occurring when the reservoir is nearly full. Reservoir 15 has very regular outflows, which are lower in winter (around 0.2 m3/s) and higher in summer (around 1 m3/s), as it supplies water to irrigation canals. Reservoir 2, on the other hand, has a very irregular outflow that varies from day to day.

The accuracy of the outflow simulations varied among the reservoirs, but despite some limitations, a reasonably good performance was achieved. For Reservoir 28, the low outflows were simulated accurately on most days, and most of the larger simulated outflows were of a similar order of magnitude to the observed ones (Fig. 6). In some cases, high outflows were simulated on days when no high outflows were observed. One possible reason is that the water transfer may sometimes be underestimated (e.g., in early 2012 and late 2013), leading to an overestimation of storage, which in turn triggers emergency releases (Fig. 6). The overprediction of emergency releases results in a high PBIAS for the outflow, while the positive PBIAS for the inflow and the occasional underestimation of the water transfer result in a high PBIAS for storage. Nonetheless, the overall storage and release dynamics of Reservoir 28 were accurately reproduced.

The model simulations closely matched both the magnitude and the timing of the observed regular outflow of Reservoir 15. However, a small overestimation of inflow and a slight underestimation of outflow resulted in an overestimation of storage (Fig. 6). This, in turn, resulted in simulated emergency releases that were not observed, significantly impacting the model performance. As for Reservoir 28, the general dynamics of the reservoir were captured well with the customized release tables.

While Reservoir 2 exhibited atypical outflow patterns, its release was accurately predicted. The successful simulation can be attributed to the model’s precise representation of the timing and magnitude of the major outflow peaks, resulting in satisfactory performance metrics. Both inflow and outflow were slightly overestimated, but the simulation of storage was very good considering the PBIAS, NSE, and R2 values. The simulated storage slightly exceeded the maximum capacity of the reservoir during short periods of 2013 and 2014 (Fig. 6), which could be corrected by lowering the emergency spillway threshold.

Accurate simulation of reservoir outflows and storage was achieved using the created decision tables. Performance metrics for all the simulated reservoirs can be found in Appendix C (Table 10). Simulating water withdrawals from reservoirs was crucial for accurately modelling storage, which is essential for precise outflow simulation. In this case, with more than 30 reservoirs modelled, the release tables were kept relatively simple. More complex decision tables could enhance the simulation of these variables and might improve performance for the simulated period, but they could also limit the model’s usefulness for other periods.

3.4 Impact of Water Management on Streamflow Simulation Performance

The overall impact of the implementation of management actions in the UTRB was evaluated using the streamflow at the outlet of the basin as an indicator. Figure 7 shows the hydrographs and the performance achieved for two simulated scenarios: the basin in natural regime and the basin with all management actions implemented.

Fig. 7

Observed (black line) vs. simulated hydrographs and streamflow simulation performance (daily basis) for the model in natural regime and for the model with the management implemented

Both hydrographs show an accurate prediction of the general streamflow patterns regarding timing (R2 > 0.55). In contrast, the NSE values are negative for both scenarios, which can be explained by the UTRB not operating under natural conditions in the natural regime scenario, and by the complexity of the actual management of the basin in the scenario with management. This management depends on numerous factors (water transfers, reservoirs, demands, etc.), which makes it difficult to simulate the streamflow accurately at daily scale. Despite the low NSE values for both scenarios, the improvement achieved in this metric with the management scenario is noticeable. Conversely, R2 is slightly better (0.07 higher) for the natural regime scenario (Fig. 7).

The improvement achieved by the implementation of water management actions in the model becomes clear when looking at the hydrographs and the error-related performance metrics (PBIAS and RMSE). This improvement is primarily driven by the volume of water extracted for transfers outside the basin and by the influence of reservoirs, which delay runoff and enhance evaporation. The maximum flows in the natural regime model, which occur during the wet period in many years, are much higher than in the observed records (Fig. 7). The maximum observed streamflow was 281 m3/s in February 2014, while the maximum simulated streamflow was 389 m3/s for the model with management (on a day in March 2010, for which observed values are missing) and 725 m3/s for the model in natural regime (in March 2018). While the differences in the high flows in the model with management are sporadic (e.g., in the first high peak of 2013), high flows are consistently overestimated in the natural regime (Fig. 7). The accuracy of simulating low flow periods also improves when implementing the management, as a less variable flow is simulated due to the regulation caused by reservoir releases (Fig. 7). The minimum observed flow (1 m3/s in mid-2012) is not captured by either model, as this extremely low flow does not comply with the TRBMP environmental flow requirements and is thus not reproduced by the created release decision tables.

PBIAS and RMSE improved noticeably after the management implementation (Fig. 7): the RMSE was reduced by 42%, and the PBIAS from 87 to 40%. This substantial improvement is mostly due to the water diversions outside the basin, the irrigation withdrawals, and the resulting reduction of the high flows. However, a positive PBIAS of 40% persists, potentially due to unmet demands for irrigation and for water transfers outside the basin (unmet demands for irrigation and the TSWT alone amount to 929 hm3 for the entire simulated period). Considering that the total observed streamflow for the simulated period is 12,384 hm3, fully meeting these demands would reduce the PBIAS by about 8 points (929/12,384 ≈ 7.5%). The remaining error can be attributed to other factors: overestimation of water yield in the model, the need for improvement in some reservoir releases, the use of average and median values in the water transfer and release tables, inaccuracies in the water transfer records, the omission of other consumptive demands, etc. Implementing a consumptive use fraction for human consumption water transfers, i.e., returning only part of the extracted volume (e.g., 80%) instead of the full amount, could significantly improve the simulated water volume and make the model more realistic. While there is still room for improvement, implementing water resources management has notably enhanced the streamflow simulation at the UTRB outlet.

4 Discussion

The results of this study demonstrate the capability of SWAT+ to accurately simulate not only hydrological processes, as many studies have shown (Tan et al. 2020), but also complex water management practices in highly regulated river basins. The ability to define detailed water management actions within a versatile hydrological model like SWAT+ represents a significant advancement in the field of hydrological modelling. These new capabilities enhance the model’s applicability and reliability, especially considering the widespread influence of human activities on river basins (Haddeland et al. 2014; Sofi et al. 2020). This development opens new research opportunities for exploring how to model catchments with substantial human intervention, such as comparing calibration methods (natural regime vs. including management operations), evaluating different approaches for introducing management actions (use of rates or other methods), and understanding how these factors affect the simulation of future scenarios.

The simulated management actions in this work (i.e., irrigation, water transfers, and reservoirs) are present in most river basins (Grill et al. 2015). This study serves as a guide and example for implementing management actions in SWAT+. Irrigation demands were accurately simulated using a decision table created for the UTRB, and thus we recommend that other modellers develop decision tables tailored to their specific study areas. The configuration of the irrigation timing and sources was found to be crucial for a proper application and should also be adapted to the modelled area. Despite the numerous studies applying SWAT for irrigation (Samimi et al. 2020), limitations of the model are highlighted in most of them, leading in many cases to using SWAT only to simulate hydrological processes and then coupling it with other models (e.g., Ahmadzadeh et al. 2022; Zhang et al. 2023) or adapting the code (e.g., Xie et al. 2014; Delavar et al. 2020). The auto-irrigation function, implemented in both SWAT and SWAT+, has previously been used to estimate demand and quantify water scarcity (Arnold et al. 2018; Chawanda et al. 2020; Mikosch et al. 2020). However, the implementation of detailed irrigation practices (such as irrigation from different sources, seasonal demands, etc.) has only been made possible with SWAT+. Almahawis et al. (2024) performed a comprehensive evaluation of the effect of irrigation practices in SWAT+, but coupled with the gwflow module (Yimer et al. 2023). To the best of our knowledge, no previous studies have implemented the water allocation module in SWAT+.

The water transfer demands were accurately simulated with the designed decision tables, since rates reproducing the observed demands were used. The definition of the source objects was important in this case: while most of the demands whose source was a reservoir were fully met, many water transfers whose source was a channel were not. The implementation of water transfers has been enhanced in SWAT+ compared to previous versions of SWAT, allowing the demand to be calculated based on conditions and increasing flexibility regarding the source objects. Several works have simulated inter-basin water transfers using SWAT (e.g., Marak et al. 2020; Woo et al. 2021) despite its limitations for evaluating different scenarios and the impact of these transfers on water resources. However, a complex water transfer system such as the one presented in this study, where more than 20 water transfers with different purposes are modelled, has not been addressed before. The results pointed to some potential improvements, both in the configuration of the UTRB model and in the code. Future enhancements of the water allocation module will allow more water management actions to be simulated (consumptive use fractions, wastewater treatment, etc.), improving model reliability and adherence to real-world conditions.

Reservoir simulation was the most challenging task of this work because of the multiple factors involved (model calibration, implementation of water transfers, upstream reservoirs, reliability of observed data) in addition to the decision tables. The constructed release decision tables allowed different scenarios to be considered and resulted in an accurate simulation for many of the reservoirs. The introduction of release decision tables in SWAT+ has increased the flexibility when simulating these relevant water management elements. Reservoir simulation with SWAT has been addressed in several works (e.g., Liu et al. 2019; Marak et al. 2020; Zhang et al. 2012), but the limitations of previous SWAT versions led to changes in the code (e.g., Jordan et al. 2022; Kim et al. 2021; Wang et al. 2023) or to coupling the model with other software (Anand et al. 2018). SWAT+ was also used to simulate reservoir operation by Wu et al. (2020), who presented the SWAT+ reservoir routines and developed and implemented numerous release decision tables across the USA, including a calibration procedure for the decision tables. The large scale used by Wu et al. (2020) and the automatic calibration procedure limited the detailed evaluation and the goodness of fit of the results. Chawanda et al. (2020) implemented a simulation of reservoirs with SWAT+ for Southern Africa using global datasets, but the methodology applied and the reservoir results presented in that study provide limited guidance. The methodology applied in the UTRB might help SWAT+ users develop their own decision tables, and it demonstrates a satisfactory simulation of reservoir outflow and storage.

The simulation of the complex water resources management in the UTRB has been possible thanks to the flexibility provided by the new features of SWAT+ combined with a detailed study of the management in the catchment. This study demonstrated the capability of SWAT+ to simulate complex water management systems and the improvement in model performance that this entails. Further work will focus on improving the simulation of the water resources management in the UTRB (increasing the irrigation amount applied, increasing the demand met for relevant water transfers such as the TSWT, simulating consumptive use for human and industrial water demands, etc.). Future development of the water allocation module will enable a more realistic simulation of the entire water supply process for human consumption, including withdrawal, transport, consumptive use, treatment, and release. In addition, some slight modifications to the current code (e.g., withdrawing the available water even when it is not enough to meet the entire demand) would improve the module. Even in its current state, the module significantly enhances the model's capabilities, as shown in this study.

5 Conclusions

To increase its reliability and applicability, hydrological modelling must incorporate the simulation of water resources management, as most river basins are no longer under a natural regime. The SWAT+ modelling tool includes novel features for this purpose compared to its previous version (SWAT 2012), allowing reservoir management and water allocations to be simulated in more detail using decision tables and the water allocation module. The functioning and application of these new features have been tested and evaluated in a highly managed river basin: the Upper Tagus River basin (UTRB, Spain). The main outcomes of this study are as follows:

  • SWAT+ decision tables and the water allocation module allow water management actions to be simulated in a flexible way. These features were used in the UTRB to implement irrigation supply, reservoir management, and different kinds of water transfers.

  • The configuration of the decision tables for irrigation and water transfers allowed the observed demand to be triggered accurately. The simulated percentage of the demand met depended on the availability of water in the sources used and, in the case of irrigation, also on the parameterization of the water allocation tables (particularly the amount applied per application). As a general recommendation, the configuration of management actions should be based on the actual conditions of the study case.

  • The release decision tables developed for the reservoirs captured the main patterns of their management and resulted in a satisfactory simulation for many of them. These tables had a common structure capable of considering three scenarios (emergency, scarcity and regular) but were adapted for the different reservoirs. The configuration of the reservoir properties and initial conditions, the implementation of water transfers extracting water from reservoirs, and the previous calibration of the model (ensuring realistic reservoir inflows) were crucial to achieving satisfactory results.

  • Implementing management actions in the UTRB improved the simulation of the streamflow at its outlet, reducing the PBIAS by more than 50% with respect to the natural regime scenario. Some potential improvements were identified, both in the UTRB model configuration and in the SWAT+ code.

  • This work presents and explains, for the first time, a complete application of SWAT+’s new features for simulating water resources management in a highly managed basin, with satisfactory results. The simulation of water management actions using SWAT+ opens new research opportunities for increasing modelling reliability, for which this study will serve as a reference.