1 Introduction

Effective conservation measures are urgently needed to reduce and reverse the ongoing global biodiversity loss and ecosystem degradation under increasing human pressures [1, 2]. In recent years, we have witnessed an explosion of open biodiversity data (raw observed records), including organism occurrence records (e.g., Global Biodiversity Information Facility (GBIF): https://www.gbif.org/; Ocean Biogeographic Information System (OBIS): https://obis.org/), ecological community data collected at local scales [3, 4], and citizen-science-based observations (e.g., iNaturalist: https://inaturalist.org; eBird) [5]. These accumulated data play a central role in biodiversity mapping [6,7,8] and data-driven conservation planning [9,10,11]. They also provide a basis for spatial conservation planning (SCP), in which optimization algorithms are often employed to select an optimal set of protection sites [12, 13]. A number of protected areas, a central measure of area-based conservation [14], have been introduced globally to bend the species extinction curve. Moreover, recent technological improvements allow for mapping biodiversity features at fine resolutions (e.g., 100 m × 100 m) within a large spatial extent [15], which can substantially promote fine-tuned SCP.

Simultaneously, the development of high-resolution biodiversity databases has been accelerating, and these databases have been adopted for biodiversity conservation (Fig. 1). Can such high-resolution data actually improve SCP? In this note, we argue that simply promoting SCP with high-resolution data does not necessarily improve its long-term (i.e., after a transient period of years) conservation efficacy. Here, we define ‘conservation efficacy’ as the influence of management actions on focal organisms or ecosystems intended by practitioners to solve a particular conservation problem. On the contrary, high-resolution data risk decreasing conservation efficacy, because the impact of temporal change on SCP may grow as the resolution of SCP becomes finer (Fig. 1). This occurs because, over a given time interval, the fraction of individuals emigrating from a protected area is higher for smaller areas than for larger ones (Fig. 2).

Fig. 1

Improvement of data resolution used in spatial conservation planning (SCP), and the potential impact of temporal change on SCP

Fig. 2

Schematic image of (top) fine-scale spatial conservation planning (SCP) and (bottom) coarse-grained SCP. A fine-scale SCP allows for highly flexible and compact protected area networks but is vulnerable to temporal change in individuals’ locations

To substantiate this claim, we need to scrutinize the inherent tradeoff between conservation efficacy over the short term and its persistence over the long term. Developing this argument requires considering the temporal dynamics of the ecosystem, but such discussions remain limited in existing studies. Temporal changes in ecosystems can be short-term (e.g., population fluctuations via demographic and dispersal processes), intermediate-term (e.g., land-use changes in the surrounding area), or long-term (e.g., shifts in distribution ranges caused by climate change).

It is worth emphasizing that this paper does not intend to criticize the recent development of biodiversity data. Rather, our key message is that an appropriate data resolution may exist for area-based conservation measures in SCP, and it is not necessarily the finest one. Importantly, high-quality fine-scale data can strengthen our proposed approach, although such data should be processed (coarsened) before application.

In this paper, we further point out the benefit of converting data resolution in ecosystem conservation: the data resolution used in SCP determines the magnitude of temporal fluctuations in the conservation effect of area-based conservation measures (e.g., the establishment of protected areas). This provides useful insights into the scale issue of ecosystem management and time-robust SCP in the era of exploding biodiversity data. The argument is not limited to specific taxa and can be applied to any biodiversity feature. Using a simple worked example, we also discuss the inherent tradeoff between conservation efficacy over the short term and its persistence over the long term, which provides the rationale for matching data resolution with ecosystem management.

Our proposed method complements the existing framework of SCP and facilitates biodiversity conservation that also promotes Sustainable Development Goal 15 “protecting, restoring and promoting sustainable use of terrestrial ecosystems, sustainably manage forests, combat desertification, and halt and reverse land degradation and biodiversity loss” [16].

2 Matching data resolution for spatio-temporally robust spatial conservation planning in a dynamic environment

Motivated by the necessity of cost-effective SCP, mathematically grounded decision-support tools, such as Marxan [12] and Zonation [13], have been widely utilized for optimal site selection for protected areas, and a number of extensions are ongoing [17, 18]. Our proposed method is compatible with these existing tools: it can be used to prepare the input data for them. Generally, existing decision-support tools perform a single-time optimization specified by objective and constraint functions. This is not optimal over longer periods, as it merely approximates the optimal configuration of conservation efforts at a particular moment; in the long term, ecosystem dynamics erode the effect of this (snapshot) optimal effort allocation made at a past time point. This erosion of conservation efficacy is expected to be large when SCP is developed from high-resolution biodiversity data, suggesting the above-mentioned tradeoff.

Here, we propose a data resolution matching (DRM) method that adjusts the data to a (lower) resolution suitable for SCP, promoting time-robust conservation efficacy. In particular, the DRM method coarsens the obtained high-resolution data, where available (Fig. 3).
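As a minimal sketch of what this coarsening operation might look like in practice (the function name `coarsen` and the regular-grid representation are our illustrative assumptions, not part of the DRM specification), fine-resolution abundance data can be block-aggregated so that total abundance is conserved:

```python
import numpy as np

def coarsen(density, factor):
    """Aggregate a fine-resolution density grid into coarser cells.

    Each coarse cell is the sum of a `factor` x `factor` block of fine
    cells, so total abundance is conserved.
    """
    n, m = density.shape
    if n % factor or m % factor:
        raise ValueError("grid size must be divisible by the coarsening factor")
    return density.reshape(n // factor, factor, m // factor, factor).sum(axis=(1, 3))

# A hypothetical 8 x 8 occurrence map coarsened to 4 x 4 and 2 x 2.
rng = np.random.default_rng(0)
fine = rng.poisson(1.0, size=(8, 8))
print(coarsen(fine, 2).shape)  # (4, 4)
print(coarsen(fine, 4).shape)  # (2, 2)
```

The coarsened grid can then be passed to any existing decision-support tool in place of the raw high-resolution layer.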

Fig. 3

Schematic explanation of the data resolution matching (DRM) method. This method converts (a) the original high-resolution data into (b) appropriately coarse-grained data for spatial conservation planning

The essence of the DRM method (recalculation of fine-scale data into coarser scales) in SCP is to reduce the number of possible combinations of spatial conservation actions, suppressing variations in spatiotemporal ecological dynamics at the scale of the input data resolution. Let us imagine a simple situation where an individual randomly moves in a 1 km × 1 km habitat mapped at a 500 m × 500 m resolution. The probability of finding the individual in one subdivision is 1/4, while with a map of 50 m × 50 m, it becomes 1/400. The chance of accurately defining the location of individuals becomes even smaller when the number of individuals is greater than one. In the latter case (50 m resolution), the probability of correctly determining the locations of two individuals is 1/160,000, whereas in the former case (500 m resolution) it is 1/16. The effect of such changes of scale has previously been discussed in the context of ecological sampling, and tradeoffs between fine-scaled occurrence maps and data accuracy have been identified [19, 20].
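The cell-counting argument above can be reproduced directly. The helper `p_correct` is hypothetical and assumes a square habitat, square cells, and independently, uniformly located individuals:

```python
from fractions import Fraction

def p_correct(habitat_km, cell_m, n_individuals):
    """Probability of correctly guessing the occupied cell of each of
    n independently, uniformly located individuals, for a square habitat
    of side `habitat_km` km divided into square cells of side `cell_m` m."""
    cells = (habitat_km * 1000 // cell_m) ** 2  # number of subdivisions
    return Fraction(1, cells) ** n_individuals

print(p_correct(1, 500, 1))  # 1/4
print(p_correct(1, 50, 1))   # 1/400
print(p_correct(1, 500, 2))  # 1/16
print(p_correct(1, 50, 2))   # 1/160000
```

The probability shrinks as the cell count raised to the number of individuals, which is why fine resolutions make a snapshot map so quickly unrepresentative.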

3 Effect of data resolution matching

3.1 A worked example

Here, we present a simple worked example of the DRM method’s effect on the long-term efficacy of protected areas, applying the stochastic macroecological model developed by Takashina et al. [21] to describe the spatiotemporal dynamics of individuals of multiple species. We introduce a reserve score to characterize the temporal efficacy of protected areas and evaluate how the reserve score varies under spatiotemporal ecosystem dynamics. While the reserve score evaluates the conservation efficacy of a protected area at a given point in time, its long-term variation is evaluated by the coefficient of variation (CV), which measures the degree of variation while accounting for the effect of the mean value, as in Takashina [22].

Figure 4 shows the dynamics of the reserve score, as well as its frequency distribution, under four data resolutions (\(2^{-4}\), \(2^{-2}\), \(2^{0}\), or \(2^{2}\) km²). In each scenario, the optimal protected area sites (20% of the total area of \(2^{6}\) km²) are selected to maximize the reserve score at the beginning of the simulation (t = 0). The scenario with the highest resolution (\(2^{-4}\) km²) shows the highest reserve score at the beginning, but the score decreases significantly over time and still shows relatively large variability even after settling to a certain state. The scenario with the lowest resolution (\(2^{2}\) km²), on the other hand, does not have a high initial reserve score, but no comparable decline occurs, and the temporal variation of the score remains small. The CV is higher at higher resolutions, indicating that the finer the resolution, the greater the temporal change in conservation efficacy.

Fig. 4

Example of the effect of the data resolution conversion on the conservation efficacy (represented by the reserve score) of spatial planning over time, where different colors represent the different data resolutions used (\(2^{-4}\), \(2^{-2}\), \(2^{0}\), or \(2^{2}\) km²). The reserve score is defined in the range between 0 and 1, where the lowest and highest scores are realized when no individuals are protected and all individuals are protected, respectively. In each simulation, an optimal reserve selection was made at the beginning of the simulation (at t = 0) that occupies 20% of the concerned region with area \(2^{6}\) km². We measure the persistence of conservation efficacy using the coefficient of variation (CV; [22]) of the reserve score over \(1.5\times {10}^{5}\) time steps. The CV of each optimal selection is 0.02116, 0.00833, 0.00309, and 0.00231 for the data resolutions \(2^{-4}\), \(2^{-2}\), \(2^{0}\), and \(2^{2}\) km², respectively, demonstrating that using a coarser data resolution increases the persistence of the conservation effectiveness. In this example, five species were drawn randomly from the model

This simple worked example demonstrates that time-robust conservation efficacy of SCP can be achieved by coarsening the data resolution, but at the cost of a lower reserve score at the time of selection; that is, there is a tradeoff between the efficacy of conservation actions over the short term and its persistence over the long term.
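The persistence side of this tradeoff is quantified, as in the worked example, by the coefficient of variation of the reserve-score time series. The series below are synthetic stand-ins (our assumption, not output of the macroecological model): the “fine” plan starts higher but fluctuates more, the “coarse” plan is lower but steadier.

```python
import numpy as np

def reserve_score_cv(scores):
    """Coefficient of variation (std / mean) of a reserve-score time series,
    used as a measure of the (in)stability of conservation efficacy."""
    scores = np.asarray(scores, dtype=float)
    return scores.std() / scores.mean()

# Illustrative synthetic series (means and noise levels chosen by us).
rng = np.random.default_rng(1)
fine_scores = 0.55 + 0.03 * rng.standard_normal(10_000)
coarse_scores = 0.45 + 0.005 * rng.standard_normal(10_000)
print(reserve_score_cv(fine_scores) > reserve_score_cv(coarse_scores))  # True
```

A smaller CV means the initial (snapshot-optimal) selection retains its efficacy more steadily over time.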

3.2 Summary of the methods

The initial spatial pattern of individuals was generated with the stochastic macroecological model presented by Takashina et al. [21], in which all individual locations in the ecosystem are described by spatial point processes. The model can recover multiple well-known macroecological patterns across scales, such as the species–area relationship and the relative species abundance. See Appendix B in Takashina et al. [21] for details of the point generation. The key parameters in the worked example are the species and individual intensities (\({\lambda }_{s}\), \({\lambda }_{i}\)), which characterize the average densities of species and individuals, respectively. We set the species intensity to \({\lambda }_{s}=0.00064~(=5\times {10}^{4}/(\uppi {\left(5\times {10}^{3}\right)}^{2}))\) and the individual intensity to \({\lambda }_{i}=100\). Each species has a circular geographic range with a radius of 30 km, and individual locations were determined randomly within this range. At each simulation time step, the spatial dynamics were described by randomly moving the position of a randomly selected individual within a circular region unique to that individual (regions are not mutually exclusive), with size chosen from \(N\left(0.01, 0.1\right)\).
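A much-simplified sketch of this initialization is given below. It keeps only the ingredients stated above (Poisson abundances, circular geographic ranges with uniformly scattered individuals); the function name, parameter defaults, and the uniform placement of species centres are our assumptions, not the model of Takashina et al. [21].

```python
import numpy as np

rng = np.random.default_rng(2)

def generate_community(region_km=8.0, n_species=5, mean_abundance=100,
                       range_radius_km=30.0):
    """Return an array of (species_id, x, y) rows for all individuals.

    Species centres are placed uniformly in the region; each species'
    individuals are scattered uniformly within a circular range (which,
    as in the worked example, may extend beyond the focal region).
    """
    rows = []
    for s in range(n_species):
        centre = rng.uniform(0, region_km, size=2)
        n = rng.poisson(mean_abundance)
        # Uniform points in a disc: radius ~ R * sqrt(U), angle ~ U(0, 2*pi).
        r = range_radius_km * np.sqrt(rng.uniform(size=n))
        theta = rng.uniform(0, 2 * np.pi, size=n)
        xy = centre + np.column_stack([r * np.cos(theta), r * np.sin(theta)])
        rows.extend((s, x, y) for x, y in xy)
    return np.array(rows)

community = generate_community()
print(community.shape[1])  # 3 columns: species, x, y
```

The per-step dynamics (moving one randomly selected individual within its own small circular neighbourhood) would then perturb the `x, y` columns of this array.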

The reserve score at time t is defined as \(\text{R}\text{S}\left(t\right)={\sum }_{s,i}{w}_{s,i}{I}_{s,i }\left(t\right)\), where \({w}_{s,i}\) is the weight of the \(i\)th individual of species \(s\) and \({I}_{s,i}\left(t\right)\) is an indicator variable that takes a value of 1 if the \(i\)th individual of species \(s\) is within the reserve at time t, and 0 otherwise. We used the simplest situation \({w}_{s,i}=1/{\sum }_{s}{n}_{s}\), where \({n}_{s}\) is the population size of species \(s\) in the concerned region. Hence, in this example, the reserve score measures the coverage of individuals by protected areas.
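With these equal weights, \(\text{RS}(t)\) reduces to the fraction of individuals inside the reserve, which can be computed by binning locations into grid cells. Representing a reserve as a set of cell indices is our illustrative assumption:

```python
import numpy as np

def reserve_score(community, reserve_cells, cell_km):
    """RS(t) with equal weights w = 1 / (total abundance): the fraction of
    individuals whose (x, y) location falls inside a reserved grid cell.

    `community` holds (species_id, x, y) rows; `reserve_cells` is a set of
    (col, row) cell indices at resolution `cell_km`.
    """
    cells = np.floor(community[:, 1:3] / cell_km).astype(int)
    inside = np.array([tuple(c) in reserve_cells for c in cells])
    return inside.mean()

# Toy check: 3 of 4 individuals sit in the single reserved cell (0, 0).
toy = np.array([[0, 0.2, 0.3], [0, 0.8, 0.1], [1, 0.4, 0.9], [1, 1.5, 0.2]])
print(reserve_score(toy, {(0, 0)}, cell_km=1.0))  # 0.75
```

Coarsening the planning resolution corresponds to enlarging `cell_km`, so a selected cell keeps covering an individual even after small movements within it.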

4 Toward robust SCP in the era of biodiversity big data

The increasing availability of biodiversity databases enables us to conduct SCP analyses at finer spatial resolutions. Refined input data would widen the applicability of SCP to conservation practices, including invasive species management [23, 24], management of successional statuses within a landscape [25], and urban green space planning [26, 27], as well as addressing the classic reserve design problems [28]. However, as we showed here, there is a tradeoff between the data resolution and the persistence of conservation efficacy over time. This implies that finer input data would not always be best in SCP analyses, and analysts need to tune the data resolution to their specific purpose. For example, long-term persistence of efficacy would be particularly important when transaction and relocation costs are prohibitively expensive, such as in the establishment of protected areas [29].

Our simple worked example showed that conservation efficacy was vulnerable to temporal fluctuations of species distribution status if optimal protected areas are selected using high-resolution data (Fig. 4). To some extent, this can be viewed as spatial overfitting based on a snapshot of biodiversity status. The impact of overfitting becomes more significant as the spatial resolution becomes finer [30], suggesting an inherent scale issue of ecosystem management. In most cases, the existing optimization algorithms for SCP are based on snapshot data and potentially involve this problem. However, species distribution status fluctuates due to inherent stochasticity in nature, and hence conservation efficacy also varies over time [22]. Previous studies have argued for the importance of dynamics mainly from the viewpoint of future predictions focusing on climatic and land-use changes. Researchers have built snapshots of the future biodiversity state by projecting predictive models onto future environmental scenarios and examined the congruence with the current snapshots (e.g., [31, 32]). Our idea of the time robustness of conservation efficacy is conceptually different from these studies in the sense that our approach does not involve future predictions.

The choice of the data resolution is a classic issue in conservation science that has been discussed in two separate contexts: mathematical suboptimality and cost-efficiency issues resulting from spatial biases [33, 34], and spatial aggregation of conservation areas to improve long-term conservation efficacy [35]. Biodiversity data are spatially, temporally, and taxonomically biased [36,37,38,39] due to shortages of survey efforts [40] and a lack of mobilization of potentially usable information, including undigitized records [37, 41]. Such incomplete and biased information can degrade the performance of conservation planning [42]. Previous studies have concluded that finer spatial resolution would be preferable in terms of cost-efficiency [34] or representation of important microhabitats [43]. In contrast, it has been pointed out that spatial aggregation of conservation areas could be an effective way to maintain connectivity among habitats, which contributes to ensuring the long-term persistence of biodiversity [35]. Moilanen et al. [35] proposed a smoothing approach in which fine-scale input data (e.g., species distribution maps) are smoothed using a two-dimensional kernel. The data resolution matching introduced in this study can reconcile the two alternative viewpoints by balancing better fitting to the current state (cost-efficient and representative) and the long-term persistence of conservation efficacy.
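For contrast with the block aggregation of DRM, the smoothing alternative can be sketched as a convolution of the fine-scale map with a two-dimensional box kernel (the kernel choice, edge handling, and function name here are our assumptions, not those of Moilanen et al. [35]):

```python
import numpy as np

def smooth(density, k):
    """Smooth a grid with a k x k box kernel by direct summation.

    Edge cells are handled by replicating the border ("edge" padding), so
    the output has the same shape and resolution as the input.
    """
    n, m = density.shape
    pad = k // 2
    padded = np.pad(density, pad, mode="edge")
    out = np.zeros((n, m), dtype=float)
    for di in range(k):
        for dj in range(k):
            out += padded[di:di + n, dj:dj + m]
    return out / (k * k)

rng = np.random.default_rng(3)
fine = rng.poisson(1.0, size=(8, 8)).astype(float)
sm = smooth(fine, 3)
print(sm.shape)  # (8, 8)
```

Unlike DRM, smoothing keeps the original cell count (and hence the number of candidate site combinations), whereas DRM reduces it; this is the key design difference between the two ways of suppressing fine-scale noise.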

5 Conclusion

In this paper, we argued that a naive integration of high-resolution biological data into spatial conservation planning (SCP) to recommend fine-tuned spatial actions does not necessarily improve the efficacy of the SCP. High-resolution data generate a finely tuned optimal protected area network at establishment, but the resulting highly resolved spatial actions are largely vulnerable to temporal changes of ecosystems, a situation we term “spatial overfitting”. To overcome this challenge, we suggest converting the data resolution to a coarser scale, utilizing the high-resolution data while reducing the temporal vulnerability of SCP.

While we present our argument in an intuitive manner in this short paper, a formal analysis will be necessary. For example, since we demonstrated a tradeoff between conservation efficacy over the short term and its persistence over the long term with regard to data resolution, there may exist an ideal spatial scale that balances the time-robust conservation effect and data resolution. The optimal scale may also vary with the target ecosystem. We anticipate that species’ migration ability within the concerned ecosystem affects this optimal scale; hence, understanding the interplay between management scale and biological scale would be required to achieve an ideal data conversion.

Finally, while decision-support tools for SCP have been utilized to identify an optimal scenario in practice, explicit consideration of the socioeconomic, political, and cultural realms of implementation, such as issues related to ownership and land-use options [44, 45], will be necessary. Recognizing and integrating such practical issues into our framework will further improve the concept’s utility.