1 Introduction

The global average temperature has increased by about 0.8 °C since the mid-19th century. It has been shown (e.g., Bloomfield 1992; Gao and Hawthorne 2006; Wu and Zhao 2007; Keller 2009) that this increase is statistically significant and that it can, for the most part, be attributed to human-induced climate change (IPCC 2013; Foster and Rahmstorf 2011). A temperature increase is also evident in regional and local temperatures in many parts of the world. However, compared with the global average, regional and local temperatures exhibit higher levels of noise, much of which is removed from the global temperature by the higher level of averaging. It is therefore not always clear that a regional or local warming signal, although apparent “to the naked eye” in the temperature data, can be considered statistically significant under strict assumptions. Because climate change is one of the most serious environmental issues today, the question of statistical significance in local and regional temperature trends is not only of scientific but also of public interest.

In this article, we consider the time series of Finnish average temperatures in 1847–2013. Because Finland is located in northern latitudes, it is subject to the polar amplification of climate change-induced warming, which is due to the enhanced melting of snow and ice and other feedback mechanisms (see, e.g., Screen and Simmonds 2010; Serreze and Barry 2011). Therefore, warming in Finland is expected to be approximately 50 % higher than the global average. On the other hand, the location of Finland between the Atlantic Ocean and continental Eurasia makes the weather highly variable, and thus the temperature signal is rather noisy.

The concept of trend in itself is not completely free of ambiguity (e.g., Wu et al. 2007). Ambient temperature time series, for example, exhibit autocorrelation created by processes that are not completely understood. Therefore, the choice of the autocorrelation model is somewhat arbitrary, which is reflected in the obtained trend and its significance level. It is relatively straightforward to calculate different averages and linear trends from the observed temperatures. However, to evaluate the significance of the observed changes relative to the natural year-to-year variability and to give realistic uncertainty estimates of the trends, we need statistical modeling. In this paper we have used dynamic regression to model the seasonality and the background level of the average temperature in Finland for the years 1847–2013. As our model fits the observed data well and the non-modeled part of the variability, the model residuals, can be seen to consist of independent Gaussian noise, we can safely say that the uncertainty attributed to the trend values given here is well justified.

2 Data

Tietäväinen et al. (2010) created an over 160-year-long time series of monthly mean temperature grids with 10 km resolution for Finland. Homogenized station values of monthly mean temperature (Tuomenvirta 2001) from Finnish weather stations as well as monthly mean temperatures from selected weather stations in Sweden, Norway, and Russia near the Finnish border were used for the spatial interpolation. A kriging interpolation method (Matheron 1963; Ripley 1981), especially developed for climatological applications in Finland (Henttonen 1991), was used for creating the monthly mean temperature grids. As external forcing parameters, the kriging method took into account the geographical coordinates, the elevation of the terrain, and the percentage of lakes and sea in each grid box. At the 10 km resolution, a total of 3,829 grid boxes were needed to cover the whole of Finland. According to Tietäväinen et al. (2010), this spatial model has also previously been applied in climatological research projects conducted by Venäläinen and Heikinheimo (1997), Vajda and Venäläinen (2003), Venäläinen et al. (2005), Vajda (2007), and Ylhäisi et al. (2010).

The spatial representativeness of the observation station network is highly dependent on time. A meteorological observation network was initiated in 1846 by the Societas Scientiarum Fennica—The Finnish Society of Sciences and Letters (Finska Vetenskaps-Societeten). In the first year of the time series, 1847, the data are limited to temperature measurements from six stations. After decades of slow growth in the number of observation stations, many new observation stations were established in different parts of southern and central Finland in the 1880s; in northern Finland, however, the first weather stations were not set up until the early 20th century. Therefore, data from Sweden and Norway are crucial. The number of stations used for the interpolation process increased continuously until the 1970s, when there were 179 stations in the network, after which it has slowly decreased. The density of the station network is still higher in southern and central Finland than in the northern part of the country. Stations outside of the Finnish borders were removed from the kriging interpolation after 2002, and currently there are more than 120 stations in the network. More details on the station network can be found in Tietäväinen et al. (2010).

The limited amount and uneven distribution of the observation stations is the main source of uncertainty in the interpolated temperature fields. Tietäväinen et al. (2010) determined the errors and uncertainties in the annual and seasonal mean temperatures calculated from the monthly grids for the whole of Finland. According to their study, the uncertainty in the annual and seasonal mean temperatures of Finland during the 19th century was large, with a maximum of more than ±2.0 °C in wintertime in the mid-1800s. At the beginning of the 20th century, the uncertainty related to the limited station network was less than ±0.4 °C in wintertime and less than ±0.2 °C during the other seasons. For the monthly mean temperature grids, corresponding uncertainty calculations have not been made. Even though the Finnish station values of monthly mean temperatures were homogenized, minor uncertainties may have been introduced into the temperature grids both by inaccuracies in the homogenization process and by possible remaining heterogeneities in the station time series (Tietäväinen et al. 2010). Figure 1a shows the annual mean temperatures in Finland, and Fig. 1b shows the monthly values from the last decade to demonstrate the yearly variation in the time series. In this paper, we use the data set of Tietäväinen et al. (2010), extended to the end of the year 2013.

Fig. 1

a Annual means of the temperature in Finland, b seasonal variation of the temperature within the period 2002–2013

3 Statistical methods

A trend is a change in the statistical properties of the background state of a system (Chandler and Scott 2011). The simplest case is a linear trend, in which, when applicable, we need to specify only the trend coefficient and its uncertainty. Natural systems evolve continuously over time, and it is not always appropriate to approximate the background evolution with a constant trend. Furthermore, time series can include multiple time-dependent cycles, and they are typically non-stationary, i.e., their distributional properties change over time.

In this work, we apply dynamic regression analysis using the dynamic linear model (DLM) approach to the time series of Finnish temperatures. The DLM is used to statistically describe the underlying processes that generate variability in the observations. The method effectively decomposes the series into basic components, such as level, trend, seasonality, and noise. The components can be allowed to change over time, and the magnitude of this change can be modeled and estimated. The part of the variability that is not explained by the chosen model is assumed to be uncorrelated noise, and we can evaluate the validity of this assumption by statistical residual diagnostics.

Our model is, of course, just one possibility for describing the evolution of the observed temperatures. We see it as a natural extension of the non-dynamic multiple linear regression model. The method allows us to estimate both the model states (e.g. time-varying trends) and the model parameters (e.g. variances related to temporal variability), and we can assess the uncertainties and statistical significance of the underlying features. In this study, we are not trying to use the model to predict future temperatures, but to detect trends by finding a description that is consistent with the observed temperature variability. To study the adequacy of the chosen model, we examine the model residuals to see whether the modeling assumptions are fulfilled.

With a properly set-up and estimated DLM model, we can detect significant changes in the background state and estimate the trends. The magnitude of the trend is not prescribed by the modeling formulation, and the method does not favor finding a “statistically significant” trend. The statistical model provides a method to detect and quantify trends, but it does not directly provide explanations for the observed changes, i.e., whether for example natural variability or solar effects could explain the changes in the background level. Model diagnostics and the increase in the observational data will eventually falsify incorrect models and other poorly selected prior specifications (see e.g. Tarantola 2006).

Dynamic linear models are linear regression models whose regression coefficients can depend on time. This dynamic approach is well known and documented in the time series literature (Chatfield 1989; Harvey 1991; Hamilton 1994; Migon et al. 2005). These models are sometimes called structural time series models or hidden Markov models. The latter name comes from the fact that dynamic regression is best described by the state space approach, where hidden state variables describe the time evolution of the components of the system. Modern computationally oriented references on the state space approach include Petris et al. (2009) and Durbin and Koopman (2012). The former describes the dlm software package for the R statistical language, which can be used to do the calculations described in this paper. We have used the Matlab software and computer code described in Laine et al. (2014). In this work, we use a DLM to explain the variability in the temperature time series with components for a smoothly varying, locally linear mean level, for a seasonal effect, and for noise that is allowed to have autoregressive correlation. The autoregressive stochastic error term accounts for long-range dependencies, irregular cycles, and the effects of different forcing mechanisms that a model with only a second order random walk for the mean and a stochastic seasonality would not suffice to explain.

A DLM can be formulated as a general linear state space model with Gaussian errors and written with an observation equation and a state evolution equation as

$$ y_{t} = F_{t} x_{t} + v_{t} ,\,\,\,v_{t} \sim N\left( {0,V_{t} } \right), $$
(1)
$$ x_{t} = G_{t} x_{t - 1} + w_{t} ,\,\,\,\,\,w_{t} \sim N(0,W_{t} ), $$
(2)

where \( y_{t} \) are the observations and \( x_{t} \) is a vector of unobserved states of the system at time t. Matrix \( F_{t} \) is the observation operator that maps the hidden states to the observations and matrix \( G_{t} \) is the model evolution operator that provides the dynamics of the hidden states. We assume that the uncertainties, represented by observation uncertainty \( v_{t} \) and model error \( w_{t} \) are Gaussian, with observation uncertainty covariance \( V_{t} \) and model error covariance \( W_{t} \). The time index \( t \) will go from 1 to n, the length of the time series to be analyzed. In this work, we analyze univariate temperature time series, but the framework would also allow the modeling of multivariate series. We use notation common to many time series textbooks, e.g., Petris et al. (2009).

The trend is defined as a change in the mean state of the system after all known systematic effects, such as seasonality, have been accounted for. To build a DLM for the trend, we start with a simple local level and trend model that has two hidden states \( x_{t} = \left[ {\begin{array}{*{20}c} {\mu_{t} } & {\alpha_{t} } \\ \end{array} } \right]^{T} \), where \( \mu_{t} \) is the mean level and \( \alpha_{t} \) is the change in the level from time t−1 to time t. This system can be written with the equations

$$ y_{t} = \mu_{t} + \varepsilon_{\text{obs}} ,\,\,\,\varepsilon_{\text{obs}} \sim N(0,\sigma_{t}^{2} ), $$
(3)
$$ \mu_{t} = \mu_{t - 1} + \alpha_{t} + \varepsilon_{\text{level}} ,\,\,\,\,\varepsilon_{\text{level}} \sim N(0,\sigma_{\text{level}}^{2} ), $$
(4)
$$ \alpha_{t} = \alpha_{t - 1} + \varepsilon_{\text{trend}} ,\,\,\,\varepsilon_{\text{trend}} \sim N(0,\sigma_{\text{trend}}^{2} ). $$
(5)

The Gaussian stochastic “ε” terms are used for the observation uncertainty and for random dynamics of the level and the trend. In terms of the state space Eqs. (1) and (2) this model is written as

$$ x_{t} = \left[ {\begin{array}{*{20}c} {\mu_{t} } & {\alpha_{t} } \\ \end{array} } \right]^{T} , G_{\text{trend}} = \left[ {\begin{array}{*{20}c} 1 & 1 \\ 0 & 1 \\ \end{array} } \right], F_{\text{trend}} = \left[ {\begin{array}{*{20}c} 1 & 0 \\ \end{array} } \right], W_{\text{trend}} = \left[ {\begin{array}{*{20}c} {\sigma_{\text{level}}^{2} } & 0 \\ 0 & {\sigma_{\text{trend}}^{2} } \\ \end{array} } \right],\,{\text{and}}\,\,V_{t} = \left[ {\sigma_{t}^{2} } \right]. $$
(6)

Note that only the state vector \( x_{t} \) and the observation uncertainty covariance (a \( 1 \times 1 \) matrix) depend on time t. Depending on the choice of the variances \( \sigma_{\text{level}}^{2} \) and \( \sigma_{\text{trend}}^{2} \), the mean state \( \mu_{t} \) will define a smoothly varying background level of the time series. In our analyses, we will set \( \sigma_{\text{level}}^{2} = 0 \) and estimate \( \sigma_{\text{trend}}^{2} \) from the observations. As noted by Durbin and Koopman (2012), this will result in an integrated random walk model for the mean level \( \mu_{t} \), which can be interpreted as a cubic spline smoother, with well-based statistical descriptions of the stochastic components.
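As an illustration, the trend block of Eq. (6) with \( \sigma_{\text{level}}^{2} = 0 \) can be set up and the implied integrated random walk simulated as in the following NumPy sketch (this is not the authors' Matlab implementation, and the variance value is an arbitrary placeholder):

```python
import numpy as np

def trend_block(sigma_trend=0.01):
    """Trend block of Eq. (6) with sigma_level^2 = 0 (integrated random walk)."""
    G = np.array([[1.0, 1.0],
                  [0.0, 1.0]])            # mu_t = mu_{t-1} + alpha_t; alpha_t = alpha_{t-1}
    F = np.array([[1.0, 0.0]])            # the observation sees only the level mu_t
    W = np.diag([0.0, sigma_trend ** 2])  # noise enters only through the trend state
    return G, F, W

# Simulate the implied smoothly varying background level
rng = np.random.default_rng(0)
G, F, W = trend_block()
x = np.zeros(2)
levels = []
for _ in range(120):
    x = G @ x + np.array([0.0, rng.normal(0.0, 0.01)])  # stochastic trend innovation
    levels.append((F @ x).item())
```

Because the only stochastic input drives the trend state, the simulated level traces are smooth, in line with the cubic spline smoother interpretation mentioned above.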

Temperature time series exhibit strong seasonal variability. In our DLM, the monthly seasonality is modeled with 11 state variables, which carry information about the seasonal effects of the individual months. In general, the number of seasonal states is one less than the number of observations per seasonal cycle, because the model already includes the mean level term. The corresponding matrices \( G_{\text{seas}} \) and \( F_{\text{seas}} \) (\( 11 \times 11 \) and \( 1 \times 11 \) matrices) and the error covariance matrix \( W_{\text{seas}} \) (\( 11 \times 11 \)) for the time-wise variability in the seasonal components are given by (Durbin and Koopman 2012):

$$ G_{\text{seas}} = \left[ {\begin{array}{*{20}c} { - 1} & { - 1} & { - 1} & \cdots & { - 1} \\ 1 & 0 & 0 & {} & 0 \\ 0 & 1 & 0 & {} & 0 \\ \vdots & {} & \ddots & {} & \vdots \\ 0 & \cdots & 0 & 1 & 0 \\ \end{array} } \right], F_{\text{seas}} = \left[ {\begin{array}{*{20}c} 1 & 0 & \ldots & 0 \\ \end{array} } \right], W_{\text{seas}} = \left[ {\begin{array}{*{20}c} {\sigma_{\text{seas}}^{2} } & 0 & \cdots & 0 \\ 0 & 0 & {} & {} \\ \vdots & {} & \ddots & {} \\ 0 & {} & {} & 0 \\ \end{array} } \right]. $$
(7)
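The seasonal matrices of Eq. (7) can be constructed programmatically; the noise-free propagation below also verifies the sum-to-zero property of Eq. (12). This is a NumPy sketch with an arbitrary placeholder variance, not the authors' code:

```python
import numpy as np

def seasonal_block(s=12, sigma_seas=0.05):
    """Dummy-type seasonal component of Eq. (7): s-1 states whose s
    consecutive values sum to zero up to the noise term (cf. Eq. (12))."""
    m = s - 1                        # 11 states for monthly data
    G = np.zeros((m, m))
    G[0, :] = -1.0                   # gamma_t = -(gamma_{t-1} + ... + gamma_{t-s+1}) + noise
    G[1:, :-1] = np.eye(m - 1)       # shift the older seasonal effects down by one
    F = np.zeros((1, m))
    F[0, 0] = 1.0                    # observe the current seasonal effect
    W = np.zeros((m, m))
    W[0, 0] = sigma_seas ** 2        # noise enters only the newest effect
    return G, F, W

# Noise-free propagation: any 12 consecutive seasonal effects sum to zero
G, F, W = seasonal_block()
x = np.arange(11, dtype=float) - 5.0   # arbitrary starting effects
gammas = []
for _ in range(24):
    x = G @ x                          # propagate without the noise term
    gammas.append((F @ x).item())
```

With the stochastic term \( \sigma_{\text{seas}}^{2} > 0 \), the same structure lets the monthly pattern drift slowly over the years.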

We allow autocorrelation in the residuals by using a first-order autoregressive model (AR(1)). In the DLM setting, we can estimate the autocorrelation coefficient and the extra variance terms, such as \( \sigma_{\text{seas}}^{2} \), together with the other model parameters. For a first-order autoregressive component with a coefficient ρ and an innovation variance \( \sigma_{AR}^{2} \), we simply define

$$ G_{AR} = \left[ \rho \right], F_{AR} = \left[ 1 \right], W_{AR} = \left[ {\sigma_{AR}^{2} } \right], $$
(8)

and both ρ and \( \sigma_{AR}^{2} \) can be estimated from the observations.

The next step in the DLM model construction is the combination of the selected individual model components into larger model evolution and observation equations by

$$ G = \left[ {\begin{array}{*{20}c} {G_{\text{trend}} } & 0 & 0 \\ 0 & {G_{\text{seas}} } & 0 \\ 0 & 0 & {G_{AR} } \\ \end{array} } \right], F = \left[ {\begin{array}{*{20}c} {F_{\text{trend}} } & {F_{\text{seas}} } & {F_{AR} } \\ \end{array} } \right], \,W = \left[ {\begin{array}{*{20}c} {W_{\text{trend}} } & 0 & 0 \\ 0 & {W_{\text{seas}} } & 0 \\ 0 & 0 & {W_{AR} } \\ \end{array} } \right], $$
(9)

and the analysis then proceeds to the estimation of the variance parameters and other parameters in model formulation (e.g. the AR coefficient ρ in the matrix \( G_{\text{AR}} \)), and to the estimation of the model states by state space Kalman filter methods.
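The block-diagonal combination of Eq. (9) and the Kalman filter recursion can be sketched as follows (a NumPy illustration with scalar observations; the component matrices, ρ, and all variances are placeholders rather than the estimated values):

```python
import numpy as np

def block_diag(*mats):
    """Stack component matrices on the diagonal, as in Eq. (9)."""
    n = sum(m.shape[0] for m in mats)
    k = sum(m.shape[1] for m in mats)
    out = np.zeros((n, k))
    r = c = 0
    for m in mats:
        out[r:r + m.shape[0], c:c + m.shape[1]] = m
        r += m.shape[0]
        c += m.shape[1]
    return out

def kalman_filter(y, G, F, W, V, x0, C0):
    """Kalman filter for Eqs. (1)-(2) with a scalar observation series;
    returns filtered state means and the Gaussian log-likelihood that is
    later used for estimating the auxiliary parameters."""
    x, C, loglik, means = x0.copy(), C0.copy(), 0.0, []
    for yt in y:
        x = G @ x                          # state prediction
        C = G @ C @ G.T + W
        v = yt - (F @ x).item()            # one-step prediction error
        S = (F @ C @ F.T).item() + V       # prediction error variance
        K = (C @ F.T).ravel() / S          # Kalman gain
        x = x + K * v                      # state update
        C = C - np.outer(K, F @ C)
        loglik += -0.5 * (np.log(2.0 * np.pi * S) + v * v / S)
        means.append(x.copy())
    return np.array(means), loglik

# Combine a trend block and an AR(1) block (rho and variances are placeholders)
G = block_diag(np.array([[1.0, 1.0], [0.0, 1.0]]), np.array([[0.5]]))
F = np.hstack([np.array([[1.0, 0.0]]), np.array([[1.0]])])
W = block_diag(np.diag([0.0, 1e-4]), np.array([[0.1]]))
y = np.sin(np.linspace(0.0, 6.0, 100))    # stand-in observation series
means, loglik = kalman_filter(y, G, F, W, V=0.01,
                              x0=np.zeros(3), C0=np.eye(3))
```

The same recursion, run with the full trend, seasonal, and AR(1) blocks, produces the marginal likelihood referred to later in this section.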

To give a more intuitive meaning to the model and the stochastic error terms involved, we write the observation equation of our model as

$$ y_{t} = \mu_{t} + \gamma_{t} + \eta_{t} + \varepsilon_{t} , t = 1, \ldots , n, $$
(10)

where \( y_{t} \) is the monthly temperature at time t, \( \mu_{t} \) is the mean temperature level, \( \gamma_{t} \) is the seasonal component for monthly data, \( \eta_{t} \) is an autoregressive error component, and \( \varepsilon_{t} \) is the error term for the uncertainty in the observed temperature values. The simplification \( \sigma_{\text{level}}^{2} = 0 \) in Eq. (4) allows us to write a second difference process for the mean level \( \mu_{t} \) as

$$ {{\Updelta}}^{2} \mu_{t} = \mu_{t} - 2\mu_{t - 1} + \mu_{t - 2} = \alpha_{t} - \alpha_{t - 1} = \varepsilon_{\text{trend}},\,\,\,{\text{with}}\,\,\varepsilon_{\text{trend}} \sim N(0,\sigma_{\text{trend}}^{2} ), $$
(11)

see e.g. Durbin and Koopman (2012), Sect. 2.3.1. For the seasonal component \( \gamma_{t} \), we have the condition that 12 consecutive monthly effects sum to zero on average, so for each t:

$$ \mathop \sum \limits_{i = 0}^{11} \gamma_{t - i} = \varepsilon_{seas} ,\,\,{\text{with}}\,\,\varepsilon_{\text{seas}} \sim N(0,\sigma_{\text{seas}}^{2} ). $$
(12)

The term \( \eta_{t} \) follows a first order autoregressive process, AR(1), with coefficient ρ:

$$ \eta_{t + 1} = \rho \eta_{t} + \varepsilon_{AR} ,\,\,\,{\text{with}}\,\,\varepsilon_{AR} \sim N(0,\sigma_{AR}^{2} ). $$
(13)

Finally, the observation uncertainty term \( \varepsilon_{t} \) is assumed to be zero mean Gaussian as

$$ \varepsilon_{t} \sim N\left( {0,\sigma_{t}^{2} } \right), $$
(14)

where the observation standard deviations \( \sigma_{t} \) are assumed to be known and correspond to the uncertainties from the spatial representativeness of the observations and from the averaging and the homogenization processes (Tietäväinen et al. 2010). The additional error terms \( \sigma_{\text{trend}}^{2} \), \( \sigma_{\text{seas}}^{2} \), and \( \sigma_{AR}^{2} \) account for the modeling error in the components of the model and are estimated from the data.

In the model construction above, we have four unknown model parameters: the three variances for the stochastic model evolution, \( \sigma_{\text{trend}}^{2} \), \( \sigma_{\text{seas}}^{2} \), and \( \sigma_{AR}^{2} \), and the autoregressive coefficient ρ. If the values of these parameters are known, the state space representation and the implied Markov properties of the processes allow the estimation of the marginal distributions of the states, given the observations and the parameters, by the Kalman filter and Kalman smoother formulas (Durbin and Koopman 2012). The Kalman smoother gives efficient recursive formulas for calculating the marginal distribution of the model states at each time t given the whole set of observations \( y_{t} , t = 1, \ldots ,n \). In a DLM these distributions are Gaussian, and thus defined by a mean vector and a covariance matrix. In addition, the auxiliary parameter vector θ = [\( \sigma_{\text{trend}}^{2} \), \( \sigma_{\text{seas}}^{2} \), \( \sigma_{AR}^{2} \), ρ] can be estimated using the marginal likelihood function that is provided as a by-product of the Kalman filter recursion. This likelihood can be used to estimate θ by the maximum likelihood method, and the obtained estimates can be plugged back into the equations. We use a Bayesian approach and Markov chain Monte Carlo (MCMC) simulation to estimate the posterior distribution of θ and to account for its uncertainty in the trend analysis.
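A minimal random-walk Metropolis sampler illustrates the MCMC idea (the paper uses the adaptive algorithm of Haario et al. 2006; here the target is a known toy distribution standing in for the actual prior-plus-Kalman-filter log-posterior):

```python
import numpy as np

def metropolis(logpost, theta0, step, n_samples=10_000, seed=0):
    """Random-walk Metropolis sampler; logpost would combine the priors
    with the Kalman filter marginal log-likelihood of theta."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = logpost(theta)
    chain = np.empty((n_samples, theta.size))
    for i in range(n_samples):
        proposal = theta + step * rng.standard_normal(theta.size)
        lp_prop = logpost(proposal)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
            theta, lp = proposal, lp_prop
        chain[i] = theta                           # keep current value either way
    return chain

# Demo on a known target (standard normal) instead of the real DLM posterior
chain = metropolis(lambda t: -0.5 * float(t @ t), np.zeros(1), step=1.0)
posterior = chain[len(chain) // 2:]   # discard the first half as burn-in
```

Discarding the first half of the chain mirrors the burn-in treatment described in Sect. 4.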

The level component \( \mu_{t} \) models the evolution of the mean temperature after the seasonal and irregular noise components have been filtered out. It allows us to study the temporal changes in the temperature. The trends can be studied visually, or by calculating trend related statistics from the estimated mean level component \( \mu_{t} \). Statistical uncertainty statements can be given by simulating realizations of the level component using MCMC and the Kalman simulation smoother (Durbin and Koopman 2012; Laine et al. 2014).

The strength of the DLM method is its ability to estimate all model components, such as trends and seasonality, in one estimation step and to provide a conceptually simple decomposition of the observed variability. Furthermore, the analysis does not require assumptions about the stationarity of the series in the sense required, e.g., in classical ARIMA time series analysis; indeed, ARIMA analyses can be seen as special cases of DLM analyses. For example, the simple local level and trend DLM of Eqs. (3–5) is equivalent to the ARIMA(0,2,2) model. In addition, the state space methods can easily handle missing observations, and they are extendible to non-linear state space models, to hierarchical parameterizations, and to non-Gaussian errors (e.g. Durbin and Koopman 2012; Gonçalves and Costa 2013). Details of the construction procedure of a DLM and of the estimation of the model states and parameters can be found in Gamerman (2006) and in Petris et al. (2009). We use an efficient adaptive MCMC algorithm by Haario et al. (2006) and the Kalman filter likelihood to estimate the four parameters in θ. The details of the estimation procedure can be found in Laine et al. (2014), who use a similar DLM to study trends in stratospheric ozone concentrations. We also conducted our analyses with the dlm package in R (Petris 2010) to verify the computations.

4 Results and discussion

We used a dynamic linear model with a local linear trend, a 12-month dummy type seasonal component, and an AR(1) autocorrelated error term to decompose the temperature time series. The time series consisted of 2004 monthly observations from the years 1847–2013. Figure 2 shows the measurement series and the modeled mean background temperature \( \mu_{t} \). For clarity, the observations in the figure are annual averages, but in all of the statistical analyses monthly data are used. The mean temperature has risen in two periods, from the 1850s to the late 1930s and from the end of the 1960s to the present day, and was close to constant between 1940 and 1970. It has been suggested that the mean temperature oscillates quasiperiodically on a multidecadal time scale either globally (e.g., Henriksson et al. 2012, and references therein) or regionally (e.g., Schlesinger and Ramankutty 1994). The multidecadal oscillation is suggested to provide part of the explanation both for the near-constant global mean temperatures in recent years, despite the warming effect of increasing greenhouse gas concentrations, and for the declining global mean temperature in the 1950s and 1960s, along with the cooling caused by postwar anthropogenic aerosol emissions. Therefore, we tested the data for 60–80-year oscillations in order to see whether a multidecadal oscillation is present also in our data and whether the observed changes in the trend of the time series are due to this phenomenon. The results (not shown) indicated that taking the multidecadal oscillations into account did not improve the model, as the change in the BIC value used in the model comparison was almost negligible; thus, we decided not to include them in the final model.
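The model comparison criterion can be sketched as follows (only the BIC formula is taken as given; the log-likelihood values below are invented purely for illustration and are not the fitted values):

```python
import numpy as np

def bic(loglik, n_params, n_obs):
    """Bayesian information criterion; a lower value indicates a better
    trade-off between fit and model complexity."""
    return -2.0 * loglik + n_params * np.log(n_obs)

# Comparing a 4-parameter model with a hypothetical variant that adds a
# multidecadal cycle (two extra parameters); log-likelihoods are made up
base = bic(loglik=-3050.0, n_params=4, n_obs=2004)
cycle = bic(loglik=-3049.0, n_params=6, n_obs=2004)
# cycle > base here: the small gain in fit does not justify the extra parameters
```
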

Fig. 2

Yearly mean temperatures as dots and the mean temperature level \( \mu_{t} \) as a smooth solid line. The decadal average temperatures given by the model are shown as mean (solid black line) and with 50 and 95 % probability limits (darker and lighter gray bars)

The variance parameters in matrices \( V_{t} \) and \( W_{t} \) and the autocorrelation coefficient ρ used in the DLM were estimated using the MCMC simulation algorithm. The length of the MCMC chain was 10,000, the last half of the chain was used for calculating the posterior values, and the convergence of the MCMC algorithm was assessed using plots of the MCMC chain, by calculating convergence diagnostics statistics, and by estimating the Monte Carlo error of the posterior estimates.

From the first to the last 10-year period of the data (from 1847–1856 to 2004–2013), the average temperature in Finland has risen by a total of 2.3 ± 0.4 °C (95 % probability limits). This equals an average change of 0.14 °C/decade. The number of measurement stations in the first years of the measurement period was rather low, which is accounted for in the observational error \( \sigma_{t}^{2} \), but this causes only a small increase in the uncertainty estimates at the beginning of the series. Figure 2 also shows the Finnish decadal average temperatures estimated from the model, with grey bars for the 50 and 95 % probability limits, and the actual numbers are presented in Table 1. The temperature change was negligible in the middle of the 20th century, but the current temperatures show an indisputably rising trend. The mean temperature within 2000–2010 was almost one degree higher than in the 1960s and more than two degrees higher than in the 1850s.
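As a quick consistency check of the quoted per-decade rate (assuming it is simply the total change divided by the full 167-year span of the series):

```python
total_rise_c = 2.3              # change between 1847-1856 and 2004-2013
span_years = 2013 - 1847 + 1    # 167 years of data
rate_per_decade = total_rise_c / span_years * 10.0
# rounds to the quoted 0.14 °C/decade
```
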

Table 1 Modeled decadal average temperatures [°C] with lower and upper limits of the 95 % probability limits as in Fig. 2

Figure 3 shows the prior and posterior probability distributions for the unknown parameters and the numeric values for prior and posterior means are shown in Table 2 with corresponding relative standard errors.

Fig. 3

Parameter prior (dotted line) and posterior (solid line) probability distributions. The priors are log-normal for the variances and uniform U(0,1) for the correlation parameter ρ. The posteriors are estimated from the MCMC chain using a kernel density estimation method

Table 2 Prior and posterior means and corresponding relative standard errors shown in Fig. 3

The residual diagnostics for the DLM model are shown in Fig. 4. The distribution of residuals agrees well with the normality assumption and there is no significant autocorrelation.
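A whiteness check of the kind shown in the upper panel of Fig. 4 can be sketched as follows (NumPy; the residuals here are synthetic stand-ins, not the actual model residuals):

```python
import numpy as np

def acf(res, nlags=24):
    """Sample autocorrelation function of (standardized) residuals."""
    res = np.asarray(res, dtype=float)
    res = res - res.mean()
    denom = np.sum(res ** 2)
    return np.array([np.sum(res[k:] * res[:res.size - k]) / denom
                     for k in range(nlags + 1)])

# For white residuals, roughly 95% of the lagged autocorrelations should
# fall within +/- 1.96/sqrt(n)
rng = np.random.default_rng(3)
residuals = rng.standard_normal(2004)   # stand-in for 2004 monthly residuals
r = acf(residuals)
bound = 1.96 / np.sqrt(residuals.size)
share_inside = np.mean(np.abs(r[1:]) < bound)
```

A normal probability plot of the same standardized residuals would complete the diagnostics shown in the lower panel of Fig. 4.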

Fig. 4

Residual diagnostics plots for the DLM model. Upper panel shows the autocorrelation function estimated from the standardized residuals; lower panel shows the normal probability plot of the standardized residuals

The same model, but without the seasonal component, was fitted to the observations of each month separately. Figure 5 shows that the change in temperature has not been uniform across the months. The increase in temperature has been largest in late autumn and in spring, whereas the change in the summer months, especially in July and August, has been smaller. The temperature changes from 1847–1856 to 2004–2013 for each month are collected in Table 3.

Fig. 5

Monthly mean temperatures with the mean modeled temperature and corresponding 95 % probability limits

Table 3 Temperature change, between the last and the first 10 years, for each month

5 Conclusions

By using an advanced statistical time series approach, a dynamic linear model (DLM), we were able to model the uncertainty caused by year-to-year natural variability and the uncertainty caused by the incomplete data and non-uniform sampling in the early observational years, and to estimate the uncertainty limits for the increase of the mean temperature in Finland. The Finnish temperature time series exhibits a statistically significant trend, which is consistent with human-induced global warming. Our analysis shows that the mean temperature has risen by a total of 2.3 ± 0.4 °C (95 % probability limits) during the years 1847–2013, which amounts to 0.14 °C/decade. The warming trend before the 1940s was close to linear, whereas the temperature change in the mid-20th century was negligible. However, the warming after the late 1960s has been more rapid than ever before. Within the last 40 years the rate of change has varied between 0.2 and 0.4 °C/decade. The highest increases were seen in November, December, and January. The spring months (March, April, May) have also warmed more than the annual average. Impacts of long-term cold season and spring warming have been documented, e.g., in the later freeze-up and earlier ice break-up of Finnish lakes (Korhonen 2006) and in the advancement of the timing of leaf bud burst and flowering of native deciduous trees growing in Finland (Linkosalo et al. 2009). Although warming during the growing season months has been small in absolute terms, it has, in addition to other drivers (forest management, nitrogen deposition, CO2 concentration), contributed to the increased growth of boreal forests in Finland since the 1960s (Kauppi et al. 2014). The analysis of the 166-year-long time series shows that the temperature change in Finland follows the global warming trend, which can be attributed to anthropogenic activities (IPCC 2013). The observed warming in Finland is almost twice as high as the global temperature increase (0.74 °C/100 years), which is in line with the notion that warming is stronger at higher latitudes.