Introduction

Theoretical and empirical studies have shown that a positive relationship exists between financial markets and economic growth (e.g., Levine, 1997; Rajan and Zingales, 1998; Rousseau and Wachtel, 2000; Beck et al., 2003; Guptha and Rao, 2018). Given the significance of financial markets, forecasting financial returns occupies a paramount position in investment decision making. However, stock markets are characterized by high volatility, dynamism, and complexity (Johnson et al., 2003; Cristelli, 2014; Wieland, 2015). Movements in stock markets are influenced by several factors, such as macroeconomic factors, international events, and human behavior. Hence, forecasting stock returns is a challenging task. The profitability of investments in stock markets depends heavily on the predictability of stock movements. If a forecasting model or technique can precisely predict the direction of the market, investment risk and uncertainty can be minimized. This would enhance investment flows into stock markets and also be useful for policymakers and regulators in making appropriate decisions and taking corrective measures.

There are two distinct schools of thought—namely, fundamental analysis and technical analysis—for predicting stock price movements. Fundamentalists forecast stock prices on the basis of financial analyses of companies or industries. Technical analysts, meanwhile, use historical securities data and predict future prices on the assumption that stock prices are determined by market forces and that history tends to repeat itself (Levy, 1967). These theories coexisted for several decades as strategies for investment decision making. These approaches were challenged in the 1960s by random walk theory and the closely related efficient market hypothesis (Fama, 1970), which proposes that future changes in stock prices cannot be predicted from past price changes. Some empirical studies have shown the presence of a ‘random walk’ in stock prices (e.g., Tong et al., 2014; Konak and Seker, 2014; Erdem and Ulucak, 2016). However, most empirical studies have found that stock prices are predictable (Darrat and Zhong, 2000; Lo and MacKinlay, 2002; Harrison and Moore, 2012; Owido et al., 2013; Radikoko, 2014; Said, 2015; Almudhaf, 2018).

Various forecasting techniques are available for time series forecasting. Autoregressive integrated moving average (ARIMA) models were proposed by Box and Jenkins (1970) for time series analysis and forecasting. Some studies have employed ARIMA models to forecast stock market returns (Al-Shaib, 2006; Ojo and Olatayo, 2009; Adebiyi and Oluinka, 2014; Mondal et al., 2014). Quite a few studies found that ARIMA models produced inferior forecasts for financial time series data (Zhang, 2003; Adebiyi and Oluinka, 2014; Khandelwal et al., 2015). To account for nonlinearities resulting from regime changes in economies, some researchers have used Markov regime-switching models and threshold autoregressive (TAR) models assuming nonlinear stationary processes to predict stock prices (Hamilton, 1989; Tong, 1990). Tsay (1989) proposed a simple yet widely applicable model-building procedure for threshold autoregressive models as well as a test for threshold nonlinearity. De Gooijer (1998) considered regime switching in a moving average (MA) model and used validation criteria for self-exciting threshold autoregressive (SETAR) model selection. Some empirical studies comparing different methods with SETAR found that this method produced superior results to linear models (e.g., Clements and Smith, 1999; Boero and Marrocu, 2002; Boero, 2003; Firat, 2017).

In the late 1980s, a class of artificial intelligence (AI) models—such as feedforward, backpropagation, and recurrent neural network models—were introduced for forecasting purposes. The distinguishing features of artificial neural networks (ANN) are that they are data-driven, nonlinear, and self-adaptive, and they have very few apriori assumptions. This makes ANNs valuable and attractive for forecasting financial time series. Among ANN models, the feed-forward neural network with a single hidden layer has become the most popular for forecasting stock market returns (Zhang, 2003). Many studies have shown that these models yield more accurate forecasts compared to naïve and linear models (e.g., Ghiassi et al., 2005; Mostafa, 2010; Qiu et al., 2016; Aras and Kocakoc, 2016).

In addition, there are various neural network models for forecasting stock returns. Lu and Wu (2011) used the cerebellar model articulation controller neural network (CMAC NN) model to forecast the stock market indices of the Nikkei 225 and the Taiwan Stock Exchange. The results showed that the CMAC NN made more accurate forecasts than support vector regression and back-propagation neural network (BPNN) models. Guresen et al. (2011) observed that classical ANN models and multilayer perceptron (MLP) outperformed GARCH-class models for the NASDAQ index. Lahmiri (2016) employed variational mode decomposition (VMD) based general regression neural networks (GRNN) for four economic and financial data sets and found that VMD-GRNN models outperformed the ARIMA model and other neural network models. Nayak and Misra’s (2018) genetic algorithm-based condensed polynomial neural network (GA-CPNN) improved the accuracy of forecasting stock indices compared to radial basis function neural network (RBFNN) and multilayer perceptron and genetic algorithm (MLP-GA) models. Zhong and Enke (2019) observed that techniques such as deep neural networks using principal component analysis (PCA) and artificial neural networks performed better than traditional models. However, most studies have found that traditional ANN models, as well as ANN models combined with linear models, produce more accurate forecasts than other models (e.g., Asadi et al., 2010; Wang et al., 2011; Khandelwal et al., 2015; Mallikarjuna et al., 2018).

Recently, frequency-domain models, such as spectral analysis, wavelets, and Fourier transformations, have been proposed to improve the forecasting accuracy of financial time series. One widely used technique is singular spectrum analysis (SSA), which is a robust nonparametric method with no prior assumptions about the data (Golyandina et al., 2001; Hassani et al., 2013a). SSA decomposes a time series into its components and then reconstructs the series without the random-noise component before using the reconstructed series to forecast future points in the series (Hassani, 2007; Ghodsi and Omer, 2014). Since most financial time series data sets exhibit neither purely linear nor purely nonlinear patterns, the combination of linear and nonlinear, i.e., hybrid, techniques to model complex data structures for improved accuracy has been proposed (Asadi et al., 2010; Khashei and Bijari, 2010; Khashei and Bijari, 2012; Khandelwal et al., 2015; Ince and Trafalis, 2017). Khashei and Hajirahimi (2017) compared linear and nonlinear models with hybrid models (HM) and concluded that hybrid models perform better than individual models.

Only a few studies have aimed to find a suitable method for forecasting the stock returns of a group of markets. Guidolin et al. (2009) evaluated the performance of linear and nonlinear models for forecasting the financial asset returns of G7 countries. They found that nonlinear models, such as threshold autoregressive (TAR) and smooth transition autoregressive (STAR) models, performed better than linear models in the case of US and UK asset returns. Meanwhile, simple linear models such as random walk and autoregressive models were better for French, German, and Italian asset returns. This suggests that no single model is suitable for forecasting the returns of all stock markets. Awajan et al. (2018) compared the performance of several forecasting methods by applying them to six stock markets and found that the empirical mode decomposition Holt–Winters method (EMD-HW) provided more accurate forecasts than other models.

Though there are various techniques for forecasting stock market returns, no single method can be employed uniformly for the returns of all stock markets. The literature indicates that there is no consensus among researchers regarding the techniques for forecasting stock market returns. The present study, therefore, aimed to evaluate different forecasting techniques—namely, ARIMA, SETAR, ANN, SSA, and HM models, representing linear, nonlinear, artificial intelligence (AI), frequency domain, and hybrid methods, respectively—as applied to individual stock markets. This study also examined the suitability of different forecasting methods for each category of the world stock markets—namely, developed, emerging, and frontier. Finding a single method that can produce optimal forecasts for all markets could help investors save time and resources and make better decisions. This study is mainly useful for international investors and foreign institutional investors who wish to minimize risks and diversify their portfolios, with the aim of maximizing profits. The objectives of the present study are outlined below.

Objectives

  1. To forecast stock market returns using linear, nonlinear, artificial intelligence, frequency domain, and hybrid methods.

  2. To find the most appropriate forecasting techniques among the five above-mentioned techniques for developed, emerging, and frontier markets.

  3. To check whether any single technique can be applied to all markets to obtain optimal forecasts.

The rest of this paper is organized as follows. Section 2 describes the data and methods employed in the study. Section 3 presents the empirical results. Finally, the conclusions are given in section 4.

Data and methodology

In accordance with the objectives of this study, we considered three types of markets—developed, emerging, and frontier—based on the Morgan Stanley Capital International classification (MSCI, 2018). The market indices taken for the developed category are Australia (ASX 200), Canada (TSX Composite), France (CAC 40), Germany (DAX), Japan (NIKKEI 225), South Korea (KOSPI), Switzerland (SMI), United Kingdom (FTSE 100), and the United States (S&P 500). Those for emerging markets are Brazil (BOVESPA), China (SSEC), Egypt (EGX 30), India (SENSEX), Indonesia (IDX), Mexico (BMV IPC), Russia (MOEX), South Africa (JSE 40), Thailand (SET), and Turkey (BIST 100). Lastly, those in the frontier category are Argentina (S&P MERVAL), Estonia (TSEG), Kenya (NSE 20), Sri Lanka (CSE AS), and Tunisia (TUNINDEX). The daily closing prices of these indices for the period 1 January 2000 to 30 December 2018 were obtained from the website www.investing.com.

Asset returns (Rt) were calculated from the closing prices of all indices using the formula:

$$ {R}_t=\frac{\left({P}_t-{P}_{t-1}\right)}{P_{t-1}}\ast 100 $$
(1)

where Pt is the price of the asset in the current time period and Pt − 1 is its price in the previous time period.
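For illustration, eq. 1 can be computed directly from a price series; this is a minimal sketch, and the prices shown are hypothetical:

```python
def simple_returns(prices):
    """Percentage returns R_t = (P_t - P_{t-1}) / P_{t-1} * 100, as in eq. 1."""
    return [(p1 - p0) / p0 * 100 for p0, p1 in zip(prices, prices[1:])]

# Hypothetical closing prices; the returns are approximately [10.0, -10.0].
prices = [100.0, 110.0, 99.0]
print(simple_returns(prices))
```

Note that a series of N prices yields N − 1 returns, since the first observation has no predecessor.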

Autoregressive integrated moving average (ARIMA)

Proposed by George Box and Gwilym Jenkins in 1970, ARIMA models are among the most popular linear models. In ARIMA models, the future value of a variable is obtained through a linear function of some past observations of the variable and some random errors. The process that generates the time series has the form of:

$$ {y}_t=c+{\phi}_1{y}_{t-1}+{\phi}_2{y}_{t-2}+\dots +{\phi}_p{y}_{t-p}+{\theta}_1{\varepsilon}_{t-1}+{\theta}_2{\varepsilon}_{t-2}+\dots +{\theta}_q{\varepsilon}_{t-q}+{\varepsilon}_t, $$
(2)

where yt is the variable to be explained at time t; c is the constant or intercept; ϕi (i = 1, 2, …, p) and θj (j = 1, 2, …, q) are the model parameters; and p and q are integers, often referred to as the AR and MA orders of the model, respectively. The random errors εt are assumed to be independently and identically distributed with mean zero and constant variance σ2. Model building involves a three-step iterative process of identification, estimation, and diagnostic checking. The identification step specifies a tentative model by deciding the orders of the AR (p) and MA (q) terms. Once a tentative model is specified, its parameters are estimated so that an overall measure of error is minimized, generally with a nonlinear optimization procedure. After parameter estimation, diagnostic checking for the adequacy of the model must be done, which involves testing whether the assumptions about the errors εt are satisfied. If the model is adequate, one can proceed to forecast; if not, a new tentative model must be identified, followed by parameter estimation and model verification. This three-step process is repeated until a satisfactory model is selected for forecasting.
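The estimation step can be illustrated for the pure AR part of eq. 2. The sketch below fits an AR(p) by ordinary least squares and iterates the recursion for out-of-sample forecasts; it is an illustrative simplification, since a full Box–Jenkins fit also estimates the MA terms and runs diagnostic checks, and the function names are our own:

```python
import numpy as np

def fit_ar(y, p):
    """Estimate y_t = c + phi_1*y_{t-1} + ... + phi_p*y_{t-p} by OLS
    (AR part of eq. 2 only; MA terms and diagnostics are omitted)."""
    y = np.asarray(y, dtype=float)
    # Row t of the design matrix: [1, y_{t-1}, ..., y_{t-p}], target y_t.
    X = np.column_stack([np.ones(len(y) - p)] +
                        [y[p - i - 1:len(y) - i - 1] for i in range(p)])
    coefs, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coefs  # [c, phi_1, ..., phi_p]

def forecast_ar(y, coefs, h):
    """Iterate the fitted AR recursion h steps ahead."""
    hist, p, out = list(y), len(coefs) - 1, []
    for _ in range(h):
        nxt = coefs[0] + sum(coefs[i + 1] * hist[-i - 1] for i in range(p))
        hist.append(nxt)
        out.append(nxt)
    return out
```

On data generated exactly by an AR(1) recursion, the OLS step recovers the intercept and slope, and the forecast function simply continues the recursion.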

Self-exciting threshold autoregressive (SETAR)

The SETAR model, developed by Tong (1983), is a type of autoregressive model that can be applied to time series data. The model allows its parameters to switch between regimes (Watier and Richardson, 1995). Regime switching in this model is based on the dependent variable’s own dynamics, i.e., it is self-exciting. In other words, the threshold in the SETAR model relates to the endogenous variable, whereas in the TAR model it relates to an exogenous variable. The model assumes a different autoregressive process on each side of particular threshold values. SETAR models have the advantage of capturing commonly observed nonlinear phenomena that cannot be captured by linear models such as exponential smoothing and ARIMA models.

A Threshold Autoregressive model can be transformed into a SETAR model if the threshold variable is taken as a lagged value of the time series itself. The SETAR model with two regimes is specified as:

$$ {y}_t=\left\{\begin{array}{c}{\alpha}_0+\sum \limits_{i=1}^p\ {\alpha}_i{y}_{t-i}+{\varepsilon}_t\ if\ {y}_{t-d}\le \tau\ \\ {}{\beta}_0+\sum \limits_{i=1}^p\ {\beta}_i{y}_{t-i}+{\varepsilon}_t\ if\ {y}_{t-d}>\tau\ \end{array},\right. $$
(3)

where αi and βi are autoregressive coefficients, p is the order of the SETAR model, d is the delay parameter, and yt − d is the threshold variable; εt is a series of independent and identically distributed random variables with mean 0 and variance \( {\sigma}_{\varepsilon}^2 \); and τ is the threshold value. If the value of τ is known, the observations can be separated by comparison to the threshold, i.e., according to whether yt − d is below or above it. Then, each regime’s AR model is estimated using the ordinary least squares method (Ismail and Isa, 2006). Since the threshold value is generally unknown, it must be determined along with the other parameters of the SETAR model.
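The threshold search described above can be sketched as follows: for each candidate τ on a grid, the observations are split by whether yt − d ≤ τ, each regime's AR part is fitted by OLS, and the τ with the smallest pooled sum of squared residuals is kept. This is an illustrative sketch for p = 1 only; the function name and the quantile grid for τ are our own choices:

```python
import numpy as np

def fit_setar(y, d=1, taus=None):
    """Two-regime SETAR(1) sketch (eq. 3 with p = 1): grid-search the
    threshold tau; fit each regime's AR(1) by OLS; keep the tau with
    the lowest pooled sum of squared residuals (SSR)."""
    y = np.asarray(y, float)
    t = np.arange(max(1, d), len(y))
    target, lag1, thr = y[t], y[t - 1], y[t - d]
    if taus is None:  # candidate thresholds from interior quantiles
        taus = np.quantile(thr, np.linspace(0.15, 0.85, 15))
    best = None
    for tau in taus:
        ssr, params = 0.0, []
        for mask in (thr <= tau, thr > tau):
            if mask.sum() < 3:       # skip taus leaving a regime too small
                break
            X = np.column_stack([np.ones(mask.sum()), lag1[mask]])
            b, *_ = np.linalg.lstsq(X, target[mask], rcond=None)
            ssr += np.sum((target[mask] - X @ b) ** 2)
            params.append(b)
        else:
            if best is None or ssr < best[0]:
                best = (ssr, tau, params)
    return best[1], best[2]  # threshold, [low-regime, high-regime] coefs
```

On simulated data with slope 0.8 below a zero threshold and −0.5 above it, the grid search recovers a threshold near zero and regime slopes near the true values.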

Artificial neural networks (ANN)

Artificial neural networks are flexible computing frameworks that can be used for modeling a broad range of nonlinear data. The major advantages of ANN models are that they are data-driven universal approximators, which can approximate a large class of functions with high accuracy. The model-building process does not require prior assumptions about the model form, since the characteristics of the data determine the network model. A feedforward neural network with a single hidden layer is one of the most widely used methods for forecasting time series data (Zhang, 2003). The structure of the model is defined by a network of three layers of simple processing units connected by acyclic links. The mathematical relationship between the output yt and the inputs (yt − 1, yt − 2, …, yt − p) can be defined as:

$$ {y}_t={w}_0+\sum \limits_{j=1}^q\ {w}_j\,g\left({w}_{0,j}+\sum \limits_{i=1}^p{w}_{ij}\,{y}_{t-i}\right)+{\varepsilon}_t, $$
(4)

where wj (j = 0, 1, 2, …, q) and wij (i = 0, 1, 2, …, p; j = 1, 2, …, q) are the connection weights or model parameters, p is the number of input nodes, and q is the number of hidden nodes. The transfer function of the hidden layer is given by the logistic function:

$$ Sig(x)=\frac{1}{1+\exp \left(-x\right)}. $$
(5)

Hence, the ANN model in eq. 4 performs a nonlinear functional mapping from the past observations (yt − 1, yt − 2, …, yt − p) to the future value yt—that is,

$$ {y}_t=f\left({y}_{t-1},\dots, {y}_{t-p},w\right)+{\varepsilon}_t, $$
(6)

where f is a function determined by the network structure and connection weights, and w is a vector of all parameters. Thus, this neural network model is similar to a nonlinear autoregressive model.

The choice of the value of q depends on the data, as there is no standard procedure for determining this parameter. Another vital task in ANN modeling is choosing the dimension of the input vector, i.e., the number of lagged observations p. This is perhaps the most crucial parameter to be estimated, as the determination of the nonlinear autocorrelation structure of the time series depends on it. However, there is no rule of thumb for selecting the value of p. Therefore, trials are often conducted to select optimal values of p and q. Once the network structure is specified by the parameters p and q, the network is ready for training. This is done with efficient nonlinear optimization algorithms, such as gradient descent and conjugate gradient algorithms, beyond the basic backpropagation training algorithm (Hung, 1993).
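As an illustration of the mapping in eq. 4, the sketch below computes a one-step forecast for given weights with the logistic transfer function of eq. 5. The weights here are illustrative, untrained values; training would adjust them by backpropagation or one of the optimizers mentioned above:

```python
import numpy as np

def sigmoid(x):
    """Logistic transfer function of eq. 5."""
    return 1.0 / (1.0 + np.exp(-x))

def ann_forecast(y, w0, w, W, b):
    """One-step forecast from the single-hidden-layer network of eq. 4.
    w0: output bias; w: (q,) hidden-to-output weights; W: (q, p)
    input-to-hidden weights; b: (q,) hidden biases w_{0,j}.
    The last p values of y serve as the inputs."""
    p = W.shape[1]
    x = np.asarray(y[-p:], float)[::-1]   # [y_{t-1}, ..., y_{t-p}]
    hidden = sigmoid(b + W @ x)           # g(w_{0,j} + sum_i w_ij y_{t-i})
    return w0 + w @ hidden                # w_0 + sum_j w_j * hidden_j
```

With all hidden weights and biases at zero, each hidden unit outputs sigmoid(0) = 0.5, so the forecast reduces to w0 plus half the sum of the output weights, which is a convenient sanity check.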

In ANNs, the most widely used activation functions are the sigmoid functions. Recently, in deep learning, several other functions have been suggested as alternatives to the sigmoid function, such as the hyperbolic tangent (tanh) function, rectified linear units (ReLU), softmax, and Gaussian. These functions are given below.

The hyperbolic tangent (tanh) function is one of the alternatives to the sigmoid function. It can be defined as:

$$ \tanh (x)=\frac{1-{e}^{-2x}}{1+{e}^{-2x}}. $$
(7)

This function is similar to the sigmoid function; however, it compresses real-valued inputs to the range between − 1 and 1, i.e., tanh (x) ∈ (−1, 1).

Rectified linear units (ReLU) are defined as:

$$ f(x)=\max \left(0,x\right), $$
(8)

where x is the input to a neuron. In other words, the activation is simply thresholded at zero. The range of the ReLU is between 0 and ∞.

The softmax function, also called the normalized exponential function, is a generalization of the logistic function that ‘compresses’ a K-dimensional vector z of arbitrary real values to a K-dimensional vector σ(z) of real values in the range [0,1] that sum to 1. The function is defined as:

$$ \sigma {(z)}_j=\frac{e^{z_j}}{\sum \limits_{k=1}^K{e}^{z_k}},j=1,2,\dots, K. $$
(9)

Gaussian activation functions are continuous, bell-shaped curves. The node output depends on how close the net input is to a chosen center (mean), and can therefore be interpreted in terms of class membership (1 or 0). The function is defined as:

$$ f(x)=\frac{1}{\sigma \sqrt{2\pi }}{e}^{\frac{-{\left(x-\mu \right)}^2}{2{\sigma}^2}} $$
(10)
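The activation functions of eqs. 7–10 can be written compactly as below. This is an illustrative sketch; the max-shift inside softmax is a standard numerical-stability device not shown in eq. 9, and the function names are our own:

```python
import numpy as np

def tanh(x):
    """Hyperbolic tangent, eq. 7; range (-1, 1)."""
    return (1 - np.exp(-2 * x)) / (1 + np.exp(-2 * x))

def relu(x):
    """Rectified linear unit, eq. 8; range [0, inf)."""
    return np.maximum(0.0, x)

def softmax(z):
    """Normalized exponential, eq. 9; outputs sum to 1."""
    e = np.exp(z - np.max(z))   # subtract max for numerical stability
    return e / e.sum()

def gaussian(x, mu=0.0, sigma=1.0):
    """Bell-shaped Gaussian activation, eq. 10."""
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
```

Note that the form in eq. 7 is algebraically identical to the built-in hyperbolic tangent, which provides an easy cross-check.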

Singular Spectrum analysis (SSA)

Some studies have employed the SSA method to forecast financial time series (Hassani et al., 2013b; Ghodsi and Omer, 2014). The SSA method comprises two stages: decomposition and reconstruction. In the first stage, the time series is decomposed to separate the signal from the noise; in the second stage, a less noisy series is reconstructed and used for forecasting via the following steps (Hassani, 2007):

Step 1. Embedding. Embedding can be considered a mapping that transfers a one-dimensional time series YN = (y1,…, yN) to a multi-dimensional series X1, …,XK with vectors Xi = (yi,…, yi + L − 1)T ϵ RL, where L (2 ≤ L ≤ N − 1) is the window length, and K = N − L + 1. The result of this step is the trajectory matrix.

$$ \mathrm{X}=\left[{\mathrm{X}}_1,\dots, {\mathrm{X}}_K\right]={\left({\mathrm{X}}_{ij}\right)}_{i,j=1}^{L,K} $$

Step 2. Singular value decomposition (SVD). In this step, the SVD of X is implemented. Denote by λ1, …, λL the eigenvalues of XXT arranged in decreasing order (λ1 ≥ … ≥ λL ≥ 0) and by U1, …, UL the corresponding eigenvectors. The SVD of X can be written as X = X1 + … + XL, where \( {\mathrm{X}}_{\mathrm{i}}=\sqrt{\uplambda_{\mathrm{i}}}{\mathrm{U}}_{\mathrm{i}}{\mathrm{V}}_{\mathrm{i}}^{\mathrm{T}} \) and \( {\mathrm{V}}_{\mathrm{i}}={\mathrm{X}}^{\mathrm{T}}{\mathrm{U}}_{\mathrm{i}}/\sqrt{\uplambda_{\mathrm{i}}} \).

Step 3. Grouping. This step involves splitting the elementary matrices into several groups and then adding the matrices within each group.

Step 4. Diagonal averaging. The main objective of diagonal averaging is to transform a matrix into the Hankel matrix form, which can be later converted into a time series.

Step 5. Forecasting. There exist two forms of SSA forecasting: recurrent singular spectrum analysis (RSSA) and vector singular spectrum analysis (VSSA). In this study, we employed RSSA. Let \( {v}^2={\pi}_1^2+\dots +{\pi}_r^2 \), where πi is the last component of the eigenvector Ui (i = 1, …, r). Additionally, for any vector U ϵ RL, denote by U∇ ϵ RL − 1 the vector comprising the first L − 1 components of U. Let yN + 1, …, yN + h denote the h terms of the SSA recurrent forecast. Then, the h-step-ahead forecasts are obtained using the following formula.

$$ {y}_i=\left\{\begin{array}{c}\overset{\sim }{y_i}\ for\ i=1,\dots, N\\ {}\sum \limits_{j=1}^{L-1}\ {\alpha}_j{y}_{i-j}\kern0.5em for\ i=N+1,\dots, N+h\end{array},\right. $$
(11)

where \( \overset{\sim }{y_i}\ \left(i=1,\dots, N\right) \) is the reconstructed series, and vector A = (α1, …, αL − 1) can be computed by

$$ A=\frac{1}{1-{v}^2}\sum \limits_{i=1}^r\ {\pi}_i{U}_i^{\nabla }. $$
(12)
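Steps 1–4 above can be sketched in a few lines of linear algebra. This is an illustrative sketch only: it keeps the leading r singular components as the "signal" group and omits the recurrent forecast of eqs. 11–12; the function name is our own:

```python
import numpy as np

def ssa_reconstruct(y, L, r):
    """Basic SSA sketch: embed the series in an L-window trajectory
    matrix (step 1), take its SVD (step 2), keep the leading r
    components as one group (step 3), and Hankelize back to a series
    by diagonal averaging (step 4)."""
    y = np.asarray(y, float)
    N = len(y)
    K = N - L + 1
    X = np.column_stack([y[i:i + L] for i in range(K)])  # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :r] * s[:r]) @ Vt[:r]                     # rank-r approximation
    # Diagonal averaging: average Xr over each anti-diagonal (i + j = const).
    rec = np.zeros(N)
    cnt = np.zeros(N)
    for j in range(K):
        rec[j:j + L] += Xr[:, j]
        cnt[j:j + L] += 1
    return rec / cnt
```

A pure sinusoid has a trajectory matrix of rank 2, so keeping r = 2 components reconstructs it essentially exactly, which makes a convenient correctness check.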

Hybrid model (HM)

Either purely linear or purely nonlinear models might not be adequate for predicting stock returns, since stock returns are complex in nature. Even data-driven ANNs have produced mixed results in forecasting time series data. For example, Denton (1995) used simulated data and found that, when there is multicollinearity or there are outliers in the data, neural networks forecast better than linear regression models. The sample size and noise level play a crucial role in determining the performance of ANNs on linear regression problems (Markham and Rakes, 1998). Therefore, ANNs may not be useful for all types of data.

Given the complexities in stock market data, a method that can handle both linear and nonlinear patterns, i.e., a hybrid model, might be an alternative for forecasting. The linear and nonlinear aspects of the underlying patterns in the data can be captured by combining different models.

It is useful to consider a time series as consisting of a linear autocorrelation structure and a nonlinear component. That is,

$$ {y}_t={L}_t+{N}_t, $$
(13)

where Lt represents the linear component and Nt denotes the nonlinear component. First, a linear model is fitted to the data; its residuals then contain only the nonlinear relationship. Letting et denote the residual at time t from the linear model:

$$ {e}_t={y}_t-{\hat{L}}_t, $$
(14)

where \( {\hat{L}}_t \) is the fitted value at time t of the linear component in eq. 13. Residuals are crucial in diagnosing the adequacy of linear models, because the presence of linear correlation in the residuals indicates that the linear model is inadequate. Likewise, any significant nonlinear pattern in the residuals indicates a limitation of the linear model. Such nonlinear relationships can be discovered by modeling the residuals using ANNs. The ANN model for the residuals with n input nodes is:

$$ {e}_t=f\left({e}_{t-1},{e}_{t-2},\dots, {e}_{t-n}\right)+{\varepsilon}_t, $$
(15)

where f is a nonlinear function determined by the neural network, and εt is the random error. Denoting the forecast from eq. 15 as \( {\hat{N}}_t \), the combined forecast will be:

$$ {\hat{y}}_t={\hat{L}}_t+{\hat{N}}_t, $$
(16)

where \( {\hat{y}}_t \) is the estimated value from the hybrid model, which is a combination of linear and nonlinear models. We used the inverse mean square forecast error (MSFE) ratio to determine the optimal weights for the hybrid models, as it is a widely used method with a robust theoretical background (Bates and Granger, 1969).

For M models, the combined h-step ahead forecast is:

$$ {\hat{y}}_{t+h}=\sum \limits_{m=1}^M{w}_{m,h,t}{\hat{y}}_{t+h,m}, $$
(17)
$$ {w}_{m,h,t}=\frac{{\left(1/{msfe}_{m,h,t}\right)}^k}{\sum_{j=1}^M{\left(1/{msfe}_{j,h,t}\right)}^k}, $$
(18)

where \( {\hat{y}}_{t+h,m} \) is the point forecast for h steps ahead at time t from model m. In summary, this hybrid method involves two steps. The first step is to employ ARIMA to model the linear part of the data. The second step is to apply an ANN to model the residuals obtained from the ARIMA model; these residuals carry information about the nonlinearity in the data. The ANN’s results can then be used as forecasts of the error terms of the ARIMA model. In this manner, the hybrid model combines the characteristics of the ARIMA and ANN models in modeling time series data, and employing it can improve forecast accuracy.
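The inverse-MSFE combination of eqs. 17–18 reduces to a few lines. This is a sketch in which the component models' forecasts and MSFE values are assumed to be already computed:

```python
import numpy as np

def inverse_msfe_weights(msfes, k=1):
    """Combination weights of eq. 18: each model's weight is proportional
    to (1 / MSFE)^k, so more accurate models receive larger weights."""
    inv = (1.0 / np.asarray(msfes, float)) ** k
    return inv / inv.sum()

def combined_forecast(forecasts, msfes, k=1):
    """Weighted combination of the point forecasts, eq. 17."""
    return float(inverse_msfe_weights(msfes, k) @ np.asarray(forecasts, float))
```

For example, two models with MSFEs of 1 and 3 receive weights 0.75 and 0.25, and two equally accurate models simply have their forecasts averaged.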

Forecast performance measures

The accuracy of forecasts indicates how well a forecasting model predicts the chosen variable. Different accuracy measures are used to validate the suitability of a model for a given data set. There are several accuracy measures in the literature, such as mean error (ME), mean absolute error (MAE), mean absolute percentage error (MAPE), mean squared error (MSE), and root mean squared error (RMSE). In this study, we used RMSE because it is one of the most appropriate methods for measuring forecasting accuracy for data on the same scale, and this criterion has been employed in several previous studies (Lu and Wu, 2011; Wang et al., 2011; Hyndman and Athanasopoulos, 2015; Makridakis et al., 2015). Also, Chai and Draxler (2014) suggested that RMSE is a suitable measure for models with normally distributed errors. The present study found that the errors in most of the models follow the normal distribution.

If Yt is the actual observation for time period t, and Ft is the forecast for the same period, then the error is defined as:

$$ {e}_t={Y}_t-{F}_t, $$
(19)
$$ MAE=\frac{1}{n}\sum \limits_{t=1}^n\left|{e}_t\right|, $$
(20)
$$ MPE=\frac{1}{n}\sum \limits_{t=1}^n{PE}_t, $$
(21)
$$ MAPE=\frac{1}{n}\sum \limits_{t=1}^n\left|{PE}_t\right|, $$
(22)

where

$$ {PE}_t=\left(\frac{Y_t-{F}_t}{Y_t}\right)\ast 100 $$
(23)

The mean squared error (MSE) is:

$$ MSE=\frac{1}{n}\sum \limits_{t=1}^n{e_t}^2, $$
(24)

and the root mean square error (RMSE) is:

$$ RMSE=\sqrt{\frac{1}{n}\sum \limits_{t=1}^n{e_t}^2}. $$
(25)
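The measures of eqs. 19–25 can be computed together as in the following sketch (the function name and dictionary layout are our own; MAPE assumes no actual value is zero):

```python
import numpy as np

def error_measures(actual, forecast):
    """MAE, MAPE, MSE, and RMSE from eqs. 19-25."""
    a = np.asarray(actual, float)
    f = np.asarray(forecast, float)
    e = a - f                 # eq. 19
    pe = e / a * 100          # percentage error, eq. 23
    mse = np.mean(e ** 2)     # eq. 24
    return {"MAE": np.mean(np.abs(e)),     # eq. 20
            "MAPE": np.mean(np.abs(pe)),   # eq. 22
            "MSE": mse,
            "RMSE": np.sqrt(mse)}          # eq. 25
```

For instance, actuals (100, 200) against forecasts (90, 220) give errors (10, −20), hence MAE = 15, MAPE = 10%, MSE = 250, and RMSE = √250 ≈ 15.81.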

Empirical results

Here, we present the empirical results, comprising descriptive statistics and the performance measures of various forecasting methods for stock returns in developed, emerging, and frontier markets.

Descriptive statistics of stock returns

Tables 1, 2, and 3 present the summary statistics (e.g., mean, standard deviation, skewness, kurtosis, JB statistic) for developed, emerging, and frontier stock market returns, respectively. From these tables, we can see that the mean returns in all markets are positive, indicating overall positive returns on investments during the period considered for this study. The kurtosis values of the return series of all the markets are observed to be greater than 3, indicating that all of the series are leptokurtic—i.e., they have thick tails, which is a common phenomenon in stock returns (Bouchaud and Potters, 2001; Humala, 2013; Mallikarjuna et al., 2017). The Jarque–Bera test showed that the series are non-normally distributed. Another key feature, from the Tsay (1989) test, is that the returns of all the markets are nonlinear.

Table 1 Descriptive Statistics for Developed Markets
Table 2 Descriptive Statistics for Emerging Markets
Table 3 Descriptive Statistics for Frontier Markets

Results of forecasting methods

Before applying the forecasting methods, we divided the data into a training set and a test set, using 80% of the data for training the models and the remaining 20% for testing them. To forecast returns using the ARIMA (p, d, q) model, it was necessary to check stationarity to obtain valid inferences. To test the stationarity of the returns series, we employed the augmented Dickey–Fuller (1979) and Phillips–Perron (1988) tests; the results showed that the returns of all of the markets were stationary. We determined the optimal lag lengths for the autoregressive (p) and moving average (q) components using the Akaike information criterion (AIC). We observed different AR and MA orders for different series and present them, along with RMSE values, in Tables 4, 5, and 6.

In the SETAR model, the series exhibited nonlinear trends, and we identified two regimes by the minimum AIC values; the fitted model was then used to forecast the returns of the markets. To forecast stock returns using the ANN model, we employed feedforward neural networks, since many studies have shown that they fit asset return data well (Zhang, 2003; Qiu et al., 2016).

We employed a recurrent singular spectrum analysis (RSSA) model to forecast the returns after decomposing and reconstructing the original returns series, following the steps involved in SSA: embedding, singular value decomposition, grouping, and diagonal averaging. For the hybrid model, which combines ARIMA and ANN, we fit the model by employing the widely used inverse mean square forecast error (MSFE) ratio (Bates and Granger, 1969; Winkler and Makridakis, 1983) to assign the optimal weights to the component models. Tables 4, 5, and 6 present the RMSE values on the test sets for all techniques (i.e., ARIMA, SETAR, SSA, ANN, and HM) for developed, emerging, and frontier markets, respectively. The model with the lowest RMSE was chosen as the most appropriate model.
In addition, we tested the significance of the RMSE differences using the Diebold–Mariano (1995) test and found that the RMSEs of all of the models were significant, except for Japan, South Africa, and Sri Lanka.
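The chronological 80/20 split described above can be sketched as follows; note that, unlike in cross-sectional settings, a time series split must not shuffle the observations (the function name is our own):

```python
def train_test_split_series(y, train_frac=0.8):
    """Chronological split: the first train_frac of observations form the
    training set and the remainder the test set (no shuffling, since the
    observations are ordered in time)."""
    n = int(len(y) * train_frac)
    return y[:n], y[n:]

# Example: 10 ordered observations -> 8 for training, 2 for testing.
train, test = train_test_split_series(list(range(10)))
```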

Table 4 RMSE Values of the Forecasting Models for Developed Markets
Table 5 RMSE Values of the Forecasting Models for Emerging Markets
Table 6 RMSE Values of the Forecasting Models for Frontier Markets

From Tables 4, 5, and 6, we can observe that no single method performed uniformly for all markets. However, the nonlinear model (i.e., SETAR) performed better than the other models, producing optimal forecasts for 10 markets (i.e., four developed, four emerging, and two frontier markets). This result contrasts with Guidolin et al. (2009). In the case of developed markets, the SETAR model produced optimal forecasts for four of the nine markets (Australia, France, Japan, and Switzerland). The ARIMA model was optimal for Canada, Germany, and the UK, and the HM model was optimal for South Korea and the US. Thus, we can say that nonlinear models are more suitable for developed markets. Meanwhile, ANN and SSA models are not at all useful for developed markets since they did not provide any optimal forecasts.

For emerging markets, the SETAR model was found to be appropriate for four markets (Egypt, Mexico, Russia, Thailand). HM models were appropriate for three markets (China, India, and South Africa) and ARIMA models for two (Brazil and Turkey). The ANN model was appropriate for only one market (Indonesia), while the SSA model was not suitable for any emerging market. Though no single model was suitable for all emerging markets, the SETAR and HM models were relatively more useful. Regarding frontier markets, SETAR was suitable for Argentina and Kenya, ARIMA for Estonia and Sri Lanka, and SSA for Tunisia. The ANN and HM models were not appropriate for any market.

Out of twenty-four stock market indices, the SETAR model produced optimal forecasts for ten, ARIMA for seven, HM models for five, and ANN and SSA models for one market each. From these results, we can observe that nonlinear models are more useful for developed, emerging, and frontier markets alike. Another interesting observation is that the AI and frequency domain models were found to be appropriate only for one market each. Thus, we can say that, even with advancements in AI and frequency domain models, traditional statistical models have not become obsolete; they are still useful and in fact better than AI and frequency domain models for forecasting financial time series data.

Summary and conclusions

Over the years, stock markets have become alternative avenues for surplus funds among individual and institutional investors, especially following globalization and the integration of world financial markets. Given the inherent risk, uncertainty, and dynamic nature of stock markets, accurately forecasting stock returns can help to minimize investors’ risks. Thus, forecasting techniques can help with better investment decision making.

This study considered daily data for stock market returns during the period 1 January 2000 to 30 December 2018 to compare forecasting techniques (i.e., ARIMA, SETAR, ANN, SSA, and HM models) representing linear, nonlinear, AI, frequency domain, and hybrid methods. We took the stock indices of 24 stock markets in three market categories (nine developed, ten emerging, and five frontier) to find suitable forecasting techniques for each category. The results showed that no single forecasting technique provided uniformly optimal forecasting for all markets. However, SETAR performed better for ten markets, ARIMA for seven, HM for five, and ANN and SSA for one market each. SETAR and ARIMA techniques can thus be considered the clear winners in forecasting stock market returns for developed, emerging, and frontier markets, as these two methods provided optimal forecasts for seventeen of the twenty-four markets.