Modelling Joint Behaviour of Asset Prices Using Stochastic Correlation

Association or interdependence of two stock prices is analyzed, and selection criteria for a suitable model are developed in the present paper. The association is generated by stochastic correlation, given by a stochastic differential equation (SDE), creating interdependent Wiener processes. These, in turn, drive the SDEs in the Heston model for stock prices. To choose from possible stochastic correlation models, two goodness-of-fit procedures are proposed, based on the copula of the Wiener increments. One uses the confidence domain for the centered Kendall function, and the other relies on strong and weak tail dependence. The constant correlation model and two different stochastic correlation models, given by Jacobi and hyperbolic tangent transformation of Ornstein-Uhlenbeck (HtanOU) processes, are compared by analyzing daily close prices of Apple and Microsoft stocks. The constant correlation, i.e., the Gaussian copula model, is unanimously rejected by the methods, while both of the other two are acceptable at a 95% confidence level. The analysis also reveals that even for Wiener processes, stochastic correlation can create tail dependence, unlike constant correlation, which results in a multivariate normal distribution and hence zero tail dependence. Hence, models with stochastic correlation are suitable to describe more dangerous situations in terms of correlation risk.


Introduction
Investors build and manage their portfolios, comprising numerous assets, taking into account the dependence structure between asset returns. In particular, the correlation between asset returns is a key ingredient for trading, portfolio optimization, and risk management. It is very often considered to reflect a structural correlation between the fundamentals of asset returns and hence assumed not to vary much in time. While this may be true in the long run, instantaneous, temporary, or short-lived effects may have a variable influence on the share prices of companies otherwise belonging to the same sector and having similar fundamentals. On the other hand, fire sales or, more generally, the sudden deleveraging of large financial portfolios may create unexpected spikes in volatility as well as in correlations of asset returns. The reason is that large investors build their portfolios with the use of indices, exchange-traded funds (ETFs), and assets traded on large national exchanges; hence, their sales or purchases affect those assets simultaneously (Cont and Wagalath 2016). This creates another key feature of the co-movement of asset returns: they are more synchronized at large absolute values than at usual, frequent values. This phenomenon is called tail dependence, and in its presence, Pearson correlations, measuring linear dependence only, are not sufficient for describing the interdependence or association of these returns (McNeil et al. 2005). In our perception, the association should be modeled as an integral part of the joint dynamics of the time propagation of the returns. Therefore, although copulas (and, for their part, dynamic copulas (Cherubini et al. 2011) as well) may represent the complete probabilistic association, they, again, are not completely satisfactory for describing the association of returns.
In summary, intense trading may create an association of asset prices that is highly non-linear, time-dependent, and random, pointing to a stochastic process as a model for it.
Similar to the case of volatilities, there are two conceptually different ways to obtain inference on temporally changing correlation. Estimation is straightforward from the historical asset price series, e.g., by windowing techniques. We use the term historical correlation instead of "realized" correlation because, in some papers, the latter term is conflated with equicorrelation, which has the restricted meaning of containing only one time-dependent component multiplied by equal constants for all assets in question. In the spirit of implied volatility, another approach draws on multi-asset or index-based derivative prices quoted in the market, creating the implied correlation (Buss et al. 2017). In the present paper, we confine ourselves to the historical approach only, noting that historical correlation should be used very carefully, since correlation might generally be more unstable than volatility. Market data analysis (Driessen et al. 2006) reveals that implied correlation, reflecting market expectations, deviates from realized correlation, creating a non-zero correlation risk premium (Buraschi et al. 2006).
Earlier works present statistical evidence that correlation should not, indeed, be assumed constant in time; see, e.g., van Emmerich (2006). Furthermore, there exist several approaches which generalize the constant correlation to be stochastic, such as the dynamic conditional correlation model of Engle (2002), local correlation models (Langnau 2009), and the Wishart autoregressive process proposed by Gourieroux et al. (2009). We suggest considering the stochastic differential equation (SDE) description of asset prices together with a time-varying and random association described by the quadratic covariation of the price-SDE-driving Wiener processes. The interdependence of these underlying Wiener processes may, in turn, generate the association of prices observable in the market. The name of this approach was coined as stochastic correlation in the pioneering work of Catherine van Emmerich (2006) and used in pricing several multi-asset derivatives like quantos (Ma 2009a), spreads (Ma 2009b; Kolpakov 2014; Merkulov 2017), and multi-interest-rate derivatives (Ma 2010). Those papers suggest either the Jacobi process or a suitably transformed Ornstein-Uhlenbeck process as a stochastic correlation model (Schöbel and Zhu 1999). Further models may be found in Carr (2017) and Teng et al. (2016).
The question of what this dependence structure is like is studied in Markus and Kumar (2019). It is a crucial further step to test whether it is realistic in an application or not. None of the papers mentioned by, or known to, the present authors put stochastic correlation models to the test on real, observed, historical asset price data. Option pricing formulas were derived supposing, but never testing, the underlying association of the assets. Although comparison with market option prices validated those formulas, they were tested on few data, spanning relatively short time intervals. Our study aims to fill this gap by testing these models' goodness of fit on historical stock price data. The stochastic correlation models are compared in terms of quantile curves of the centered Kendall functions of their copulas. Here we use these quantile curves as goodness-of-fit measures for model selection purposes. Also, the tail dependence created by the model, both in the strong (Joe 1990) and weak (Coles et al. 1999) sense, is an important measure of model fit. As a practical realization of the procedure, 3271 consecutive daily close prices of Apple and Microsoft stocks are analyzed. First, the Heston model is fitted to them. The association of the resulting residuals, predicting the generating Wiener processes, is modeled by constant, Jacobi, and hyperbolic tangent transformed Ornstein-Uhlenbeck processes. These models' fits are then compared by the corresponding K-functions' quantile curves and the created tail dependence. While the constant correlation (Gaussian copula) proves to be inadequate, the other two models fit well and seem suitable to describe the association and hence correlation risk. The suggested model may serve well in multi-stock option pricing as well.

Stochastically Correlated Wiener Processes
Stochastic differential equations are frequently used to model data series such as interest rates, asset prices, exchange rates, and so on. For diffusion processes described by SDEs, the dependence between the series most often originates in the correlated Brownian motions driving the equations. Suppose we are given a finite time horizon [0, T] and a filtered probability space (Ω, A, F_t, P) with a filtration satisfying the usual conditions. This is, e.g., the case when the filtration is the augmented filtration of a (multidimensional) Brownian motion. Suppose we are given two independent Brownian motions W_0(t), W_1(t), adapted to the filtration F_t (i.e., the filtration is rich enough). Considering two adapted correlated Brownian motions W_1(t) and W_2(t) with corr(W_1(t), W_2(t)) = ρ, their quadratic covariation equals the covariance and is symbolically written as

d⟨W_1, W_2⟩(t) = ρ dt. (1)

Instead of a constant −1 ≤ ρ ≤ 1, let us be given an adapted process ρ(t) with |ρ(t)| ≤ 1. Since ρ(t) is bounded and the time horizon is finite, ρ(t) ∈ L²(Ω × [0, T]). By using ρ(t), we define the vector process W(t) = (W_1(t), W_2(t))^⊤ as

W_2(t) = ∫_0^t ρ(s) dW_1(s) + ∫_0^t √(1 − ρ²(s)) dW_0(s). (2)

Here the second coordinate W_2(t) is clearly adapted to F_t, and as a sum of two stochastic integrals w.r.t. Brownian motions of L²(Ω × [0, T]) processes, it is a continuous martingale. As a direct consequence of Itō's formula, the quadratic (co)variation of W(t) is

d⟨W⟩(t) = [ 1, ρ(t); ρ(t), 1 ] dt, (3)

and in particular, the quadratic variation of W_2(t) is t. Being an adapted continuous martingale with quadratic variation t, W_2(t) is itself a Brownian motion, by virtue of Lévy's theorem (Øksendal 2000). Further, the quadratic covariation of W_1(t) and W_2(t) is

d⟨W_1, W_2⟩(t) = ρ(t) dt. (4)

In analogy to the constant ρ in Eq. 1, we shall call the process ρ(t) the stochastic correlation of W_1(t) and W_2(t). The Pearson correlation is

corr(W_1(t), W_2(t)) = (1/t) E ∫_0^t ρ(s) ds. (5)

In particular, corr(W_1(t), W_2(t)) = R when ρ(t) is a stationary ergodic process with mean value R.
By the ergodic theorem, (1/t) ∫_0^t ρ(s) ds → R almost surely when t is large, indicating that the long-term behaviour of stochastically correlated Brownian motions with stationary and ergodic stochastic correlation becomes similar to that of Brownian motions with constant correlation. Therefore, such a model choice is interesting primarily in the finite (short) time horizon context.
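The construction above is easy to simulate. The following sketch (Python with NumPy; the function name and parameters are our own illustration, not taken from the paper) builds the increments of W_2 from those of W_1 and the independent W_0 by an Euler approximation of the stochastic integrals defining W_2(t), for a given path of ρ(t):

```python
import numpy as np

def correlated_wiener_increments(rho_path, dt, rng):
    """Generate increments of the pair (W1, W2), where W2 is built from W1
    and an independent Brownian motion W0 via the stochastic correlation
    rho(t). `rho_path` holds rho evaluated on the time grid."""
    n = len(rho_path)
    dW1 = rng.normal(0.0, np.sqrt(dt), n)   # increments of W1
    dW0 = rng.normal(0.0, np.sqrt(dt), n)   # increments of the independent W0
    # Euler approximation of the stochastic integrals defining W2
    dW2 = rho_path * dW1 + np.sqrt(1.0 - rho_path**2) * dW0
    return dW1, dW2

rng = np.random.default_rng(1)
rho = np.full(250_000, 0.4)                 # constant rho as a sanity check
dW1, dW2 = correlated_wiener_increments(rho, dt=1 / 250, rng=rng)
print(np.corrcoef(dW1, dW2)[0, 1])          # close to 0.4
```

With a constant ρ ≡ 0.4 the sample correlation of the increments recovers the input value, a quick sanity check that the construction reproduces Eq. 1 in the constant case.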

Models of Stochastic Correlation
In this paper, we consider two basic approaches to modeling the stochastic correlation. Suppose that, beyond W_0(t) and W_1(t), we are also given a third Brownian motion Z(t), adapted to the filtration F_t (i.e., the filtration is rich enough). Note that we do not assume the independence of Z(t) from W_0(t) and W_1(t).
Consider first a bounded diffusion process, parametrised so as to take values within the interval [−1, 1], since this requirement must hold for ρ(t). The best-known diffusion process satisfying this requirement is the properly parametrised Jacobi process. We refer the reader to Gouriéroux and Valéry (2004) for more details, noting that the infinitesimal generator of the process was studied much earlier in the work of Karlin and McGregor (1960). There are various ways to specify a Jacobi process, and we do it as the unique strong solution of the SDE

dρ(t) = k (R − ρ(t)) dt + σ √((ρ(t) − R_−)(R_+ − ρ(t))) dZ(t), (6)

with an initial condition ρ(0) = ρ_0. Here k represents the mean-reversion parameter, R the mean of the process, and σ the diffusion coefficient. The process is known to be geometrically ergodic, is centered on the equilibrium value R, and has two reflecting boundaries R_− and R_+ (Demni and Zani 2009). When R_− = 0 and R_+ = 1, the term under the square root reduces to ρ(t) − ρ(t)², and the process takes values between 0 and 1. This is how we shall use it in our application, while the choice R_− = −1 and R_+ = 1 puts 1 − ρ(t)² under the square root and results in values between −1 and 1. We choose ρ_0 in the initial condition so as to guarantee stationarity; in particular, Eρ_0 = R. The marginal distribution of a stationary Jacobi process with ρ(t) − ρ(t)² under the square root is beta, with parameters 2kR/σ² and 2k(1 − R)/σ².

As for the second approach, a non-bounded diffusion X(t), typically an Ornstein-Uhlenbeck process, is mapped into the [0, 1] or [−1, 1] interval by using an appropriate transformation with a smooth real function g(x) as ρ(t) = g(X(t)). Some of the functions used previously in the literature are the normalized inverse tangent (van Emmerich 2006), the normal cdf (Carr 2017), and the hyperbolic tangent (Teng et al. 2016). None of the mentioned papers puts the model to the test of historical asset prices directly; rather, they aim to obtain derivative pricing formulas and the like.
Markus and Kumar (2019) compares the constant correlation, the Jacobi, and these latter mentioned theoretical models in terms of the centered Kendall functions of their randomly changing copula. In the present paper, the performance of models with the Jacobi process and the Hyperbolic tangent transformation of an Ornstein-Uhlenbeck process (referred to as the HtanOU model) is tested on historical asset price data and compared to the constant correlation model.
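For concreteness, the two stochastic correlation models can be simulated by a simple Euler scheme. The sketch below (Python; function names and parameter values are illustrative assumptions, not the fitted values of the paper) generates a Jacobi path on [0, 1] and an HtanOU path on (−1, 1):

```python
import numpy as np

def simulate_jacobi(k, R, sigma, rho0, dt, n, rng):
    """Euler scheme for the Jacobi SDE on [0, 1]:
    d rho = k (R - rho) dt + sigma * sqrt(rho - rho^2) dZ."""
    rho = np.empty(n)
    rho[0] = rho0
    for i in range(1, n):
        drift = k * (R - rho[i - 1]) * dt
        diff = sigma * np.sqrt(max(rho[i - 1] - rho[i - 1] ** 2, 0.0))
        rho[i] = rho[i - 1] + drift + diff * rng.normal(0.0, np.sqrt(dt))
        rho[i] = min(max(rho[i], 0.0), 1.0)   # keep the path inside the boundaries
    return rho

def simulate_htanou(k, theta, sigma, x0, dt, n, rng):
    """Hyperbolic tangent transform of an Ornstein-Uhlenbeck process:
    dX = k (theta - X) dt + sigma dZ,  rho = tanh(X)."""
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = x[i - 1] + k * (theta - x[i - 1]) * dt \
               + sigma * rng.normal(0.0, np.sqrt(dt))
    return np.tanh(x)
```

For a long stationary Jacobi path the sample mean settles near the equilibrium value R, while the HtanOU path stays strictly inside (−1, 1) by construction.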
To introduce jumps into correlation is also a possibility as, e.g., Itkin (2017) uses transformed Lévy process in modeling stochastic correlation, but we will not consider this approach in the present paper.

Modeling a Single Stock Price; the Heston Model
In order to analyze the association of the generating Wiener processes for observed asset prices, a proper model is needed, and its discretized version fitted to the price data. The time-dependent and random nature of volatility has long been noted in quantitative analyses of financial assets, and the Heston model (Heston 1993) has become one of the industry standards for incorporating this property into the description. In the model, the asset price follows a geometric Brownian motion with stochastic variance, while the spot variance follows a CIR (square-root) process. The general form of the model is given as follows:

dS(t) = μ S(t) dt + √v(t) S(t) dW_s(t), (8)
dv(t) = κ (θ − v(t)) dt + σ_v √v(t) dW_v(t), (9)

where W_s(t) and W_v(t) are Wiener processes with a constant correlation ρ_H. In our setup, when two stock prices are considered, the price-driving Wiener processes, i.e., the corresponding W_s(t)-s, are supposed to be stochastically correlated, creating the association of the prices.
In order to estimate the parameters from a discrete sample and obtain residual processes predicting the Wiener generators, we consider the Euler-Maruyama scheme to obtain a discretized version of the model. First, by Itō's lemma, we have

d log S(t) = (μ − v(t)/2) dt + √v(t) dW_s(t). (10)

Consider the log-returns within a Δ period of time, Y(t) = log S(t + Δ) − log S(t); then the approximations of Eqs. 10 and 9 can be written in the discrete setup as

Y(t) = (μ − v(t)/2) Δ + √(v(t) Δ) ε_s(t), (11)
v(t + Δ) = v(t) + κ (θ − v(t)) Δ + σ_v √(v(t) Δ) ε_v(t), (12)

where ε_s(t) and ε_v(t) are standard normal variables with corr(ε_s(t), ε_v(t)) = ρ_H. Equation 11 can be further rearranged to obtain an expression for ε_s(t):

ε_s(t) = (Y(t) − (μ − v(t)/2) Δ) / √(v(t) Δ). (14)

Now, the aim is to estimate the parameters of the Heston model from n + 1 discrete, Δ-period time-equidistant observations of the stock price S(t) in the time interval [0, T], i.e., from the sample S(0), S(Δ), S(2Δ), ..., S(nΔ), where n · Δ = T. In this model, the parameters to be estimated are μ (expected return of the asset price), κ (mean-reversion speed), θ (long-run mean of the variance), σ_v (volatility of variance), and ρ_H (correlation between the asset price and its variance process).
The estimation of the Heston model's parameters is not straightforward, and various methods have been elaborated for it. Li, Wells, and Yu (2008) suggested a sophisticated MCMC algorithm to obtain the estimates, and more recently, Cape et al. (2015) derived the necessary prior distributions for the effective implementation of the algorithm. It is mentioned in Cape et al. (2015) that the estimation of the variance process is not perfect in the MCMC procedure. This is completely in line with our experience. Beyond that, in the application described in the next section, the obtained residuals predicting the Wiener processes do not fit normal distributions; none of the known normality tests accepts them. Hence, we suggest a minor modification to the procedure. Originally, the estimated volatility process is updated in the MCMC algorithm by a normally distributed random variable with mean 0 and a small variance. We suggest changing this distribution to a gamma one. Our motivation is that the volatility process satisfies a CIR equation, known to have a stationary solution with gamma marginal distribution, and the gamma family serves as its own conjugate here. By fine-tuning the parameters of the applied gamma update, a delicate balance can be achieved that brings the distributions of the residuals of both the price-driving equation (let us call them price-residuals for short) and the variance equation equally close to normality. As a result of the Heston model fit, the price-residuals are obtained from the observed stock price series, opening the way to studying the dependence structure of the price-driving Wiener processes.
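Once parameter estimates and a predicted variance path are at hand (e.g., from the MCMC fit), backing out the price-residuals amounts to rearranging the discretized return equation. The sketch below is a hypothetical helper, not the authors' code; it assumes the variance path is already aligned with the price grid:

```python
import numpy as np

def price_residuals(prices, v, mu, dt):
    """Back out the price-residuals eps_s(t) from observed prices and an
    estimated variance path v(t) by rearranging the discretized return
    equation:
        eps_s(t) = (Y(t) - (mu - v(t)/2) * dt) / sqrt(v(t) * dt)."""
    y = np.diff(np.log(prices))        # log-returns Y(t)
    v = np.asarray(v)[: len(y)]        # variance aligned with the returns
    return (y - (mu - v / 2.0) * dt) / np.sqrt(v * dt)
```

As a check, feeding in prices simulated from the discretized return equation with a known variance path returns exactly the normal innovations that generated them.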

Stochastic Correlation Model: Goodness of Fit by Quantile Curves of K-functions
Having fitted the models to the prices, the price-residuals, as predictors of the price-driving Wiener processes, can be analyzed further. We stress here that, as the Δ-period Wiener increment pairs have the same two-dimensional distribution, the differenced price-residual pairs can be regarded as a sample from that distribution. In the case of two stocks, the first aim is to model their time-dependent stochastic correlation. To this end, "local" correlations of the two price-residual processes are generated by taking a backward-looking window around a time point t. Looking through it at the two residuals, the Pearson correlation is estimated in the usual way, and the value is allocated to t. Running the window in time through the price-residuals then yields the time-dependent estimated stochastic correlation. Each of the mentioned stochastic correlation models, HtanOU, Jacobi, and constant, is fitted to this estimated correlation process.
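The windowed estimation can be sketched as follows (an illustrative Python helper; the backward-looking window ending at t, with the value allocated to t, follows the description above):

```python
import numpy as np

def rolling_correlation(x, y, window):
    """Estimate the 'local' correlation process: Pearson correlation of the
    two residual series over a backward-looking window ending at t."""
    n = len(x)
    out = np.full(n, np.nan)           # undefined before a full window exists
    for t in range(window - 1, n):
        xs = x[t - window + 1 : t + 1]
        ys = y[t - window + 1 : t + 1]
        out[t] = np.corrcoef(xs, ys)[0, 1]
    return out
```

On a constant-correlation Gaussian sample the window estimates fluctuate around the true value, which is the behaviour the window-size selection below has to trade off against responsiveness.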
The choice of the window length is crucial in this procedure, and it can be made by using a "proxy" t-copula. First, we fit a t-copula to the differenced price-residuals. Then, having the estimated correlation process, we can simulate stochastically correlated Wiener process pairs with it, difference them, and fit a t-copula again to these pairs. Although the t-copula fit is far from perfect, the window size which best preserves the parameter of the t-copula is likely to be the best choice.
Once we have an estimated stochastic correlation process, it is pretty straightforward to fit the mentioned models. The fitting of a Jacobi process can be carried out using the estimator described in Gouriéroux and Valéry (2004). To obtain the HtanOU model parameters, the estimated stochastic correlation process is back-transformed first, and then the usual least squares estimator of the Ornstein-Uhlenbeck process is applied. The constant correlation model has only one parameter, the constant, which is chosen as the average of the estimated stochastic correlation process.
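For the HtanOU model, the back-transformation is x(t) = atanh(ρ(t)), after which the Ornstein-Uhlenbeck parameters can be recovered from the exact AR(1) structure of the discretely observed process. The following is a least-squares sketch (a hypothetical helper, assuming equidistant observations; it is one standard way to realize the estimator mentioned above):

```python
import numpy as np

def fit_ou_least_squares(x, dt):
    """Least-squares (AR(1)) estimator of the OU parameters from the
    back-transformed correlation process x = atanh(rho):
        x_{t+dt} = theta + (x_t - theta) * exp(-k dt) + noise,
    where the noise variance is sigma^2 (1 - exp(-2 k dt)) / (2 k)."""
    x0, x1 = x[:-1], x[1:]
    b, a = np.polyfit(x0, x1, 1)        # slope b = exp(-k dt), intercept a
    k = -np.log(b) / dt
    theta = a / (1.0 - b)
    resid = x1 - (a + b * x0)
    sigma = np.std(resid, ddof=2) * np.sqrt(2.0 * k / (1.0 - b**2))
    return k, theta, sigma
```

Applied to a long path simulated with known parameters, the estimator recovers the mean-reversion speed, mean, and diffusion coefficient to within sampling error.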
Assessment of the goodness-of-fit of the mentioned three stochastic correlation models is possible based on the copulas they induce and their Kendall functions, or K-functions for short. Following Markus and Kumar (2019), the simulated quantile curves of the centered K-functions are determined. The difference between the empirical K-function and the averaged simulated K-functions is tested against the curves. Remark here that throughout the paper, the notion of the empirical copula is used in the rigorous sense of Ghoudi and Rémillard (2004), Section 3.2, or more specifically and explicitly, Section 3.4. Correspondingly, the empirical K-function is the Kendall function of the empirical copula. We recall, in short, the procedure described in detail in Markus and Kumar (2019).
In order to create the empirical quantile curves, 10,000 pairs of Wiener increment paths are simulated from each of the Jacobi, HtanOU, and constant stochastic correlation models. In each simulation, the empirical copula of the Wiener increment pairs and, subsequently, their empirical K-function are computed. These empirical K-functions are then centered model-wise by their average value, taken over the 10,000 simulations from the same model. At every point where these centered K-functions are evaluated, the 2.5% and 97.5% quantiles of the 10,000 values are computed, giving pointwise 95% confidence bounds for every model. By doing so at every point, a closed curve, and within it a domain, is obtained for every model. These are used only as reference curves and domains, and a constant multiplier is determined for them so that exactly 5% of the centered K-functions leave the domain. These curves and domains are considered the confidence curves and domains of the corresponding models. As has been said, the fit is good, or statistically acceptable at a 95% level, if the difference between the empirical K-function induced by the price-residuals and the simulated model-average K-function remains within the domain.
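The empirical K-function used throughout can be computed directly from a sample: evaluate the empirical copula at each observation to get the pseudo-observations V_i, and take their empirical distribution function. A minimal sketch (the O(n²) loop is adequate for the sample sizes above; the names are illustrative):

```python
import numpy as np

def empirical_k_function(x, y, t_grid):
    """Empirical Kendall (K-)function of the sample (x, y): the empirical
    distribution function of the pseudo-observations
        V_i = (1/n) * #{j : x_j <= x_i and y_j <= y_i}."""
    n = len(x)
    v = np.empty(n)
    for i in range(n):
        v[i] = np.mean((x <= x[i]) & (y <= y[i]))
    return np.array([np.mean(v <= t) for t in t_grid])
```

For a perfectly comonotone sample the pseudo-observations are the uniform ranks, so the empirical K-function is close to the identity, a convenient sanity check.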

Tail Dependence
Given two random variables, their tail dependence or asymptotic dependence measures the association between their extreme values and depends only on their copula. Consider a random vector (X, Y) with marginals F_X and F_Y. The coefficient of lower tail dependence is then defined as the limit of the conditional probability that X is less than or equal to the quantile F_X^{-1}(u), provided that Y is less than or equal to F_Y^{-1}(u), as u approaches 0, i.e.,

λ_L = lim_{u↓0} P(X ≤ F_X^{-1}(u) | Y ≤ F_Y^{-1}(u)). (15)
In the context of financial risk management, the lower tail usually corresponds to losses and is hence of utmost importance, because the clustering of high-severity losses can have a devastating effect on the well-being of firms; it is thus pivotal in risk analysis. Gaussian-copula-based credit risk models like that of Li (2000) have faced harsh criticism in the past on the grounds of having zero, i.e., no, tail dependence (Shibuya 1960).
For bivariate random vectors with dependence structure given by the copula function C : [0, 1]² → [0, 1], Joe (1990) suggested measuring the lower tail dependence as

λ_L = lim_{u↓0} C(u, u) / u. (16)

Non-zero values of λ_L suggest lower tail dependence in C. In what follows, we shall call this measure the strong tail dependence. The introduction of the distinctive adjective "strong" is motivated by the fact that when the limit (16) is zero, it can be useful to rely on a more delicate index, defined in Coles et al. (1999) and called the lower weak tail dependence λ̃_L ∈ [−1, 1], which is given as

λ̃_L = lim_{u↓0} 2 log u / log C(u, u) − 1. (17)

In Fischer and Klein (2007) one may find more details about the intuition behind this measure of extreme co-movements. Similarly, as u ↑ 1 in Eqs. 16 and 17, we may have the upper strong and weak tail dependences λ_U, λ̃_U, but we shall only consider the lower dependences in this paper, so we will not mention the upper ones any further. The empirical counterpart of the strong lower tail dependence for a given sample size n is denoted as L_n(u) and can be given as

L_n(u) = (1/(n u)) Σ_{i=1}^n 1{ F̂_X(X_i) ≤ u, F̂_Y(Y_i) ≤ u }, (18)

where F̂ is the empirical distribution function of the variable written in its index. With the same notation, the weak lower tail dependence is estimated as

L^w_n(u) = 2 log u / log( (1/n) Σ_{i=1}^n 1{ F̂_X(X_i) ≤ u, F̂_Y(Y_i) ≤ u } ) − 1. (19)

As the mentioned Wiener increment pairs sample the same two-dimensional distribution, their strong and weak tail dependence can be estimated by these formulas.
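Both empirical estimators can be sketched in a few lines (an illustrative rank-based Python helper; for comonotone data both estimates equal 1, while for independent data the weak estimate is near 0):

```python
import numpy as np

def lower_tail_dependence(x, y, u):
    """Empirical strong and weak lower tail dependence at level u, using
    the empirical distribution functions (ranks) of x and y."""
    n = len(x)
    fx = np.argsort(np.argsort(x)) + 1        # ranks 1..n -> empirical cdf values
    fy = np.argsort(np.argsort(y)) + 1
    joint = np.mean((fx / n <= u) & (fy / n <= u))   # empirical C(u, u)
    strong = joint / u
    weak = 2.0 * np.log(u) / np.log(joint) - 1.0 if joint > 0 else -1.0
    return strong, weak
```

In practice, as discussed below, these quantities are averaged over a few small u values rather than evaluated at a single level, to balance closeness to the limit against estimation noise.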

Pseudo-algorithm of the Methodology
This section gives a high-level overview of the methodology explained in Sections 4, 5, and 6.
1. From the given market data (i.e., closing prices of the two stocks), calculate the log-returns and estimate the parameters of the discretized Heston model by MCMC, as explained in Section 4. With the estimated parameters, the price residuals of each stock can be estimated by using Eq. 14.
2. Next, test the normality and independence of the above price-residual processes using any standard statistical tests (Anderson-Darling, Shapiro-Wilk, etc.). Having passed the tests, the price-residual pairs can be regarded as a sample of Wiener increment pairs, as explained in Section 5.
3. Estimate the 'local' correlations of the two price-residual processes by using a sliding-window technique. The reasoning behind the choice of an appropriate window size is explained in Section 5. This procedure gives the estimated stochastic correlation process.
4. Fit the three models, HtanOU, Jacobi, and constant, to the estimated stochastic correlation process, as explained in Section 5.
5. Simulate the desired number of paths of the stochastic correlation process and assess each model's fit based on the copulas and K-functions they induce, as explained in Section 5. For the in-depth details of this step, follow the methodology presented in Markus and Kumar (2019).
6. Simulate the price-driving Wiener process increment pairs, associated according to the simulated stochastic correlations, and compute the strong and weak tail dependence as defined in Section 6. Compare them to the tail dependence of the price-residual processes.

Data of Two Stock Prices; Fitting the Model
In the present study, we analyze the association of the prices of two stocks, Microsoft (MSFT) and Apple (AAPL). The data span 13 years, from January 05, 2006 to January 04, 2019, and consist of 3271 daily close prices. First, we fit Heston stochastic volatility models to both the MSFT and AAPL stock price data, using the MCMC algorithm mentioned in Section 4, with the gamma-distributed update (shape = 5.6, rate = 236 for both Apple and Microsoft) for the variance process. From the fitted Heston models, two price-residuals, ε_{A,s} and ε_{M,s}, are obtained for the two stocks. Kolmogorov-Smirnov, Anderson-Darling, and Shapiro-Wilk tests detect no deviation from the normal distribution for any of the residuals. While serial dependence of the price-residuals is rejected, e.g., by the powerful Brock-Dechert-Scheinkman test and some other simpler independence tests (difference-sign, turning-point, or rank tests), the volatility-residuals show a minor autocorrelation for the first three lags. The obtained variance process nicely follows the empirical one obtained by a windowed estimation. In our experience, however, the use of the normal updates suggested in Cape et al. (2015) for the volatility process seriously distorts the empirical variance processes for the AAPL and MSFT data. As a consequence, the generated residual processes of AAPL and MSFT are no longer normal and independent, although the estimated model parameters do not change substantially. Since we could not observe a similar phenomenon when, instead of AAPL and MSFT, the procedure is applied to "artificial" prices simulated from the Heston model, we conclude that the need for the gamma update may lie in the fact that the observed two stock data do not perfectly follow Heston models. The two price-residuals are regarded as predictors of the price-governing Wiener processes.
The pairs of their increments (ε_{A,s}(t + Δ) − ε_{A,s}(t), ε_{M,s}(t + Δ) − ε_{M,s}(t)), where Δ is the discretisation time-length, can be regarded for every t = 1, 2, ..., 3270 as a predicted sample from the same two-dimensional distribution, i.e., the joint distribution of the two Wiener process increments in Δ time. Hence, this quasi-sample makes it possible to compute the empirical copula of the predicted Wiener process increment pair, its K-function, and ultimately its strong and weak tail dependences.

Evaluating Model Goodness
The association of the stock price processes is inherited from the stochastically correlated Wiener processes governing the SDEs. As a result, the association is random for both the prices and the Wiener processes. Therefore, the "observed" snapshot of the association, represented by the empirical copula of the price-residuals, i.e., of the predicted Wiener increments, is not a fixed model characteristic, but only one realization, a "path", of the randomly changing structure. Consequently, the question of model goodness-of-fit should not be posed as "which model creates a copula closest to the empirical one", but rather as "which model creates sufficiently rich variability of copulas to accommodate the observed sample conveniently and reliably".
We now have to check whether the obtained correlations, copulas, and tail dependence are coherent with those obtained from the models. For that purpose, the Δ-length increments of 3271·Δ-long stochastically correlated Wiener process pairs are simulated 10,000 times from each of the fitted HtanOU, Jacobi, and constant stochastic correlation models.
Consider first the correlations, as presented in Table 1. Even though the price paths correlate at 0.9104, the log-returns do so only at 0.456, due to the elimination of the exponential growth. The correlations, averaged over the simulated synthetic log-returns in both stochastic correlation models (third column), fit this value well. However, the constant correlation model shows some bias in this respect. The standard deviations of these correlations (4th column) are comparable for the HtanOU and Jacobi models, the first being slightly larger, while markedly lower for the constant correlation model, showing less variability in the dependence of the created log-returns. This variability is a consequence of the random association rather than any error or noise in the model and reflects the randomly changing environment in which the prices are set. So, it is a strength rather than a weakness of the model.
The mean of the estimated (from the price-residuals) stochastic correlation process is 0.386, and the means of the 10,000 simulated stochastic correlation processes (7th column) coincide stunningly well with this value. The variability of these values is very low in the simulations (8th column), showing their robustness in this respect. The standard deviations of the same processes are considerable, reflecting the random nature of the temporally localized association structure. (Table 1 reports the mean and standard deviation of the estimated stochastic correlation process of the price-residuals, together with the average and standard deviation of the means and standard deviations of the 10,000 simulated stochastic correlation processes.) The average standard deviations of the simulated stochastic correlation processes (9th column) are, again, close to the estimated one and pretty stable (10th column), although the Jacobi model shows somewhat less variability. In the case of the constant correlation, such considerations are meaningless, of course. Summarizing, we can say those correlations are realistic and reflect the observed situation well, while giving sufficient room to random changes, an essential source of the risk modeled.
Next, the empirical copula and the empirical K-function are computed from the 3270 Wiener increment pairs in every simulation. We use these simulations to obtain the quantile curves for the centered K-functions, as in Markus and Kumar (2019), and the model-induced lower tail dependence, by using the expression in Eq. 18.
We create the 95% confidence domains model-wise for the centered K-functions, as described in Section 5. Having computed the empirical K-function from the stocks by the described procedure, we center it, correspondingly to each model, by the simulation averages, obtaining three centered empirical K-function curves, one for each considered model. All we have to check now is whether these curves are enclosed within the corresponding models' confidence domains. This can be observed in Fig. 1. (Note that the original image contains more than 12 million K-function points; hence it had to be processed, and the vertical white stripes result from image compression only.) Clearly, the domains corresponding to the Jacobi and HtanOU models conveniently accommodate the curves, while in the constant correlation case, the curve leaves the domain. This means that the fit is acceptable for the Jacobi and HtanOU models, whereas it has to be rejected for the constant correlation.
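The construction of such a confidence domain, with the constant multiplier calibrated so that the prescribed fraction of simulated centered K-functions leaves the band, can be sketched as follows (a hypothetical helper; the bisection on the multiplier is our illustrative choice):

```python
import numpy as np

def confidence_domain(centered_k, level=0.95):
    """centered_k: (m, p) array of m simulated centered K-functions on a
    p-point grid. Returns scaled lower/upper curves such that approximately
    a fraction (1 - level) of the simulated curves leave the band."""
    lo = np.quantile(centered_k, 0.025, axis=0)   # pointwise 2.5% quantile
    hi = np.quantile(centered_k, 0.975, axis=0)   # pointwise 97.5% quantile

    def outside(c):
        # fraction of simulated curves leaving the band scaled by c
        return np.mean(np.any((centered_k < c * lo) | (centered_k > c * hi), axis=1))

    # bisection on the constant multiplier: widening the band (larger c)
    # monotonically decreases the exit fraction
    a, b = 0.5, 5.0
    for _ in range(60):
        c = 0.5 * (a + b)
        if outside(c) > 1 - level:
            a = c
        else:
            b = c
    return c * lo, c * hi
```

The pointwise quantile band alone would be left by far more than 5% of whole curves (a multiple-comparisons effect), which is exactly why the constant multiplier is needed.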
Turning to tail dependence, we have 10,000 simulations from each model, and thus 10,000 copulas of Wiener process increments model-wise. With these, we use Eqs. 18 and 19 to obtain approximations of the strong and weak tail dependence, respectively. The expressions in Eqs. 18 and 19 are still functions of u, and for tail dependence we are interested in them only for the smallest u-s, since theoretically we would need the limit at 0. However, for the lowest u values, the estimation is based on just a few simulated values and is hence unreliable. So, we need a balance between closeness to 0 and the reliability of the estimation. Therefore, we drop the first 5 values to get rid of the bias they cause and average L_n(u) and L^w_n(u) over the 6th to 15th smallest u values, in order to capture the true behavior in the left tail. In order to characterize the models, we present in Table 2 the mean and the standard deviation, along with the 95% confidence bounds, of these values characterizing the weak and strong tail dependence corresponding to realizations of the randomly changing association structure.
As the table shows, the strong and weak mean tail dependence of both the Jacobi and HtanOU models are close to the observed values, and the Jacobi model proves to be the better of the two in these terms. The 95% bounds are wide enough to accommodate the observed values for both models. However, the constant correlation model fails to fit in this sense as well. The mean values for constant correlation reflect the known fact that the multivariate normal distribution has no tail dependence. Even the confidence intervals are too narrow to include the observed values. Therefore, based on both strong and weak tail dependence, the assumption of constant correlation has to be rejected at this level.
It is also worth noticing that the mean tail dependences of the Jacobi and HtanOU models are not included in the confidence interval of the constant correlation model. This indicates that, at least in some sense, these latter models truly create tail dependence.
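The acceptance decision behind these statements amounts to a simple percentile check: summarize the per-simulation tail-dependence values and test whether an observed value falls within the simulation-based confidence bounds. A minimal sketch (hypothetical function name, not the paper's code):

```python
import numpy as np

def acceptance_check(sim_values, observed, level=0.95):
    """Summarize simulated tail-dependence values and check whether the
    observed value lies within the simulation-based confidence bounds."""
    lo, hi = np.quantile(sim_values, [(1 - level) / 2, (1 + level) / 2])
    return {
        "mean": float(np.mean(sim_values)),
        "std": float(np.std(sim_values, ddof=1)),
        "bounds": (float(lo), float(hi)),
        "accepted": bool(lo <= observed <= hi),
    }
```

A model is accepted when the observed value sits inside its bounds; the narrow bounds of the constant correlation model are what cause its rejection here.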

Conclusion
We built models to describe the association of two stock prices, based on stochastic correlation described by Jacobi and transformed Ornstein-Uhlenbeck processes. We developed two selection criteria based on the copula of the price-driving Wiener process increments in the Heston model. One tests whether the centered K-function obtained from the price residuals of the corresponding model remains within the 95% confidence domain determined by simulated quantile curves. The other compares the strong and weak tail dependence of the same empirical copula to the 95% confidence bounds obtained from simulations. We applied these criteria to the daily close prices of Apple and Microsoft stocks. The three models to compare or select from are defined by the type of stochastic correlation: constant correlation, a Jacobi process, and a hyperbolic tangent transformation of an Ornstein-Uhlenbeck (HtanOU) process. All criteria reject the constant correlation and hence the Gaussian copula for the Wiener processes. Both the Jacobi and HtanOU models can be accepted at the 95% level by all criteria. Consequently, while the Gaussian model is inappropriate for describing the interdependence, both the HtanOU and the Jacobi models are suitable alternatives. In terms of K-functions, the HtanOU model accommodates the empirically observed interdependence most easily.
On the other hand, the Jacobi model's mean tail dependence is the closest to the observed one, although the difference from the HtanOU is not large.
It is also important to see that stochastic correlations can create tail dependence even for Wiener processes, where the marginals are normally distributed. This is in sharp contrast to two one-dimensional Wiener processes with constant correlation, which results in a bivariate normal distribution of the increments. As such, it corresponds to the Gaussian copula, which has zero, i.e., no, tail dependence (Shibuya 1960).
We found that the price residuals are tail dependent, and hence only tail-dependent stochastic correlation can work for simultaneous modeling of the stock prices. This indicates that the source of stock price association is not exclusively the similarity of the random environment in which the prices are set, i.e., the volatility. In parallel, the internal price-driving forces, i.e., some fundamentals, must also act in a synchronized way. These effects increase the correlation risk; hence they must be accounted for, e.g., in portfolio selection and other investment decisions.
A question for further research is how much of the tail dependence of the Wiener generators is inherited by the prices themselves. The association of the variance processes, their driving Wiener processes, and their effect on the price associations are also subjects of further study.