Introduction

The coronavirus disease 2019 (COVID-19) pandemic has posed a severe threat to global health and the economy while producing some of the richest data we have ever seen for infectious disease tracking. The quantity and quality of these data have placed epidemic modeling and forecasting at the forefront of worldwide public policy making. Compared with previous infectious diseases, COVID-19 exhibits special transmission characteristics, yielding significant fluctuations and non-stationarity in the counts of new COVID-19 cases. This poses grand challenges for effective prediction and, at the same time, draws the attention of the global community to epidemic tracking and prediction.

In the last three years, various models and methods have been developed to predict COVID-19 cases (see the survey in1 and references therein). These models can be roughly grouped into two categories: mechanistic models and data-driven models. Mechanistic models aim to directly characterize the underlying mechanisms of COVID-19 transmission; typical examples are based on differential equations, such as the compartmental SIR and SEIR models2,3,4,5. Data-driven models formulate the prediction of COVID-19 cases primarily as a regression problem and exploit fully data-adaptive approaches to learn the functional relationship between COVID-19 cases and a set of observable variables. Data-driven models include classical statistical models such as Autoregressive (AR) models6,7,8, Support Vector Machines (SVM)9,10,11, and deep learning models12,13,14,15,16,17,18. In this paper, we focus on data-driven models.

An Autoregressive model expresses the response variable as a linear function of its previous observations19. Its simple structure and strong interpretability have proven powerful in capturing short-term trends in time series. AR models have been applied in various fields, including infectious disease modeling20,21. However, they may fail to capture the highly nonlinear patterns and long-term effects in the data-generating dynamics. At the other end of the model complexity spectrum, deep learning models, particularly the LSTM22, have demonstrated impressive power in capturing complex dependence structures in sequential data, and LSTMs have achieved the best-known results on many sequential-data problems. However, a well-known limitation of deep learning models is their lack of interpretability due to their black-box nature. This lack of interpretability prevents people from drawing useful conclusions from model outputs and thus hinders effective policy making23, especially in crucial fields such as public health. This observation motivates us to consider a hybrid model in which the two seemingly distinct types of models join forces, maintaining both good predictive power and a degree of interpretability.

In this paper, we propose such a hybrid model, which additively combines the LSTM and the AR model for the task of COVID-19 case prediction. The proposed hybrid model is formalized as a neural network whose architecture connects an AR model and an LSTM block, and the relative contribution of the two component models is determined in a fully data-adaptive way during training. To demonstrate the predictive power of the proposed hybrid model, we consider both county-level and country-level data. Specifically, on 8 counties in the state of California, USA, and on 7 different countries (results available in the Supplementary Material), our method performs favorably compared with either the AR or the LSTM model alone, as well as with other commonly used predictive models, under various evaluation metrics. All code is accessible through links on the reference page24.

In addition to predictive accuracy, the importance of predictive models’ interpretability has been discussed in plenty of previous works23,24,25,26,27. Higher model interpretability facilitates human understanding of a model’s predictions and thus promotes bias detection and other factors that contribute to policy making. Specifically, we demonstrate how the coefficients from the AR part of the trained hybrid model shed light on the underlying disease transmission mechanism, and thus could help predict prevalence trends and inform public health policy makers in improving pandemic planning, resource allocation, and the implementation of social distancing measures and other interventions. A long-term mission of this paper is to stretch the application of hybrid models beyond COVID-19 forecasting, toward other fast-moving epidemics and settings that require both accurate prediction and interpretability.

Although in this paper we focus on confirmed case prediction, we note that the proposed framework can easily be extended to tackle other COVID-19 or more general epidemiological tasks (e.g., hot spot prediction). Furthermore, the proposed method has its own research significance from a methodological perspective; for example, it raises open questions about its theoretical guarantees, the mathematical quantification of prediction, and interpretability.

Related work

Recently, numerous studies have employed machine learning techniques to investigate various tasks related to COVID-19 and have achieved impressive results. Examples include using deep learning to detect COVID-19 from chest X-ray (CXR) images and predicting death status based on food categories to recommend healthy foods during the pandemic28,29,30. In light of these advances, our research focuses on predicting confirmed cases of COVID-19.

In this section, we provide a more detailed review of data-driven models that formulate the prediction problem as a regression problem. Regression-based models, ranging from simple AR models to more complex models such as Random Forest, Gradient Boosting, and CNN-LSTM, have been widely used for COVID-19 prediction. For example, Mumtaz et al.31 used ARIMA to predict daily confirmed cases in European countries, while Yesilkanat32 used a Random Forest model to predict the numbers of cases and deaths. Muhammad et al.33 used a CNN-LSTM model to predict the numbers of confirmed cases and deaths in Nigeria, South Africa, and Botswana. We summarize a list of recent work from 2020 to 2022 in Table 1.

One advantage of these models is that they do not require a priori knowledge of the disease dynamics and can capture rich relationships in the data. They have been shown to be effective in predicting COVID-19 cases in various regions around the world. However, COVID-19 data display rich variability, and a single predictive model may therefore not be sufficient and has its own limitations. For example, one major disadvantage of ARIMA models is that they may not be able to capture non-linear patterns in the data, which can lead to inaccurate predictions. On the other hand, more complex models such as Random Forest and CNN-LSTM may suffer from overfitting, where the model becomes too specialized to the training data and cannot generalize well to new data. These complex models may also lack interpretability, making it difficult to understand the factors driving the predictions and thus providing little to no guidance for actual public health policy making.

Hybrid predictive models that combine different regression models may offer the best of both worlds by capturing both linear and non-linear patterns in the data while maintaining some degree of interpretability. The idea is to decompose a model into components designed to capture specific characteristics of the data. This has proven to be an effective way of improving empirical predictions in various applications, including COVID-19 prediction34,35,36,37,38.

Comparison to previous works on hybrid modeling

The idea of additively combining AR models (or, more generally, ARIMA models) with LSTMs has recently appeared in the time series forecasting literature, with applications in gas and oil well production and sunspot monitoring39,40. However, there is a significant difference between our approach and previous methods: our approach trains the two components of the model jointly, while previous hybrid modeling techniques take a sequential approach to training. Specifically, Zhang41 proposed a hybrid model of ARIMA and artificial neural networks, aiming to capture more patterns in the data and improve forecasting performance: the preprocessed data are first used to fit an ARIMA model, and the residuals are then used as input to train a neural network model. Fan et al.39 followed a similar procedure, using an ARIMA model and an LSTM model. The logic of these methods is to let the ARIMA model capture the linear pattern in the data first and then rely on the neural network to capture the non-linear patterns in the residuals. The main goal of these previous works is to explore whether a hybrid model produces better performance than the single models.

In our study, we design a general network architecture that additively includes both an AR part and an LSTM part, and we train the entire architecture jointly by minimizing the empirical risk. By doing so, we do not arbitrarily give preference to either of the two additive components; instead, the relative weights of the interpretable AR part and the predictive LSTM part are determined fully by the data.

Table 1 A non-exhaustive list of previous works on data-driven models for COVID-19 cases prediction in the past three years.

In summary, our contributions are as follows:

  • Development of a novel approach to hybrid modeling for COVID-19 case prediction: we have designed a general network architecture that combines AR and LSTM models additively and trains the entire architecture jointly, allowing the relative weights of the interpretable AR part and the predictive LSTM part to be fully determined by the data. This approach is a departure from the traditional sequential training approach and has the potential to contribute to the literature on sequential data prediction.

  • Extensive numerical studies on datasets from two sources that display a wide range of variability: we have shown that the proposed hybrid model delivers better forecasting performance than the single models. This finding is important, as it shows that the hybrid model is an effective way to combine the strengths of different modeling techniques and can serve as a framework for future research.

  • Exploration of interpretability: we have also explored the interpretability of the hybrid model, which is an important contribution as it allows for a better understanding of the model and can lead to improved decision-making based on the model’s output. This contribution enhances the practical applicability of our proposed hybrid model.

Methods

In this section, we first review the two building blocks of our additive hybrid model, namely the AR model and the LSTM, and their relative advantages. Then we present our hybrid model, which combines these two building blocks additively, and explain intuitively why it is better than either of its two individual components.

Autoregressive (AR) models

In time series, we often observe associations between past and present values. For example, by knowing the price of a stock over the past few days, we can often make a rough prediction about its value tomorrow. The AR model is a simple model that utilizes this empirical observation and can yield very accurate predictions in certain applications. It represents the time series values as a linear combination of past values. The number of past values used is called the lag number and is often denoted by p. Let \(\epsilon _t\) denote the Gaussian noise at time t with mean 0 and variance \(\sigma ^2\). The structural equation of the AR(p) model can be written as

$$\begin{aligned} Y_t =a_0+a_1 Y_{t-1}+a_2Y_{t-2}+\cdots +a_p Y_{t-p}+\epsilon _t \end{aligned}$$
(1)

where \(a_0\) is the intercept and \(a_1,\cdots ,a_p\) are the coefficients. The AR model is often effective on stationary data. To promote stationarity, a common technique is to apply a differencing operation to the time series. The once-differenced value at time t, \(Y^{(1)}_t\), is defined as follows:

$$\begin{aligned} Y^{(1)}_t= Y_t-Y_{t-1}, \end{aligned}$$
(2)

and higher-order differencing operations can be defined recursively. However, an AR model cannot capture non-linear dependence structure, which Fig. 1 suggests is an important feature of the COVID-19 data. A purely AR-based model is thus often insufficient for the task of COVID-19 case prediction.
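
As a concrete illustration, the following is a minimal sketch (not our released code24) of fitting an AR(p) model to a once-differenced daily-case series with statsmodels; the lag p = 7, the synthetic series, and the use of first-order differencing are assumptions made for illustration only.

```python
# Minimal sketch: fit AR(p) on a once-differenced daily-case series (Eqs. 1 and 2).
# The lag p = 7 and the synthetic series below are illustrative assumptions.
import numpy as np
import pandas as pd
from statsmodels.tsa.ar_model import AutoReg

# Hypothetical series of daily confirmed cases indexed by date.
daily_cases = pd.Series(np.random.poisson(200, size=200).astype(float),
                        index=pd.date_range("2021-01-01", periods=200, freq="D"))

diffed = daily_cases.diff().dropna()        # Y^(1)_t = Y_t - Y_{t-1}  (Eq. 2)

ar_fit = AutoReg(diffed, lags=7).fit()      # estimates a_0, a_1, ..., a_7 of Eq. (1)
print(ar_fit.params)

# One-step-ahead forecast on the differenced scale, then undo the differencing.
next_diff = float(np.asarray(ar_fit.forecast(steps=1))[0])
next_cases = daily_cases.iloc[-1] + next_diff
```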

Figure 1

An example of daily observations, where the blue line represents the data before smoothing and the orange line represents the data after smoothing. The data are collected from Los Angeles County.

Long short term memory networks (LSTM)

The Recurrent Neural Network (RNN)52 is known to suffer from the long-term dependency problem: as the network is unrolled further through time, the gradients decay quickly during backpropagation, making it difficult to train RNN models over long time horizons. To address this problem, Hochreiter and Schmidhuber (1997) introduced a special type of RNN, the LSTM, together with a suitable gradient-based learning algorithm22.

We employ an LSTM regression model, which is represented as

$$\begin{aligned} Y_t = G_{\theta }(Y_{t-1},...,Y_{t-p}), \end{aligned}$$
(3)

where we use \(Y_{t-1},...,Y_{t-p}\) as the sequential input data, G represents the neural network architecture shown in Supplementary Fig. 1, and \(\theta\) represents the weight parameters of the neural network.

The core concepts of an LSTM cell are the cell state and the associated gates, as illustrated in Supplementary Fig. 2. The cell state \(C_{t-1}\) at time step \(t-1\) acts as a transport highway that carries relevant information all the way down the sequence chain, which intuitively characterizes the “memory” of the network. The cell states, in principle, carry relevant information throughout the processing of the sequence, so even information from earlier time steps can make its way to later time steps, reducing the effects of short-term memory. The Forget Gate decides what information should be kept. The Input Gate decides what information from the current step is to be added and updates the cell state \(C_t\) at time step t. The Output Gate determines what the next hidden state \(h_t\) should be. Each gate is implemented as a fully connected feed-forward layer.
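
For concreteness, the now-standard LSTM cell updates (the textbook formulation with forget, input, and output gates, stated here for reference rather than as a transcription of our implementation) are

$$\begin{aligned} f_t&= \sigma \left( W_f x_t + U_f h_{t-1} + b_f\right) , \quad i_t = \sigma \left( W_i x_t + U_i h_{t-1} + b_i\right) , \quad o_t = \sigma \left( W_o x_t + U_o h_{t-1} + b_o\right) ,\\ \tilde{C}_t&= \tanh \left( W_C x_t + U_C h_{t-1} + b_C\right) , \quad C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t, \quad h_t = o_t \odot \tanh \left( C_t\right) , \end{aligned}$$

where \(x_t\) is the input at step t, \(\sigma\) is the logistic sigmoid, and \(\odot\) denotes elementwise multiplication.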

To achieve good prediction results with an LSTM model, careful hyperparameter tuning is crucial, including the number of units (the dimension of the hidden state), the number of cells (i.e., the number of time steps), and the number of layers. This is usually difficult in practice: too few LSTM cells are unlikely to capture the structure of the sequence, while too many may lead to overfitting. Moreover, just like other neural networks, a well-known limitation of the LSTM is its lack of interpretability23.
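
A minimal Keras sketch of such an LSTM regressor \(G_{\theta}\) (Eq. 3) is given below; the lag p = 7, the 32 hidden units, and the training settings are illustrative assumptions rather than the configuration of Supplementary Fig. 1.

```python
# Minimal Keras sketch of the LSTM regressor G_theta in Eq. (3).
# Lag p = 7, 32 units, and training settings are illustrative assumptions.
import numpy as np
import tensorflow as tf

p = 7  # lag / number of time steps fed to the LSTM

def make_supervised(series, p):
    """Turn a 1-D series into (X, y) pairs of p lagged inputs and the next value."""
    X, y = [], []
    for t in range(p, len(series)):
        X.append(series[t - p:t])
        y.append(series[t])
    return np.array(X)[..., None], np.array(y)   # X shape: (n, p, 1)

series = np.random.rand(300).astype("float32")   # placeholder for normalized daily cases
X, y = make_supervised(series, p)

lstm_model = tf.keras.Sequential([
    tf.keras.Input(shape=(p, 1)),
    tf.keras.layers.LSTM(32),                    # hidden state dimension (units)
    tf.keras.layers.Dense(1),                    # scalar prediction Y_t
])
lstm_model.compile(optimizer="adam", loss="mse")
lstm_model.fit(X, y, epochs=10, batch_size=16, verbose=0)
```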

The hybrid model

As discussed above, both the AR model and the LSTM have their relative strengths and limitations in their respective domains. We propose to combine the two models additively into one single hybrid model, which is expressed as

$$\begin{aligned} Y_t = \alpha \textrm{AR}(p) + (1-\alpha ) G_{\theta }(Y_{t-1},...,Y_{t-p}), \end{aligned}$$
(4)

where p is the lag number and \(\alpha\) weights the contributions of the two components: by adjusting the value of \(\alpha\), one can strike a balance between the predictions given by the AR and LSTM parts, and thus between the linear and nonlinear signals.

We illustrate the structure of the hybrid model in Fig. 2. The hybrid model is a single neural network architecture in which the two component models are added in the last layer. The AR component captures the linear relationships in the time series while the LSTM component describes the nonlinear patterns. In section “Training” of the Supplementary Material, we show how the weights in the two components are trained jointly in a fully data-adaptive manner by minimizing the empirical risk. We compare the contributions of the hybrid model’s AR component and LSTM component in section Results.
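
A minimal functional-API sketch of Eq. (4) is shown below: the AR part is a single linear (Dense) layer on the p lagged values, the LSTM part is the regressor from the previous sketch, and a trainable weight \(\alpha\) (parameterized through a sigmoid so it stays in (0, 1)) mixes the two outputs, with the whole network trained jointly by minimizing the squared-error empirical risk. The layer sizes and the sigmoid parameterization of \(\alpha\) are illustrative choices, not a transcription of Fig. 2.

```python
# Minimal sketch of the hybrid model in Eq. (4): Y_t = alpha * AR(p) + (1 - alpha) * G_theta.
# Layer sizes and the sigmoid parameterization of alpha are illustrative assumptions.
import tensorflow as tf

p = 7  # lag number shared by the AR and LSTM components

class ConvexMix(tf.keras.layers.Layer):
    """Return alpha * ar_out + (1 - alpha) * lstm_out with a trainable alpha in (0, 1)."""
    def build(self, input_shape):
        self.logit = self.add_weight(name="alpha_logit", shape=(),
                                     initializer="zeros", trainable=True)
    def call(self, inputs):
        ar_out, lstm_out = inputs
        alpha = tf.sigmoid(self.logit)
        return alpha * ar_out + (1.0 - alpha) * lstm_out

lags = tf.keras.Input(shape=(p, 1))                 # Y_{t-1}, ..., Y_{t-p}

# AR component: intercept plus a linear combination of the p lags (Eq. 1).
ar_out = tf.keras.layers.Dense(1, name="ar_part")(tf.keras.layers.Flatten()(lags))

# LSTM component: the nonlinear regressor G_theta (Eq. 3).
lstm_out = tf.keras.layers.Dense(1)(tf.keras.layers.LSTM(32)(lags))

hybrid = tf.keras.Model(inputs=lags, outputs=ConvexMix(name="mix")([ar_out, lstm_out]))
hybrid.compile(optimizer="adam", loss="mse")        # joint training of both components
# hybrid.fit(X, y, epochs=50, batch_size=16)        # (X, y) built as in the LSTM sketch
```

After joint training, a fitted \(\alpha\) close to 1 means the interpretable AR part dominates the prediction, while a value close to 0 means the LSTM part dominates; this is the quantity examined in section Interpretability.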

Figure 2

Visualization of the hybrid architecture.

Results

The results include four sections: Model evaluations, Prediction, Interpretability, and Comparative study on the WHO datasets. In Model evaluations, we introduce the metrics used to evaluate the models and compare their performance. In Prediction, we visualize several representative trials and compare the numerical predictions and evaluations of the three models. In Interpretability, we compare the AR component of the hybrid model with the pure AR model to examine how the hybrid model may be interpreted. Other training details are left to the Supplementary Material. In Comparative study on the WHO datasets, we further examine the performance of the proposed hybrid model by applying it to data from 7 different countries around the world and comparing it with its component models and 3 additional models.

Data description and statistical analysis

We utilize two primary data sources. The first is a dataset specific to California counties, available in the CHHS Open Data repository under the title COVID-19 Time-Series Metrics by County and State. This dataset includes information on populations, positive and total tests, numbers of deaths, and positive cases. We conducted a preliminary statistical analysis to examine correlations between these variables and the number of daily cases. The results of this analysis can be found in Supplementary Fig. 3 of the Supplementary Material, and we anticipate that they will provide valuable insights for future research.
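
As a hedged sketch, a county-level daily-case series such as the Los Angeles series shown in Fig. 1 might be prepared from this dataset as follows; the file name, column names, and the 7-day centered rolling mean used for smoothing are assumptions for illustration, not a description of the official export.

```python
# Hedged sketch: load a CHHS county-level export, select one county, and smooth the daily
# cases. File name, column names, and the 7-day window are illustrative assumptions.
import pandas as pd

df = pd.read_csv("covid19_time_series_by_county.csv", parse_dates=["date"])  # hypothetical file
la = df[df["area"] == "Los Angeles"].set_index("date").sort_index()

raw_cases = la["cases"]                                     # "before smoothing" (blue line, Fig. 1)
smoothed = raw_cases.rolling(window=7, center=True).mean()  # "after smoothing" (orange line, Fig. 1)
```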

The second data source, used for the comparative analysis, can be found in the WHO repository at the WHO Coronavirus (COVID-19) Dashboard. This resource presents official daily counts of COVID-19 cases, deaths, and vaccine utilization, as reported by countries, territories, and areas. In this study, we use data from 7 countries: Japan, Canada, Brazil, Argentina, Singapore, Italy, and the United Kingdom.

All datasets generated and analysed during the current study are also available in the author’s Github repository24.

Model evaluations

We use a quantitative measure to evaluate and compare the performance of models: the Mean Absolute Percentage Error (MAPE), defined as:

$$\begin{aligned} \textrm{MAPE}&= \frac{100}{n}\sum _{t=1}^{n}\frac{ |\widehat{Y}_{t} - Y_{\textrm{true}, t}|}{|Y_{\textrm{true}, t}|} \end{aligned}$$
(5)

A model with small values of MAPE is preferred.
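
Equation (5) translates directly into code; a minimal implementation is:

```python
# Mean Absolute Percentage Error as defined in Eq. (5).
import numpy as np

def mape(y_pred, y_true):
    y_pred = np.asarray(y_pred, dtype=float)
    y_true = np.asarray(y_true, dtype=float)
    return 100.0 / len(y_true) * np.sum(np.abs(y_pred - y_true) / np.abs(y_true))
```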

We examine the performance of the three models (hybrid, AR, and LSTM) over different time periods within the available range. This is essential in our study, since a model’s performance is not constant across different trends; intuitively, a model performs better on smooth curves than on steep ones. By repeating the evaluation process over different time periods, and thus different trends, we seek to understand on which trends each model performs best. Such understanding helps us decide to what degree we may trust the models’ performance. We also evaluate the models repeatedly to reduce the influence of instability in model training. Specifically, we leave 7 days between the start dates of any two consecutive training windows. Although a larger number of repetitions seems desirable, increasing it comes at the cost of making neighboring training windows closer to each other; the difference in performance between two training windows that are too close would then be attributed more to training instability than to differences in trend, and such results would tell us little about model performance across trends. In the end, we set the step size equal to our lag number, reflecting the assumption that the weekly cycle is important in forecasting.
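
The evaluation protocol above can be sketched as follows; the training- and test-window lengths are illustrative assumptions, while the 7-day step between consecutive training windows matches the description in the text.

```python
# Sketch of the repeated-evaluation protocol: consecutive training windows start 7 days
# apart (step equal to the lag number). Window lengths are illustrative assumptions.
def rolling_windows(series, train_len=90, test_len=15, step=7):
    """Yield (train, test) slices of `series` whose start dates are `step` days apart."""
    for start in range(0, len(series) - train_len - test_len + 1, step):
        train = series[start:start + train_len]
        test = series[start + train_len:start + train_len + test_len]
        yield train, test

# for train, test in rolling_windows(daily_cases.to_numpy()):
#     fit the AR, LSTM, and hybrid models on `train`, forecast `test`, and record MAPE
```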

Additional evaluation metrics

In the Supplementary Material, we additionally evaluate and compare the above models using the Root Mean Square Error (RMSE) and Mean Absolute Error (MAE). The evaluation is done on the same datasets across the different competing methods.

Figure 3

The left panels show the training and testing data. The right panels show the ground truth versus the forecasts of the AR, LSTM, and hybrid models, respectively. We display the average prediction (solid line) with 2 times the standard error (shaded region). The standard errors across 100 runs are reported for the LSTM and hybrid models. The hybrid model is more stable than the LSTM.

Prediction

In this section, we present the numerical results for all three models. We perform a comprehensive comparison of the three models’ performance in multiple counties, showing the advantage of the hybrid model. All predictions are transformed back to the original scale.

Visualization

We compare the three models’ performance on COVID-19 case prediction in 8 California counties. For each county, we test the models in several different situations: for example, when the training data shows an upward trend and the testing data shows a downward trend. From all the trials we conducted, we select those presented in Figs. 3 and 4 as representatives of different combinations of training and testing data, since they reflect the general model performance well.

Figure 3a shows models trained on curved data and tested on downward-trend data, as shown in the left and right panels, respectively. Figure 3b shows models trained on upward-trend data and tested on downward-trend data. Figure 3c shows models trained and tested on upward-trend data. Figure 3d shows models trained and tested on downward-trend data. Figure 4a,b show models trained on downward-trend data and tested on upward-trend data, where Fig. 4a has gently upward testing data and Fig. 4b has sharply upward testing data. Figure 4c shows models trained and tested on jagged data.

To ensure the results above are representative, we run each selected trial 100 times, visualize the mean and standard error across these runs, and report the averaged MAPE. While AR outperforms LSTM in some cases, the hybrid model outperforms both in most cases, except in Fig. 3b and Fig. 4c. The MAPE, averaged over the 100 runs, shows that the LSTM (4.469%) slightly outperforms the hybrid model (4.993%) in Fig. 3b. However, as shown in the right panel of Fig. 3b, the hybrid model captures the general trend of the ground truth better than the LSTM does. Similarly, in Fig. 4c, AR (3.675%) slightly outperforms the hybrid model (3.718%). Yet, as shown in the right panel of Fig. 4c, the hybrid model captures the general trend of the ground truth better than AR does.

Besides, interestingly, the hybrid model consistently captures the trend of the ground truth. The shape of the hybrid model’s forecasts resembles that of the AR model, that of the LSTM model, or a combination of both. When the AR model captures the trend better than the LSTM does, the hybrid model resembles the AR model in forecast shape: for example, in Fig. 3b, San Francisco 2020-02-17 to 2020-05-14, and in Fig. 4a, Santa Barbara 2022-01-17 to 2022-04-14. When the LSTM model captures the trend better than the AR does, the hybrid model resembles the LSTM model in forecast shape: for example, in Fig. 3d, San Francisco 2022-06-10 to 2022-09-05, and in Fig. 4b, Riverside 2022-02-16 to 2022-12-20. On jagged testing data, where AR performs better on some parts and LSTM on others, the hybrid model displays the advantages of both models: for example, in Fig. 4c, the hybrid model resembles AR at the two ends, where AR performs better, and resembles LSTM in shape between day 5 and day 15, where LSTM seems to capture the trend better.

General performance

We evaluated the models’ performance numerically in the 8 California counties across multiple trials. The results are given in Table 2. We observe that the hybrid model outperforms the AR and LSTM models almost uniformly: it generally yields the smallest average MAPE. Specifically, the overall MAPE of each model (AR, LSTM, LSTM with 2 layers, and hybrid), averaged over all 8 counties, is 5.629%, 4.934%, 6.804%, and 4.173%, respectively. The hybrid model thus has the best overall performance, outperforming the AR model by approximately 1.5 percentage points. The LSTM model suffers from overfitting when a second LSTM layer is added. As seen in the Supplementary Material, the proposed hybrid model also yields the lowest RMSE and MAE values.

Table 2 MAPE (by percentage) for each model on each county.
Figure 4

The left panels show the training and testing data. The right panels show the ground truth versus the forecasts of the AR, LSTM, and hybrid models, respectively. We display the average prediction (solid line) with 2 times the standard error (shaded region). The standard errors across 100 runs are reported for the LSTM and hybrid models. The hybrid model is more stable than the LSTM.

Interpretability

Interpretability of hybrid models can be defined as the ability to provide insight into the relationships they have learned, as introduced by Murdoch et al.23. The proposed hybrid model takes a decomposition approach to deciphering the learned data-generating mechanism: the estimated AR component provides an easy-to-understand linear trend, while the LSTM component captures the long-term and nonlinear trends in the time series data. Our hybrid model aims to strike a balance between interpretability and accuracy, enabling us to gain insights into the underlying data while still achieving high predictive performance.

In this section, we study how the AR and LSTM components contribute to the hybrid model when fitting the data. Our purpose is to gain insight into why the hybrid model generally enjoys better performance. More importantly, we seek to use the interpretation from the fitted hybrid model to provide practical guidance for the public health policy making process.

Note that all models are trained on the normalized data as described in section “Training” (Supplementary Material). Consequently all figures below report predictions on the normalized scales.

In Fig. 5, we present three settings with different signal strength ratios (represented by the value of \(\alpha\)) between the AR and LSTM components in the hybrid model’s prediction. Specifically, a larger value of \(\alpha\) indicates that the AR component dominates the LSTM component in the prediction, and a smaller value indicates the opposite. We find that the component with the stronger signal characterizes the general trend in the data while the other helps to stabilize the variance. This observation sheds light on why the hybrid model generally provides better predictive performance than a single model.

Moreover, the fitted value of \(\alpha\) characterizes the intrinsic nonlinearity of the data, and consequently how difficult it is to extract interpretation from the linear component of the fitted hybrid model. The smaller the value of \(\alpha\), the higher the weight of the nonlinear LSTM fit in the final prediction; in such a setting, the coefficients of the AR component should be given less weight when generating interpretation for policy making. Conversely, for larger values of \(\alpha\), it is more trustworthy to derive coefficient interpretations from the dominant AR part. This observation can help public health policy makers distinguish among different virus transmission stages.
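
Continuing the earlier Keras sketch of the hybrid model (and only under that illustrative architecture; our released code24 may organize the network differently), the fitted \(\alpha\) and the AR coefficients can be read off the trained network as follows.

```python
# Read the fitted mixing weight alpha and the AR coefficients from the trained hybrid
# model of the earlier sketch. Layer names ("mix", "ar_part") come from that sketch only.
import tensorflow as tf

alpha = float(tf.sigmoid(hybrid.get_layer("mix").logit))
ar_weights, ar_intercept = hybrid.get_layer("ar_part").get_weights()

print(f"alpha (weight of the AR component): {alpha:.3f}")
print("AR coefficients (ordered as the lag window was constructed):", ar_weights.ravel())
print("AR intercept:", float(ar_intercept[0]))
```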

Table 3 Coefficients of the AR model vs. AR coefficients of the hybrid model.

Finally, we observe interesting patterns in the coefficient estimates of the hybrid model’s AR component compared with those of the pure AR model. As shown in Table 3, across the three settings with different values of \(\alpha\), the pure AR model tends to put heavier weight on the coefficients of larger lags, such as \(Y_{t-7}\). In contrast, the AR component in the hybrid model tends to focus on the short history, i.e., the coefficients associated with smaller lags (e.g., \(Y_{t-1}\)) tend to have larger estimates. This indicates that the short-history pattern in the data can be well approximated by a simple (say, linear) model, while the longer history possesses a more complicated nonlinear structure that requires an LSTM component to fit.

Figure 5

The forecasts of a hybrid model versus the ground truth, and the contribution of its AR and its LSTM component.

Comparative study on the WHO datasets

In this section, we compare our proposed hybrid model for COVID-19 prediction with its two component models, the AR and LSTM models, as well as three other commonly used models: Support Vector Machines53 (SVM), Random Forest54 (RF), and eXtreme Gradient Boosting55 (XGBoost). To assess the effectiveness of our model in different application settings, we use country-level data for this comparative study, focusing on datasets from seven different countries collected by the World Health Organization.

We provide a brief overview of the three additional competing methods. The Support Vector Machine (SVM)42,47 is a machine learning model that identifies the hyperplane in a high-dimensional space that maximally separates data points into different classes; it applies to both classification and regression problems. SVM is known to perform poorly on noisy or unbalanced data56,57.

Random Forest43,44,45 is an ensemble learning method that constructs a multitude of decision trees. A Random Forest is very flexible and can handle complex data types. On the other hand, Random Forests are known for their reduced interpretability, sensitivity to noise, need for hyperparameter tuning, and potential issues with imbalanced data. These factors may affect their performance in the context of COVID-19 prediction58,59,60.

Extreme Gradient Boosting (XGBoost)44,46,48 has shown exceptional performance in various tasks. XGBoost is an ensemble learning method based on gradient boosting trees. It is known for its efficiency, scalability, and accuracy. However, like other tree-based ensemble methods, it can be more challenging to interpret. This may make it difficult to understand the driving factors behind predictions. In addition, XGBoost can be prone to overfitting, especially with small datasets or when the hyperparameters are not tuned properly61,62.
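
For context, a hedged sketch of how these three baselines can be fit on the same lagged-value regression setup is shown below; the feature construction, train/test split, and hyperparameters (library defaults) are illustrative assumptions rather than the tuned settings used in the comparative study.

```python
# Hedged sketch: fit SVM, Random Forest, and XGBoost regressors on p lagged values.
# Feature construction, split, and (default) hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from xgboost import XGBRegressor

def lagged_matrix(series, p=7):
    """Build a design matrix of p lags and the corresponding next-day targets."""
    X = np.array([series[t - p:t] for t in range(p, len(series))])
    return X, np.asarray(series[p:])

series = np.random.rand(300)                       # placeholder for a country's daily cases
X, y = lagged_matrix(series)
X_train, X_test, y_train, y_test = X[:-15], X[-15:], y[:-15], y[-15:]

for model in (SVR(), RandomForestRegressor(n_estimators=200), XGBRegressor()):
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    err = 100.0 * np.mean(np.abs(pred - y_test) / np.abs(y_test))   # MAPE, Eq. (5)
    print(f"{type(model).__name__}: MAPE = {err:.2f}%")
```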

We present the numerical results of the comparative study, which are visualized in Fig. 6. The comparative study is done on data collected by the World Health Organization63 in Japan (JPN), Canada (CAN), Brazil (BRA), Argentina (ARG), Singapore (SGP), Italy (ITA), and the United Kingdom (GBR).

Overall, the proposed hybrid model performs better than the other models in most cases, as evidenced by its lower MAPE. This suggests that our model is effective in various situations and outperforms other commonly used models for COVID-19 prediction.

Figure 6

A heatmap exhibiting the performance, measured by MAPE in percentage, of the 7 models from this study and from previous work: AR, Single LSTM (LSTM), Double LSTM (DLSTM), hybrid, SVM, Random Forest (RF), and XGBoost (XGB). The assessment has been done on data collected by the World Health Organization from 7 different countries around the world: Japan (JPN), Canada (CAN), Brazil (BRA), Argentina (ARG), Singapore (SGP), Italy (ITA), and the United Kingdom (GBR).

Discussion

In this paper, we introduce a novel hybrid model that borrows strength from a highly structured Autoregressive model and an LSTM model for the task of COVID-19 case prediction. Through intensive numerical experiments, we conclude that the hybrid model yields more desirable predictive performance than either the AR or the LSTM model alone. In principle, the hybrid model enjoys the advantages of each of its two building blocks: the expressive power of the LSTM in representing nonlinear patterns in the data and the interpretability arising from the simple structure of the AR model. Consequently, the proposed hybrid model is useful for simultaneously providing accurate predictions and shedding light on the transitions between virus transmission phases, thus providing guidance to the public health policy making process.

It is also noteworthy that the predictive performance of the proposed hybrid model can be further improved by properly choosing the hyperparameters. Furthermore, while we considered the LSTM as the nonlinear component of the hybrid model, it can be substituted with any other deep learning model.