Adaptive Skip-Train Structured Regression for Temporal Networks
A broad range of high-impact applications involve learning a predictive model in a temporal network environment. In weather forecasting, predicting the effectiveness of treatments, forecasting healthcare outcomes, and many other domains, networks are often large while intervals between consecutive time steps are brief. Models are therefore required to forecast more scalably and efficiently, without compromising accuracy. The Gaussian Conditional Random Field (GCRF) is a widely used graphical model for structured regression on networks. However, GCRF does not scale to large networks, and because it learns over the entire network at once, it cannot capture distinct network substructures (communities). In this study, we present a novel model, Adaptive Skip-Train Structured Ensemble (AST-SE), a sampling-based structured regression ensemble for prediction on temporal networks. AST-SE exploits the ensemble scheme to train multiple GCRFs on sampled subnetworks, and it can automatically skip the entire training process, or individual phases of it, at a given timestep. The prediction accuracy and efficiency of AST-SE were assessed and compared against alternatives on synthetic temporal networks and the H3N2 Influenza virus network. The obtained results provide evidence that (1) AST-SE is ~140 times faster than GCRF, as it skips retraining quite frequently; (2) it captures the original network structure more accurately than GCRF while operating solely on partial views of the network; (3) it outperforms both unweighted and weighted GCRF ensembles, which also operate on subnetworks but require retraining at each timestep. Code and data related to this chapter are available at: https://doi.org/10.6084/m9.figshare.5444500.
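To make the base model concrete: a standard GCRF combines unstructured predictions R(x) with a similarity network S, and its posterior mean is the solution of a linear system in the Gaussian precision matrix. The sketch below is a minimal illustration of that inference step, not the AST-SE algorithm itself; the function name `gcrf_predict` and the scalar parameters `alpha` and `beta` are illustrative assumptions.

```python
import numpy as np

def gcrf_predict(R, S, alpha=1.0, beta=1.0):
    """Minimal GCRF inference sketch (illustrative, not the chapter's code).

    R     : (n,) outputs of an unstructured predictor, one per node
    S     : (n, n) symmetric nonnegative similarity matrix of the network
    alpha : weight of the unstructured (node-level) term
    beta  : weight of the structured (edge-level) smoothness term
    """
    n = len(R)
    # Graph Laplacian of the similarity network encodes the pairwise term.
    L = np.diag(S.sum(axis=1)) - S
    # Precision matrix Q and linear term b of the resulting Gaussian.
    Q = 2.0 * (alpha * np.eye(n) + beta * L)
    b = 2.0 * alpha * np.asarray(R, dtype=float)
    # Posterior mean mu = Q^{-1} b; solving the system is the costly step
    # that motivates operating on sampled subnetworks instead.
    return np.linalg.solve(Q, b)
```

With `beta = 0` (or an empty network) the prediction reduces to the unstructured outputs R; with edges present, predictions of similar nodes are smoothed toward each other. Solving this n-by-n system at every timestep is exactly the cost that skip-training and subnetwork sampling aim to avoid.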
This research was supported in part by DARPA grant No. FA9550-12-1-0406 negotiated by AFOSR, the National Science Foundation grants NSF-SES-1447670, NSF-IIS-1636772, Temple University Data Science Targeted Funding Program, NSF grant CNS-1625061, Pennsylvania Department of Health CURE grant and ONR/ONR Global (grant No. N62909-16-1-2222).