Knowledge and Information Systems, Volume 30, Issue 1, pp 31–55

Correcting evaluation bias of relational classifiers with network cross validation

  • Jennifer Neville
  • Brian Gallagher
  • Tina Eliassi-Rad
  • Tao Wang
Open Access · Regular Paper

DOI: 10.1007/s10115-010-0373-1

Cite this article as:
Neville, J., Gallagher, B., Eliassi-Rad, T. et al. Knowl Inf Syst (2012) 30: 31. doi:10.1007/s10115-010-0373-1


Abstract

Recently, a number of modeling techniques have been developed for data mining and machine learning in relational and network domains where the instances are not independent and identically distributed (i.i.d.). These methods specifically exploit the statistical dependencies among instances in order to improve classification accuracy. However, there has been little focus on how these same dependencies affect our ability to draw accurate conclusions about the performance of the models. More specifically, the complex link structure and attribute dependencies in relational data violate the assumptions of many conventional statistical tests and make it difficult to use these tests to assess the models in an unbiased manner. In this work, we examine the task of within-network classification and the question of whether two algorithms will learn models that result in significantly different levels of performance. We show that the commonly used form of evaluation (paired t-test on overlapping network samples) can result in an unacceptable level of Type I error. Furthermore, we show that Type I error increases as (1) the correlation among instances increases and (2) the size of the evaluation set increases (i.e., the proportion of labeled nodes in the network decreases). We propose a method for network cross-validation that, combined with paired t-tests, produces more acceptable levels of Type I error while still providing reasonable levels of statistical power (i.e., 1 − Type II error).
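The Type I error inflation described in the abstract can be illustrated with a small simulation (this is a simplified sketch, not the paper's experimental setup; the function names `type1_rate` and `paired_t_rejects` are illustrative). Two classifiers with identical true accuracy are compared via a paired t-test over k "folds". When the folds are overlapping random samples from the same node pool, the per-fold accuracy differences are correlated and the test rejects the true null far more often than the nominal 5%; with disjoint folds (the spirit of network cross-validation), the rejection rate stays near nominal:

```python
import math
import random

T_CRIT = 2.262  # two-sided 5% critical value for Student's t with 9 df (k=10 folds)

def paired_t_rejects(diffs):
    """Return True if a paired t-test on per-fold differences rejects H0: mean = 0."""
    k = len(diffs)
    mean = sum(diffs) / k
    var = sum((d - mean) ** 2 for d in diffs) / (k - 1)
    if var == 0:
        return False
    t = mean / math.sqrt(var / k)
    return abs(t) > T_CRIT

def type1_rate(overlapping, n_nodes=1000, fold_size=500, k=10,
               trials=500, acc=0.8, seed=0):
    """Estimate how often the paired t-test falsely declares a difference
    between two classifiers that have the same true accuracy."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(trials):
        # Per-node accuracy difference between two equally accurate
        # classifiers, so the null hypothesis is true by construction.
        d = [(rng.random() < acc) - (rng.random() < acc) for _ in range(n_nodes)]
        if overlapping:
            # Overlapping samples: each fold is a random subset of the same
            # pool, so folds share nodes and their estimates are correlated.
            folds = [rng.sample(range(n_nodes), fold_size) for _ in range(k)]
        else:
            # Disjoint folds: each node appears in exactly one test set,
            # so the per-fold estimates are independent.
            order = list(range(n_nodes))
            rng.shuffle(order)
            step = n_nodes // k
            folds = [order[i * step:(i + 1) * step] for i in range(k)]
        fold_diffs = [sum(d[i] for i in f) / len(f) for f in folds]
        rejections += paired_t_rejects(fold_diffs)
    return rejections / trials

overlap_rate = type1_rate(overlapping=True)
disjoint_rate = type1_rate(overlapping=False)
print(f"Type I error, overlapping samples: {overlap_rate:.3f}")
print(f"Type I error, disjoint folds:      {disjoint_rate:.3f}")
```

Note that this sketch only captures the overlap effect; in real within-network evaluation the link structure induces additional correlation among instances, which the abstract identifies as a second source of inflation.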


Keywords: Relational learning · Collective classification · Statistical tests · Methodology

Copyright information

© The Author(s) 2010

Authors and Affiliations

  • Jennifer Neville (1)
  • Brian Gallagher (2)
  • Tina Eliassi-Rad (3)
  • Tao Wang (4)

  1. Departments of Computer Science and Statistics, Purdue University, West Lafayette, USA
  2. Lawrence Livermore National Laboratory, Livermore, USA
  3. Department of Computer Science, Rutgers University, Piscataway, USA
  4. Department of Computer Science, Purdue University, West Lafayette, USA