Correction

While reproducing the experiments we had previously conducted for the Article Classification Task (ACT) of the BioCreative III Challenge (BC3), we discovered two errors in our reported results:

  1. When computing the performance of two of our four classifiers (VTT3 and VTT5) on the test data, information from class labels was indirectly utilized. This accidental contamination occurred via the additional named entity recognition (NER) features included in these two classifiers, so the performance we previously reported for them on the test data is higher than it should be (see the first sketch after this list). The problem applies only to the test runs of VTT3 and VTT5; the performance reported on the training data for all classifiers, and on the test data for the other classifiers, remains correct and was not affected by this issue.

  2. The values of the Area Under the interpolated Precision/Recall Curve (AUC iP/R) performance measure for the test data were reported as lower than their true values. This occurred because the official BC3 evaluation script uses the classifier confidence values only if the corresponding option is enabled, which we had not done; without the confidence values, the ranking they induce is discarded and the measure is underestimated (see the second sketch after this list).
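
To make the contamination pattern in item 1 concrete, below is a minimal sketch in Python, assuming a hypothetical label-dependent NER feature; the data and function names are ours for illustration and do not reproduce the actual feature pipeline of [1]. The point is that any feature statistic computed from class labels must be estimated on the training split alone; pooling in the test labels, even indirectly, inflates the measured test performance.

```python
import math
from collections import Counter

# Toy stand-ins for NER-derived term lists per abstract; labels are
# 1 (PPI-relevant) / 0 (not relevant). All data here is made up.
train_docs = [["p53", "binds"], ["kinase"], ["binds", "mdm2"], ["cell"]]
train_labels = [1, 0, 1, 0]
test_docs = [["p53", "mdm2"], ["cell", "kinase"]]
test_labels = [1, 0]  # usable for evaluation only, never for features

def log_odds_weights(docs, labels):
    """Per-term log-odds weight estimated from labeled documents.
    Because it depends on class labels, it must be fit on the
    training split only."""
    pos, neg = Counter(), Counter()
    for terms, label in zip(docs, labels):
        (pos if label == 1 else neg).update(terms)
    vocab = set(pos) | set(neg)
    # Additive smoothing keeps unseen terms finite.
    return {t: math.log((pos[t] + 1.0) / (neg[t] + 1.0)) for t in vocab}

def score(terms, weights):
    return sum(weights.get(t, 0.0) for t in terms)

# Correct protocol: estimate label-dependent weights on training data.
weights = log_odds_weights(train_docs, train_labels)
print([round(score(d, weights), 2) for d in test_docs])

# Contaminated variant (the kind of bug described in item 1): pooling
# the test labels into the estimate leaks label information into the
# test-time features and inflates the measured test performance.
# weights = log_odds_weights(train_docs + test_docs,
#                            train_labels + test_labels)
```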
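Similarly, for item 2, the following sketch computes AUC iP/R under its standard definition (area under the interpolated precision/recall curve). It is an illustration, not the official BC3 evaluation script; it shows why discarding the confidence values, and with them the ranking they induce, underestimates the measure for a well-ranked run.

```python
def auc_ipr(confidences, labels):
    """Area under the interpolated precision/recall curve for a run,
    using the standard interpolation p(r) = max precision at any
    recall level >= r. Assumes at least one positive label; ties in
    confidence are left in input order."""
    ranked = sorted(zip(confidences, labels), key=lambda x: -x[0])
    total_pos = sum(labels)
    tp, precisions, recalls = 0, [], []
    for i, (_, y) in enumerate(ranked, start=1):
        tp += y
        precisions.append(tp / i)
        recalls.append(tp / total_pos)
    # Interpolate precision from the right (monotone non-increasing).
    for i in range(len(precisions) - 2, -1, -1):
        precisions[i] = max(precisions[i], precisions[i + 1])
    # Integrate the resulting step function over recall.
    area, prev_r = 0.0, 0.0
    for p, r in zip(precisions, recalls):
        area += p * (r - prev_r)
        prev_r = r
    return area

labels = [0, 1, 1, 1]           # gold labels in submission order
confs = [0.2, 0.9, 0.8, 0.7]    # classifier confidences

print(auc_ipr(confs, labels))       # ~1.0 : ranking from confidences
print(auc_ipr([1.0] * 4, labels))   # ~0.75: confidences ignored
```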

Tables 5, 6, and 7 of the original paper [1], which included the affected results, have now been corrected and are attached below.

Table 5 Performance of the submitted classifiers over the test data
Table 6 Summary statistics and variation of the performance of all runs submitted to ACT on the official BC3 gold standard, including our original and our corrected runs
Table 7 Performance of top 20 reported runs for the ACT in BC3

The above issues do not affect any of the results reported for the Interaction Method Task (IMT), nor those reported in Tables 1–4 for the ACT.

The corrected results do change some of the conclusions we drew in the original paper regarding the ACT, as follows:

  1. There is a substantial improvement in the ranking and classification of articles relevant to protein-protein interaction when the ABNER NER tool [2] is applied to abstracts; this can be seen by comparing the performance of VTT0 (no NER tools) with VTT1 (using ABNER) in Table 5. However, applying the additional NER tools NLProt [3] and OSCAR 3 [4] to abstracts yields only minor gains, as can be seen by comparing VTT1 (using ABNER) with VTT3 (using ABNER, NLProt, and OSCAR 3) in the corrected Tables 5 and 7.

  2. Including the partially available full-text NER data, as reported in the original paper [1], does not improve classification; indeed, it hinders the performance of the VTT classifier. As can be seen in the corrected Table 5, VTT3 (without full-text NER features) outperforms VTT5 (with additional full-text NER features extracted with ABNER and the PSI-MI ontology [5]) on all performance measures except accuracy. Therefore, instead of the approximately 3% improvement we previously reported, including such full-text data actually leads to a 3–5% drop in performance.

  3. Our linear classifier VTT5, which uses abstract and full-text NER features, is not our top classifier and does not outperform the best classifiers submitted to BC3. Our top classifiers are VTT3 and VTT1, which perform at approximately the same level (see Table 5). These two simple linear classifiers obtain an overall competitive result, well above the mean and the 95% confidence interval of the performance of all submissions to BC3 (see corrected Tables 5 and 6). However, as can be seen in the corrected Table 7, using the rank product of the four main performance measures (sketched below), these two classifiers rank 19th and 20th among the 59 runs submitted to BC3, including our own original and post-challenge runs. Based on these results, our team ranks 6th among those participating in the ACT.
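
As a reference for how Table 7 is ordered, the sketch below computes a rank product over four measures; the measure values here are made up, and the real tabulation uses the official BC3 figures for all 59 runs. Each run is ranked on each measure separately (rank 1 = best), and runs are ordered by the product of their four ranks, lower being better.

```python
# Hypothetical (accuracy, F-score, MCC, AUC iP/R) values per run.
runs = {
    "run_A": (0.89, 0.61, 0.52, 0.65),
    "run_B": (0.87, 0.64, 0.55, 0.68),
    "run_C": (0.85, 0.58, 0.47, 0.60),
}

def rank_products(runs):
    """Product of a run's per-measure ranks (rank 1 = best value).
    Ties are broken arbitrarily here; a careful tabulation would
    average tied ranks."""
    names = list(runs)
    n_measures = len(next(iter(runs.values())))
    products = {name: 1 for name in names}
    for m in range(n_measures):
        ordered = sorted(names, key=lambda r: -runs[r][m])
        for rank, name in enumerate(ordered, start=1):
            products[name] *= rank
    return products

# Order runs by ascending rank product (best overall first).
for name, rp in sorted(rank_products(runs).items(), key=lambda kv: kv[1]):
    print(name, rp)
```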

Along with the original submission [1], we provided a URL to demos that include all data used in the challenge; the errors reported above were also present in the demo code. At the same URL, we now provide updated demos in which these errors are corrected (http://cnets.indiana.edu/groups/casci/piare).