A Joint Human/Machine Process for Coding Events and Conflict Drivers
Constructing datasets to analyse the progression of conflicts has been a longstanding objective of peace and conflict studies research. In essence, the problem is to reliably extract relevant text snippets and code (annotate) them using an ontology that is meaningful to social scientists. Such an ontology usually characterizes types of violent events (killing, bombing, etc.), the underlying drivers of conflict, or both; the drivers are themselves hierarchically structured, for example into security, governance and economics, each subdivided into conflict-specific indicators. Numerous coding approaches have been proposed in the social science literature, ranging from fully automated "machine" coding to manual human coding. Machine coding is highly error-prone, especially when labelling complex drivers, and tends to extract duplicated events; human coding is expensive and suffers from inconsistency between annotators. Hybrid approaches are therefore required. In this paper, we analyse experimentally how human input can most effectively be used in a hybrid system to complement machine coding. Using two newly created real-world datasets, we show that machine learning methods improve on rule-based automated coding for filtering large volumes of input, and that human verification of relevant/irrelevant text improves the performance of machine learning for predicting multiple labels in the ontology.
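The two-stage hybrid pipeline described above can be sketched in code: a binary classifier filters the large input stream for relevance, and a multi-label classifier then assigns ontology labels to snippets that humans have verified as relevant. This is a minimal illustrative sketch using scikit-learn; the toy snippets, driver labels, and model choices (TF-IDF features with logistic regression) are assumptions for illustration, not the paper's actual datasets or configuration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Toy corpus: text snippets, relevance flags, and (hypothetical)
# conflict-driver labels drawn from a small ontology.
snippets = [
    "Armed group attacked a village, several civilians killed",
    "Government announced new taxation policy amid protests",
    "Football match ended in a draw on Sunday",
    "Bombing near the market disrupted trade and security patrols",
    "Local elections postponed over governance disputes",
    "New restaurant opened downtown last week",
]
relevant = [1, 1, 0, 1, 1, 0]          # human-verified relevance flags
driver_labels = [
    {"security"},
    {"economics", "governance"},
    set(),
    {"security", "economics"},
    {"governance"},
    set(),
]

vec = TfidfVectorizer(ngram_range=(1, 2))
X = vec.fit_transform(snippets)

# Stage 1: relevance filter trained over the full input stream.
filter_clf = LogisticRegression(max_iter=1000).fit(X, relevant)

# Stage 2: multi-label driver coding, trained only on the snippets
# humans verified as relevant (the verification step the paper shows
# improves downstream multi-label performance).
keep = [i for i, r in enumerate(relevant) if r == 1]
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform([driver_labels[i] for i in keep])
X_rel = X[keep]
coder = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X_rel, Y)

# Predicted driver-label sets for the relevant snippets.
pred = mlb.inverse_transform(coder.predict(X_rel))
print(pred)
```

In practice, stage 1 would replace or complement rule-based filters over the full news stream, and human effort would be concentrated on verifying the relevance decisions before stage 2 is trained.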
This work was supported by the Data to Decisions Cooperative Research Centre. We are grateful to Josie Gardner for labelling the ICG DRC dataset, and to Michael Burnside and Kaitlyn Hedditch for coding the AfPak event data.