Investigating the force multiplier effect of citizen event reporting by social simulation
Cite this article as: Kramer, M.A., Costello, R. & Griffith, J. Mind Soc (2009) 8: 209. doi:10.1007/s11299-009-0059-0
Citizen event reporting (CER) attempts to leverage the eyes and ears of a large population of “citizen sensors” to increase the amount of information available to decision makers. When deployed in an environment that includes hostile elements, foes can exploit the system to exert indirect control over the response infrastructure. We use an agent-based model to relate the utility of responses to population composition, citizen behavior, and decision strategy, and measure the result in terms of a force multiplier. We show that CER can lead to positive force multipliers even with a majority of hostile elements in the population. When reporter identity is known, a reputation system that keeps track of trustworthy reporters can further increase the force multipliers.
Keywords: Agent-based modeling · Civil violence · Neighborhood watch · Citizen event reporting
Neighborhood Watch (http://www.usaonwatch.org). Founded by the US National Sheriff’s Association, USAonWatch.org provides resources to communities interested in starting or improving Neighborhood Watches.
Crime Stoppers International (http://www.c-s-i.org). This organization takes tips and gives rewards to assist criminal investigators. The organization claims 700,000 arrests and seizure of $9 billion in stolen property and illicit drugs worldwide through the end of 2008.
DropaDime.net (http://www.dropadime.net) is a web-based, central location for reporting any type of crime or fraud.
WeTip (https://www.wetip.com) is a US nonprofit providing anonymous phone answering and tip collection service concerning school safety, corporate ethics, illegal drugs, and more, claiming 15,000 cases solved and 8,000 criminals convicted since 1972.
While CER itself is not new, widespread wireless communication has facilitated a host of new reporting possibilities, such as on-the-spot pictures or video, geo-location data, mobile applications, map-based visualizations, and real-time peer communication. Social media such as Twitter produced timely and detailed information in the recent Mumbai terror attacks (Beaumont 2008), which could help guide first responders. Some of the emerging application areas for CER include environmental monitoring (http://www.cybertracker.org), disaster response (http://www.instedd.org; Schellong 2006), election monitoring (Kinkade and Verclas 2008), and disease monitoring (Georgia Division of Public Health 2008).
Another promising application of CER emerges in the context of asymmetric warfare. Unlike traditional force-on-force warfare, in asymmetric warfare the enemy is blended into the population at large, emerging only momentarily to mount attacks. This strategy presents severe challenges for traditional information gathering systems, which rely on sophisticated sensors to detect troops, facilities, and equipment. While effective against fixed facilities, large mobile equipment, and traditional forces, these techniques are less likely to detect hostile individuals and loosely-coupled groups.
The conflicts in Iraq and Afghanistan have highlighted the need to gather information known primarily by the local populace. Of particular concern is detection of improvised explosive devices (IEDs), bomb-making facilities, location of hostages, and identification of militant insurgents and terrorists. Soldiers on patrol are highly motivated to locate bomb-makers and IEDs; however, civilians far outnumber troops in urban battlefields. Civilian reports on suspicious activities and devices represent a potentially significant force multiplier.1
There are many challenges to making CER work in hostile situations, where foes constitute a significant subgroup in an otherwise friendly or malleable population. The first challenge concerns citizen participation, which may be discouraged through social pressure, threats, and intimidation. Traditional CER systems rely both on explicit incentives (monetary rewards) and social/indirect incentives (e.g., reduction in neighborhood crime) to encourage participation. Many CER systems also assure safe, anonymous reporting to reduce fear of reprisal, the need to speak directly to police, and potential legal entanglements. While traditional incentives should be helpful in hostile situations, they may not necessarily be sufficient to cause an occupied population to cooperate with an occupying force. While important, issues of participation, incentives, and rewards are not discussed further in the current paper.
To harm first responders. An insurgent wishing to harm first responders or degrade the emergency response system might use the CER system to lure troops into ambushes, or attack them with roadside bombs (Stojanovic 2004). Misinformation designed to evoke the desired response from first responders might take the form of a single report or a coordinated series of staged reports from colluding individuals.
To harm a third party. Clans in Iraq and Afghanistan have attempted to exploit US troops by falsely reporting that another clan was an insurgent group, provoking a military reaction against the rival clan (Associated Press 2008).
To create a decoy. This occurs when a false report is created to draw attention and resources away from an area where a hostile act is planned.
Throughout this paper, we assume that citizen reports are called into an operations center by phone (via voice, text, or mobile application) or computer. At the operations center, a decision maker (DM) decides whether to dispatch resources. We assume each decision has a measurable outcome in terms of costs and benefits. For example, a response might require manpower, equipment, and supplies, and temporarily make these resources unavailable for other tasks, entail risk (endangerment of the first responders), and offer benefits in terms of protecting the peace, apprehending a suspect, or providing aid. We capture the essence of the problem by assuming the DM seeks to maximize the net utility (the total benefit less the total cost) of all responses.
What percentage of foes in a population will undermine the CER approach through collection of too many false or misleading reports?
What decision strategies minimize the impact of untrustworthy reports?
Is it necessary to require reporters to reveal their identity, or is anonymous reporting sufficient?
Our approach was to create a social simulation that includes citizens (both friends and foes), soldiers, and DMs. In creating our model, we drew on several previous studies. Epstein et al. (2001) modeled civil violence in which a central authority tries to suppress a rebellion, or quell violence between two subgroups. Epstein’s citizens harbor a “grievance” which is a combination of hardship and the citizen’s perception of the illegitimacy of the central authority. Citizen actions are also governed by a personal risk tolerance. Goh et al. (2006) took this a step further by allowing citizens and cops to evolve new behavior by random mutation and selection, allowing successful strategies to spread via learning. Doran (2006) identified positive feedback between insurgent numbers, military success, and recruitment. Other related studies have used agent-based models to explore crime prevention (Groff and Birks 2008; Vasconcelos and Furtado 2005).
Recent news reports have suggested that increased security and a reduced threat of retaliation have led to a sudden increase in the number of Iraqi citizens coming forward with tips in Shiite areas of Baghdad (Gamel 2008). This suggests there may be a “tipping point” characterized by a sudden transition from a state where there are few citizen reports, to one where there are many citizen reports. The feedback mechanism could be that citizens will report when they observe other citizens collecting incentive payments while remaining free of retaliation.
2 Citizen event reporting model
The citizen considers the costs, benefits, and risks, including the possibility of retaliation and the amount of good (or ill) will felt towards authorities (stemming from belief in the legitimacy of the authorities, grievances and amity from past interactions) as well as influences from peer groups. The DM, on the other hand, must decide whether to respond by estimating the likelihood of the event reported, and by considering the risks and benefits of responding, and the costs and benefits of delaying a response to gather more information. Our initial focus has been the DM, and thus our approach has been to create a simple model of citizen behavior in terms of observing and reporting incidents, and use that to explore decision parameters.
2.1 Simulation overview
To simplify the problem, there is only one type of event to be reported, a “fire”, which stands in for many types of threats. The world is a two-dimensional torus whose surface is divided into a rectangular grid. Fires can start spontaneously in any grid location, and they burn out if they are not extinguished within a set period of time. Citizens and soldiers traverse the environment and can report a fire if they come within a certain distance of it. Reports have a grid location, a time, and an optional reporter identity. Citizens cannot report the absence of a fire at a location, but can falsely report a fire where there is none. The DM has access to all reports and decides when and where to respond according to a command and control (C2) strategy. We assume there are no resource constraints on the number of fires that can be simultaneously addressed, but each response carries a risk penalty.
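The world model above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the grid dimensions, vision radius, and turn probability are assumed values, and the dictionary-based agent representation is our own.

```python
import random

GRID_W, GRID_H = 50, 50   # torus dimensions (assumed; not given in the paper)
VISION = 3                # how close an agent must be to observe a fire (assumed)
TURN_PROB = 0.2           # per-step probability of changing direction (assumed)

def step(agent):
    """Move the agent one cell; the grid wraps around (torus topology)."""
    if random.random() < TURN_PROB:
        agent["heading"] = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
    dx, dy = agent["heading"]
    agent["x"] = (agent["x"] + dx) % GRID_W
    agent["y"] = (agent["y"] + dy) % GRID_H

def can_observe(agent, fire_x, fire_y):
    """True if the fire lies within the agent's vision, using torus distance."""
    dx = min(abs(agent["x"] - fire_x), GRID_W - abs(agent["x"] - fire_x))
    dy = min(abs(agent["y"] - fire_y), GRID_H - abs(agent["y"] - fire_y))
    return max(dx, dy) <= VISION
```

Because all agent types share the same movement and vision rules, one function serves soldiers, friends, and foes alike.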
2.2 Utility function
Extinguished Fires (E). Represents fires that were extinguished by first responders, either due to soldier or citizen reports.
Burned-out Fires (B). Represents fires that burned out by themselves, either because they were not observed, they were observed but not reported, or because the DM failed to respond to report(s).
False Alarms (F). Represents unnecessary responses by first responders, where fires were suspected, but none were found.
The basic form of the utility function (Eq. 1) is U = E − bB − fF, where the parameter b represents the cost of a fire left to burn (a type II error), and f represents the cost of a false response (a type I error). The utility of putting out a fire is assumed to be 1, without loss of generality. In the simulations reported here, we used f = b = 1.
Note that, consistent with our definition, the force multiplier is 1 if there are no reports or if the DM ignores citizen reports, and greater than 1 if the number of fires put out due to CER (Ec) exceeds the penalty for false responses (fF).
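A minimal sketch of these quantities follows. The utility follows the definition above; since the paper's formal force-multiplier equation is not reproduced here, the form below, with E_s denoting fires extinguished due to soldier reports alone, is an assumption chosen to satisfy exactly the two properties just stated (equal to 1 with no citizen contributions, greater than 1 when Ec exceeds fF).

```python
def utility(E, B, F, b=1.0, f=1.0):
    """Net utility (Eq. 1): extinguished fires minus burn-out and
    false-alarm penalties."""
    return E - b * B - f * F

def force_multiplier(E_s, E_c, F, f=1.0):
    """An assumed force-multiplier form consistent with the stated
    properties: equals 1 when E_c = F = 0, and exceeds 1 exactly
    when the CER contribution E_c outweighs the penalty f*F."""
    return (E_s + E_c - f * F) / E_s
```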
2.3 Agent behavior
We define two types of citizens: cooperative citizens (friends) and hostile citizens (foes). Although in reality there may be neutral parties, conflicting allegiances, and changes in loyalty over time, for simplicity we restrict the model to these two categories. The model also includes soldier agents. The central DM is an agent, but one without a geographical location. We do not explicitly model first responders, since their behavior is simply to put out a fire when requested by the DM, and we assume the DM has as many first responders as needed. Soldiers, friends, and foes are randomly placed on a rectangular grid and move using the same strategy: all agents move at the same speed and have the same probability of changing direction. All agents also have the same “vision”, which determines how close they need to be to a fire to observe it. Since there are a large number of parameters in the model, and time and manpower were limited, some parameters (including vision and movement strategy) remained fixed for all experimental runs, and their influence on the results is not known.
When a fire is observed, an agent may report it. Soldiers report every fire they see. Friends report fires they see with a probability of less than 1, a parameter we refer to as diligence. Foes report fires they see with low probability (zero in our simulations).
Citizens can also make false reports. A false report occurs when a citizen reports a fire at a location where there is none. We assume soldiers do not make false reports (although in reality, soldiers may not be perfectly reliable witnesses and their perceptions can be faulty if they are deceived or do not understand what they are sensing in the environment). Friends have a low probability of false reports (assumed to be zero in most of our simulations). Foes account for most false reports, generating them at random time intervals, controlled by a falsehood probability parameter.
Foes also can collude. If there are other foes in the vision of the caller making a false report, then they can be recruited to make a false report as well, using the location of the original caller, thus reinforcing the impression of a real fire. In the simulation, another probability controls whether the original caller attempts collusion.
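The reporting behavior described in the preceding paragraphs can be sketched as follows. The function names, agent representation, and default parameter values are illustrative assumptions, not the paper's calibrated settings.

```python
import random

def maybe_report(agent, fire_visible, diligence=0.5, falsehood=0.05):
    """Return a (x, y, reporter_id) report, or None.

    fire_visible is the (x, y) of an observed fire, or None if no fire
    is in view. Soldiers report every fire they see; friends report with
    probability `diligence`; foes never report real fires but may issue
    a false report (at their own location) with probability `falsehood`.
    """
    if fire_visible is not None:
        p = diligence if agent["type"] == "friend" else 0.0
        if agent["type"] == "soldier":
            p = 1.0
        if random.random() < p:
            return (fire_visible[0], fire_visible[1], agent["id"])
    elif agent["type"] == "foe" and random.random() < falsehood:
        return (agent["x"], agent["y"], agent["id"])
    return None

def collude(caller_report, nearby_foes, collusion_prob=0.5):
    """If the original caller attempts collusion, foes within its vision
    echo the same (false) location, reinforcing the impression of a fire."""
    if random.random() >= collusion_prob:
        return []
    x, y, _ = caller_report
    return [(x, y, foe["id"]) for foe in nearby_foes]
```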
2.4 Decision maker and citizen reputation
The objective of the DM is to maximize the utility defined in Eq. 1, by correctly identifying false reports and responding only to real fires. Since soldiers are considered to be completely reliable witnesses, the DM will always respond on the first soldier report. For citizen reports, more sophisticated strategies are required. We consider two main scenarios: with and without reporter identity.
When reporting is anonymous, the DM’s decision has to be based entirely upon the number of reports at a grid location over a period of time. In this scenario, calls are considered inactive if they are older than a time threshold set by the DM. The strategy can be expressed in terms of the number of active calls at a grid location necessary to trigger a response. In our simulation, all locations are treated equally, though this is not a fundamental restriction of our model. Modeling neighborhoods with different population compositions and fire propensities remains for future work.
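The anonymous-reporting strategy amounts to a threshold rule over active reports. A minimal sketch, in which the threshold N, the expiry age, and the report representation are illustrative assumptions:

```python
def should_respond(reports, cell, now, N=2, max_age=10):
    """Anonymous-reporting C2 strategy: respond to a grid cell once the
    number of active (not-yet-expired) reports there reaches N.
    Reports older than max_age time units are considered inactive."""
    active = [r for r in reports
              if (r["x"], r["y"]) == cell and now - r["time"] <= max_age]
    return len(active) >= N
```

Setting N = 1 recovers the always-respond strategy appropriate for a friendly environment; a very large N effectively ignores citizen reports.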
When reporter identity is known, the DM can keep track of reputations. When the DM sends first responders, reports will be revealed as accurate or fabricated. Each citizen agent can be tracked in terms of the number of true and false reports they contribute. The simplest reputation-based strategy is blacklisting. When a citizen is determined to have given a false report, they are added to a blacklist and future reports from that citizen are ignored. In addition, if there are multiple reports on a grid location, and one or more come from citizens on the blacklist, and the other reporters are unknown entities (meaning they have no verified reporting history), then the DM can infer that the other reporters are colluders, and add them to the blacklist, even without responding.
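The collusion inference at the end of the blacklisting rule can be sketched as follows. The set-based representation and function name are our own assumptions about how such bookkeeping might be implemented.

```python
def infer_colluders(blacklist, verified_good, cell_reporters):
    """Given the set of reporter IDs for one grid cell, return the IDs
    that can be inferred to be colluders (and hence blacklisted) without
    dispatching a response: this applies when at least one reporter is
    already blacklisted and all remaining reporters are unknown entities
    (no verified reporting history)."""
    if not (cell_reporters & blacklist):
        return set()                       # no blacklisted reporter involved
    unknowns = cell_reporters - blacklist - verified_good
    if cell_reporters - blacklist == unknowns:
        return unknowns                    # every co-reporter is unverified
    return set()                           # a verified-good reporter is present
```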
3.1 Optimal decision strategy with anonymous reporting
A set of simulations was run to determine the optimal decision strategy with anonymous reporting. Without caller identity, the DM can only make decisions based on the number of reports in a grid location in a given time period. Using the experimentation method shown in Fig. 3, we were able to determine the optimal number of calls needed to trigger a response for any given set of simulation parameters. We explored four parameters: the number of CER participants, the fraction of foes among participants, the probability of fire per unit of time, and the probability of collusion.
In a friendly environment (no foes), the DM should always respond to the first report of an incident (N = 1).
As the percentage of foes increases, there is a point where the best strategy is to never respond to reports (N ≫ 1). The best strategy may flip suddenly from N = 1 to N ≫ 1, or N may gradually increase. Gradual increases tend to be observed when the probability of incidents is low.
All other things being equal, as the number of participants increases, the DM should require more confirmatory reports (N > 1).
When foes do not collude, in many cases, the best strategy is to respond when the second report is received (N = 2). This is because a false report has a very low likelihood of being confirmed, unless foes are colluding. However, with low citizen participation, the best strategy may be to respond to the first report (N = 1).
When there is collusion, N can exhibit complex, non-monotonic behavior (Fig. 5d). The reasons for this are not completely understood.
3.2 Are more citizen reporters better?
Most CER programs strive for high levels of citizen participation under the assumption that more “feet on the street” are better. Our simulation shows that this intuition is sometimes false. For example, with a fixed percentage of foes (50%), and using the simple N = 1 strategy, the utility passes through a maximum (0.57) at around 600 participants (300 friends, 300 foes). If 400 additional participants are recruited (200 friends and 200 foes), the effectiveness of the system declines by 18%.
The explanation is that as participation increases, the number of true reports goes up, but only towards a bound set by the number of incidents. The number of false reports, on the other hand, increases in proportion to the number of participants, since there is a fixed probability for each foe to make a false report. As participation rises beyond a certain point, the DM becomes increasingly beset by false reports, without a compensating increase in the amount of valid information. In the above example, the addition of 400 citizens increases the number of fires put out by only ten percentage points (from 86 to 96% of all fires), but the number of false alarms increases by 67%.
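A toy model (not the paper's calibrated simulation) reproduces this saturation effect: the probability that an incident is caught by at least one friend approaches 1, so true reports plateau near the incident count, while false reports grow linearly with the number of foes, producing a utility peak at an intermediate participation level. All parameter values below are illustrative assumptions.

```python
def toy_utility(n_participants, foe_frac=0.5, incidents=100.0,
                detect=0.005, false_rate=0.3, f=1.0):
    """Illustrative utility vs. participation under assumed parameters:
    each friend independently catches an incident with probability
    `detect`; each foe contributes `false_rate` false reports."""
    friends = n_participants * (1 - foe_frac)
    foes = n_participants * foe_frac
    # Probability an incident is caught by at least one friend saturates:
    p_caught = 1.0 - (1.0 - detect) ** friends
    E = incidents * p_caught          # true responses, bounded by incidents
    F = foes * false_rate             # false alarms, linear in participation
    return E - f * F
```

With these assumed parameters the utility rises, peaks, and then declines as additional participants add false alarms faster than valid information.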
3.3 How many citizens equal 100 soldiers?
Another interesting result concerns the number of citizens required to provide the same utility as a given number of soldiers. We calculated this based on 100 soldiers operating alone (U = 0.493), and then determining how many citizens it takes to achieve the same utility, with different percentages of foes in the population.
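Methodologically, this calculation reduces to searching for the smallest participation level whose simulated utility reaches the soldiers-only baseline. The sketch below is an illustrative framing, with the utility function supplied externally (e.g., by repeated simulation runs).

```python
def citizens_matching(target_utility, utility_fn, n_max=20000):
    """Smallest number of citizen participants whose utility reaches the
    target (e.g., U = 0.493 for 100 soldiers alone). Returns None when
    the target is never reached, as happens when the foe fraction is
    too high for CER to be effective at any participation level."""
    for n in range(1, n_max + 1):
        if utility_fn(n) >= target_utility:
            return n
    return None
```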
3.4 Exploiting caller identity with a reputation system
The DM must evaluate the believability of reports to identify false reports. One of the most important types of evidence involves caller identity, which could be linked to the cell phone, the caller, or both. Cell phones are identified by the phone number (defined by the SIM card), Bluetooth ID (for phones so enabled), and potentially a software serial number. The user can have a user ID, a password, biometric data, name, and/or address. Unfortunately, the relationship between user and phone is not permanent. Phones, as well as IDs and passwords, can be borrowed, shared, stolen, or sold. Techniques such as voice biometrics are required to tie a user unambiguously to a mobile device.
For the purposes of our simulation, we assume caller identities are either fully known or fully anonymous. These two extremes are useful for bracketing the lower and upper bounds on the performance of CER systems. When identities are known, the blacklisting reputation system (described previously) can be implemented. The reputation system uses past history to allow the DM to identify deceptive callers.
4 Conclusions and future work
In this paper, we explored the utility of CER as a means of supplementing conventional information collection. Using agent-based modeling, we showed how citizen reporting can be beneficial, even with a significant percentage of foes in the population contributing false reports. We showed that the optimal strategy of the DM depends on the number of participants, the frequency of reportable incidents, the percentage of foes, and the amount of collusion between foes. We also showed that above a certain percentage of foes, CER becomes ineffective, but that reputation systems can significantly expand the region of feasibility.
A surprising conclusion is that beyond a certain level of citizen participation, more “feet on the street” can actually reduce the effectiveness of CER, due to growth in the number of false reports. We also showed that, at least to a first approximation, it is possible to calculate the number of citizens required to replace a platoon of soldiers, to achieve the same level of decision-making capability.
We can create more sophisticated citizen models that combine the behavior of friends and foes in different proportions based on varying degrees of goodwill.
We can study strategic behavior, where a foe temporarily adopts friendly behavior to build false trust, with the intention of later exploiting that trust.
We can explore the effects of geography, multi-sided conflicts, or tribal, social, or religious affiliations.
We can create strategies for the DM that apply in situations where the percentage of foes, degree of collusion, and other parameters necessary to choose the optimal decision strategy are unknown.
We can explore decision rules applicable to partial or ambiguous reporter identity, perhaps exploiting “outside information” to corroborate or falsify reports.
We can consider the effect of multiple types of events, limitations in the number of first responders, and varying risk and benefit parameters.
A force multiplier is any method or technology that increases the effectiveness of conventional troops. A formal definition in the context of CER will follow.