Abstract
Crime statistics require a radical transformation if they are to provide transparent information for the general public, as well as to support police operational decision-making. This statement provides a blueprint for such a transformation.
Summary
The best way to count crime is to assign a weight to the harm caused by each crime, rather than counting all crimes as if they were created equal. This can be done by taking the days of imprisonment recommended by sentencing guidelines for each crime type, multiplying them by the number of crimes of each type reported by victims or witnesses, and then summing the weighted values across all crime types to give total crime harm. Total harm can also be calibrated by any other democratically legitimate method of assigning harm levels for each crime category in relation to all others. Any method using sentencing guidelines based on the harm caused by an offence to victims, without regard to the offender’s prior record or other circumstances, meets the standard of the Cambridge Crime Harm Index (Sherman et al. 2016; Footnote 1). Other methods of weighing harm for each crime can apply the same principles outlined in this statement.
By using a single sum of weighted crime harm across all crime types, for each year in each community, governments can offer the public a more reliable indicator of their safety. The Crime Harm Index (CHI) would also provide a clearer indicator by leaving out crimes not reported by the public at large, such as police-initiated investigations of human trafficking and narcotics crimes, as well as crimes such as big-store shop-theft that are detected by a company’s security staff. The clarity comes from reporting proactive investigations separately, since they measure varying levels of investment in detection rather than actual crime levels. A CHI also leaves out crimes reported in the current year but which occurred in a prior year, because the existing system of counting crimes when they are reported, even years after they occurred, distorts the measurement of current public safety for which police are held accountable.
The single CHI sum also lends clarity to other relevant indicators, such as the “detection” rate, which currently treats all crimes as created equal. It allows police to invest scarce resources in proportion to the harm of each offence type, by showing the public the proportion of all harm for which police bring offenders to justice. A Harm Detection Fraction (HDF) would use the CHI to give the fact that over two-thirds (67%) of murders are detected far more weight than the low detection rate for vandalism (Office of National Statistics 2019). Similarly, a Proactive Policing Index (PPI) would use the CHI to give credit, rather than blame, to police for detecting hidden slavery and organized crime. The annual national and local crime reports recommended by this Consensus Statement comprise the following seven statistical series, to be calculated consistently from each year to the next:
A Crime Harm Index (CHI) for crimes against victims in the current year.
Crime counts by all crime categories, used to calculate the CHI.
A Historic Offences Crime Harm Index (HOCHI), a CHI for crimes occurring in prior years.
A Proactive Policing Index (PPI), weighted by crime type as for the CHI.
A Company-Detected Crime Harm Index (CDCHI), also weighted by CHI.
A Harm Detection Fraction (HDF), which is the proportion of CHI with police detections.
Detection rates per 100 by all crime categories, used to calculate the HDF.
This system would give the public a reliable and realistic assessment of trends, patterns and differences in public safety. It would also give police a proportionate system of incentives to manage demands for their services with a clear focus on cutting the harm from crime, and not just the high volume of low-harm crimes counted equally. It offers a “bottom-line” for crime, like the profits of a business: a clear metric that untangles the current confusion about what the profusion of crime statistics really means for the general public.
Objectives
There are several different objectives for counting crime. Foremost among them is
- 1.1.
To provide the public with a single reliable measure of how much harm crime victims are suffering in their communities and nation, and how that harm varies—over time and across communities, by year of occurrence. Such a measure would embrace all crime categories in a meaningful way, one that relates to public perceptions of harm.
In addition, counting crime can help the public to hold their police accountable for
- 1.2.
The amount of crime that police discover proactively through their own initiatives: the crimes that are not reported to police by victims or witnesses.
- 1.3.
The proportion of harm that is reported to police by victims or witnesses for which police have held individual offenders to account by identifying them and bringing them to justice.
- 1.4.
The percentage of crimes reported to police for which police bring one or more offenders to justice.
- 1.5.
How police manage their workload of crimes reported to them by victims and witnesses in each specific year, including historical offences, without confusing that workload with the measurement of current-year crime harm levels.
Problems
There are numerous problems with the current system of counting crime in most contemporary democratic societies. Foremost among them is the absence of a bottom line for crime (Sherman 2007, 2011, 2013; Sherman et al. 2016; see also Ignatans and Pease 2015).
Cutting Crime: the Bottom Line
Crimes Are Not Created Equal, but They Are Counted as if They Were
Any publication of a single measure of crime that treats crime types as equal is misleading. Such methods as the current UK “offence rate per 1,000 population” provide a metric of public safety that gives equal value to a murder and to the theft of a bicycle. This practice can show “crime going down” even if murders rose by 10,000%, as long as thefts dropped by 50%, because the volume of theft is so much higher than that of murder. The current method of counting crime fails to provide a reliable measure of the level of harm to victims across and within communities over time.
Using Actual Sentencing Data to Measure Harm to Victims Is Unreliable due to the Influence on the Sentencing Decision of Each Offender’s Prior Criminal History
The UK Office of National Statistics (ONS) has provided a “Crime Severity Score” (CSS) for each offence type since 2017, based on the average sentence for each crime on a long list of offence categories (Footnote 2). This effort is commendable, and has advanced a national conversation about the misleading nature of counting all crime types as if they were equal.
Yet the CSS metric is flawed because it does not focus on harm to a victim. The actual sentences are heavily offender-focused and far from being victim-focused. Sentencing guidelines, in fact, require judges to take into account the substantial effect of each defendant’s prior criminal record as an “aggravating factor.” This means that, on average, the longer and more serious a defendant’s criminal history, the longer the sentence for each new crime should be, by law. Since 90% of sentences in England and Wales are passed on persons with one or more prior convictions (Ministry of Justice 2010, p. 75), the use of average sentences cannot be seen as a reliable measure of harm to victims. The most dramatic evidence of this claim is the controversy over the CSS scoring sexual assaults of children differently depending on whether the victim is male or female—which is how judges sentence such cases but which is not justified by any sentencing principle.
A murder victim is just as dead when they are killed by a first offender as by a repeat offender. Yet the sentence for the murder may be higher for the latter. The effect of differences in average sentences based on prior offending can even vary by offence type, thereby raising penalties in some categories relative to others based solely on offender characteristics. This is a far cry from the victim-focused principle of defining crime harm by relative penalties for the crimes themselves (Sherman 2013; Sherman et al. 2016).
In addition to these considerations, there is a separate problem of mixing crimes detected by proactive policing into the formula for a bottom line aimed at measuring harm to victims.
Proactive Policing: Outputs, Not Outcomes
Police-Discovered Crimes Cannot Provide a Reliable Measure of Harm to Victims
Because they vary by the level of police resources allocated to discovering each category of crime. They are the outputs of police work to prevent crime, rather than the outcome of public safety that we want police to achieve. The count of dangerous driving crimes, for example, may vary with the number of police assigned to traffic enforcement; fewer traffic deaths and serious injuries are the outcome police try to achieve through the output of dangerous driving arrests. Similarly, the number of drug possession crimes varies with the number of police assigned to drug enforcement, which may drop in the wake of a terrorist attack. Because these counts of proactively detected crimes vary with the level of other demands on police time, any counting system based on police-discovered crimes will be inconsistent and unreliable. Crime counts that combine that information with victim-reported crimes (like burglaries) mislead the public about whether public safety is rising or falling.
Proactive Detection of Crime Is a Measure of Positive Police Action That Is “Punished” by Current Counting Rules
When police detect more people carrying weapons illegally by investing more time in looking for weapons, research suggests police are making communities safer. Yet counting each success in finding a knife as a “crime” creates blame for police, rather than credit: “Knife crime is up!” is the headline, rather than “Knife arrests are up.” Counting each knife-carrying arrest as a “crime” that harms victims creates a perverse disincentive for any police force trying to “cut crime” under the current system of measurement. Similarly, when police discover modern slavery and rescue people from captivity, they should be rewarded with credit, rather than taking the blame for harm that would not otherwise have been counted.
Crimes Proactively Detected by Corporations, such as Retail Theft, are also an Unreliable Indicator of Public Safety
Like proactive policing, corporate security actions reflect a decision to invest in the detection of an offence category, a decision that can fluctuate widely over time within companies depending on corporate profits or losses. When companies call police to arrest and charge a shoplifter detained by a corporate security investigator, the police are providing a service paid for by the public. Counting such services as police products is appropriate. Counting them as crimes cannot produce a consistent measure of how safe people are in their communities.
The current system also confuses the public about current levels of public safety by counting crimes in the current year that actually occurred many years earlier.
Historic Offences: then, not now
Counting Crimes in the Year They Are Reported Distorts the Measurement of Current Public Safety, Especially if Crime Harm Is the Focus of Public Safety
If crimes are counted according to the harm they cause, then delayed reporting of historic crimes distorts the measure of public safety even more than the current system does. The current system is bad enough, as we see from recent increases in the UK in reports by adults of serious sexual abuse they experienced as children. These cases are extremely serious and demand police attention. Yet like the offences police proactively detect, these offences belong in a category measuring police outputs and workload. They reflect a changing cultural context, not a trend of greater danger to our children at the present time.
A final problem for counting crime is the calculation of crimes “solved,” with offenders brought to justice.
Detections by Crime Category: Statistics out of Context
Computing Detection Rates Within Categories Conceals the Context of Overall Detection Performance by Total Harm Across Categories
Police resources to bring offenders to justice are necessarily limited. More serious crimes may require far more resources to prosecute and convict the most dangerous offenders. The implicit premise of the current system of reporting detection rates by offence is that all offences should—or could—have equal detection rates. Few would ask the police to give the same effort to investigating a burglary as to a murder. Yet when low burglary detection rates are reported, the related fact of high murder detection rates goes unmentioned (ONS 2019). Without a crime harm metric built into a single bottom line for detection, police are permanently exposed to blame for acting rationally in relation to differential harm levels across crime categories. Including a crime harm metric, on the other hand, would give police and prosecutors a greater incentive to prosecute more offenders for high-harm, low-detection offences, such as rape.
Unless these problems are remedied, the public will not be able to tell whether their police have “cut crime” and “made the streets safer.” The current system provides neither a valid nor a reliable system for measuring crime and police performance. It even distorts the counting of effective police practices and strategies, creating disincentives for police to make the streets truly safer.
These problems can be solved.
Solutions
The solutions to these problems begin with positive actions for counting crime more usefully. They end with the ways not to count crime: the steps governments should take to end current practices that hinder efforts to create greater public safety.
Seven Statistical Series for Counting Crime Usefully
The following seven statistical series are the essential tools needed to count crime usefully, based entirely on reports to and by the police. These seven series exclude victimization surveys, such as the Crime Survey for England and Wales (CSEW). Such surveys are very useful as a check on crimes reported to police. Yet they are too expensive for taxpayers to fund in every community, with sample sizes large enough to measure the rarest crimes, which cause the highest harm. The seven also exclude alternative measures of violence from local communities, including ambulance response data and treatments in hospitals or accident and emergency departments. Those measures are also useful but require further funding and regulation to be established.
These seven statistical series are all inexpensive to create and report, since they rely on existing systems of data collection and reporting. They achieve their success by re-arranging the existing statistical systems in a way that is more informative to the public and more useful for the police:
A Crime Harm Index (CHI)
A crime count by all crime categories
An Historic Offences Crime Harm Index (HOCHI)
A Proactive Policing Index (PPI)
A Company-Detected Crime Harm Index (CDCHI)
A Harm Detection Fraction of total CHI (HDF)
Detection rates per 100 by all crime categories
This section describes each of the statistical series, in a sequence from citizen-reported crimes (“A Crime Harm Index” through “A crime count by all crime categories”) to proactively detected crimes (“A Proactive Policing Index” and “A Company-Detected Crime Harm Index”) to reactive detections of citizen-reported crimes (“A Harm Detection Fraction of total CHI” to “Detection rates per 100 by all crime categories”).
A Crime Harm Index
Wherever a total count of offences is reported, with or without a rate per 1000 people, a Crime Harm Index derived from sentencing guidelines will be reported in place of that count.
The Cambridge Crime Harm Index (CCHI), which demonstrates this method, has been provided by the University of Cambridge since 2016, based on the guidelines for sentences published by the Sentencing Council of England and Wales (Footnote 3). The CCHI takes the number of days of imprisonment recommended as the “starting point” for sentencing decisions before taking into account the aggravating and mitigating circumstances of each case. This is, in effect, a sentence recommendation for a first offender whose crime featured neither aggravation in the act nor mitigating factors since the act (such as attending a restorative justice meeting with the crime victim).
Once each crime category (of over 700 listed on the Cambridge website; Footnote 4) in any jurisdiction is given a CHI score, the number of reported crimes (for 1 year) in each crime category is multiplied by that score (taken from sentencing guidelines). The result is a total CHI score for each crime category.
Once the CHI total for each crime category for 1 year (or other time period) is calculated, all of those totals across all of the categories are summed. The sum equals the total CHI score for the jurisdiction for that time period. Identical procedures could be taken within different jurisdictions (such as all 43 territorial police forces in England and Wales) or over time by years, quarters or other time periods.
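Expressed as a formula (with symbols introduced here purely for illustration), the index for one jurisdiction and time period is

$$\mathrm{CHI} = \sum_{c \in C} n_c \times w_c$$

where C is the set of crime categories, n_c is the number of crimes reported by victims or witnesses in category c during the period, and w_c is the sentencing-guideline starting point for category c in days of imprisonment.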
An example of how the CHI is calculated can be taken from a hypothetical village, which in one year suffered the following count of crimes by category:
100 bicycle thefts
20 burglaries
2 murders
Total = 122 crimes
With this count, the CHI can be calculated by multiplying the number in each category by the Sentencing Council of England and Wales “starting point” guidelines for days of imprisonment (Footnote 5), as follows:
Crime count × guideline days = CHI score
100 bicycle thefts × 2 days = 200
20 burglaries × 19 days = 380
2 murders × 5475 days = 10,950
Total CCHI score in days of recommended custody = 11,530
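As a minimal illustration, the same arithmetic can be written as a short script. The category names and guideline weights below simply restate the hypothetical village example; they are not an official code list.

```python
# Sketch of the CHI arithmetic for the hypothetical village example above.
# Weights are sentencing-guideline "starting points" in days of custody (illustrative only).
GUIDELINE_DAYS = {"bicycle theft": 2, "burglary": 19, "murder": 5475}

# Crimes reported by victims or witnesses in one year.
reported_crimes = {"bicycle theft": 100, "burglary": 20, "murder": 2}

def crime_harm_index(counts, weights):
    """Sum of (count x guideline days) across all crime categories."""
    return sum(n * weights[category] for category, n in counts.items())

print(crime_harm_index(reported_crimes, GUIDELINE_DAYS))  # prints 11530
```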
This method would not replace the crime-category counts; it would supplement and apply those counts in computing CHI scores. The only statistic the CHI would replace is a summary count across all categories: the offence rate per capita or the raw number of offences. (Note: the CHI could also be divided by population or by the number of police officers, for a CHI rate. But the complexity of a CHI rate may be inadvisable in the short run, since the first step in introducing a CHI in place of an offence count is to achieve public understanding.)
It is essential that the CHI not include crimes generated by proactive investigations by police or corporations, for reasons discussed in the “Proactive Policing: Outputs, Not Outcomes” section. Nor should it include crimes that occurred before the time period being measured, even if they were reported in that period (see the “Historic Offences: then, not now” section).
Categorical Offence Counts (a Crime Count Within, but not Across, All Crime Categories)
This recommendation is simple: to continue all of the annual counts of crimes reported in each crime category. These counts are the core data for any Crime Harm Index. Their publication allows any citizen, journalist or police professional to calculate the elements and final result of any CHI by hand from published data. This transparency is a key part of making any CHI legitimate. The only change to current practice would be to remove three groups of crimes from these counts:
Historical crimes from prior years reported in the current year
Crimes detected through proactive policing
Crimes detected through proactive private policing by companies and other organisations
All three groups are better reported in their own separate statistical series.
An Historic Offences Crime Harm Index
This statistical series would include every crime reported to every police force for which the date of occurrence was prior to the year in which the report is made. It would be calculated in the same way as the CHI for current-year crimes. Once each historical crime is counted by a jurisdiction in each category of crime types (of over 700 listed on the Cambridge website, or its equivalent in other countries; Footnote 6), it is given a CHI score, and the number of reported crimes (for 1 year of reporting) in each historic crime category is multiplied by that score (taken from sentencing guidelines). The result is a total HOCHI score for each crime category as newly reported that year.
Once the HOCHI total for each crime category for 1 year (or other time period) is calculated, all of those totals across all of the categories are summed. The sum equals the total HOCHI score for the jurisdiction for that time period. Identical procedures could be taken within different jurisdictions (such as all 43 territorial police forces in England and Wales) or over time by years, quarters or other time periods.
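A sketch of how one reporting year’s records might be split between the current-year CHI and the HOCHI, using the year of occurrence rather than the year of reporting. The record fields and guideline weights here are illustrative assumptions, not official values.

```python
# Route offences reported in 2020 to the current-year CHI or the HOCHI,
# according to the year in which each offence occurred.
GUIDELINE_DAYS = {"burglary": 19, "rape": 1825, "murder": 5475}  # illustrative weights

reports_2020 = [  # hypothetical records: (category, year the offence occurred)
    ("burglary", 2020),
    ("murder", 2020),
    ("rape", 2005),      # historic offence newly reported in 2020
    ("burglary", 2019),  # historic offence newly reported in 2020
]

REPORTING_YEAR = 2020
chi = sum(GUIDELINE_DAYS[cat] for cat, year in reports_2020 if year == REPORTING_YEAR)
hochi = sum(GUIDELINE_DAYS[cat] for cat, year in reports_2020 if year < REPORTING_YEAR)
print(chi, hochi)  # current-year harm vs. newly reported historic harm
```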
In principle, this HOCHI (and its component crime counts) could be used to revise, continually, the previously reported statistics for crimes known to have occurred in those years. This would allow a 10-year crime trend to be reported annually with the more accurate count of crimes in the prior year taken into the calculation of the trends. These adjustments could be made both nationally and locally at any level of geographic breakdowns.
The HOCHI for any current year would be an important recognition of the added workload for police. If, for example, a flood of new reports about historic sex crimes against children caused the HOCHI to equal as much as 15% of the value of the current year’s CHI, that would be an important fact to consider in budget reviews for police forces and in resource allocation within police agencies. Taking that workload into account does not, however, require threatening the reliability of measuring crime trends on the basis of when each crime occurred rather than when it was reported.
A Proactive Policing Index
This statistical series is computed by multiplying the sentencing guideline starting point by the number of offences in each category detected by police through police-initiated activity, rather than in response to a citizen’s report of a crime. While some items of criminal intelligence might make it ambiguous whether a crime report was proactively generated, existing systems for distinguishing intelligence from crime reports would be sufficient in countries like the UK. As the Cambridge CHI provides, entire categories of crime can be excluded from the CHI and aggregated in a Proactive Policing Index based on CHI metrics.
The procedure for calculating a PPI is almost identical to that for a CHI. Once each proactively detected crime category (Footnote 7) in any jurisdiction is given a CHI score, the number of reported crimes (for 1 year) in each crime category is multiplied by that score (taken from sentencing guidelines). The result is a total CHI score for each PPI crime category, now called a PPI score.
Once the PPI total for each crime category for 1 year (or other time period) is calculated, all of those totals across all of the categories are summed. The sum equals the total PPI score for the jurisdiction for that time period. Identical procedures could be taken within different jurisdictions (such as all 43 territorial police forces in England and Wales) or over time by years, quarters or other time periods.
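The only substantive difference from the CHI arithmetic is which offences are included. A sketch, with illustrative categories, weights and source labels, of routing offences to the CHI or the PPI according to how they came to police attention:

```python
# Separate victim-reported offences (CHI) from police-initiated detections (PPI).
GUIDELINE_DAYS = {"burglary": 19, "drug possession": 10, "knife possession": 180}  # illustrative

incidents = [  # hypothetical records: (category, how the offence came to notice)
    ("burglary", "victim report"),
    ("drug possession", "stop and search"),
    ("knife possession", "stop and search"),
    ("burglary", "victim report"),
]

PROACTIVE_SOURCES = {"stop and search", "intelligence-led operation"}
chi = sum(GUIDELINE_DAYS[c] for c, src in incidents if src not in PROACTIVE_SOURCES)
ppi = sum(GUIDELINE_DAYS[c] for c, src in incidents if src in PROACTIVE_SOURCES)
print(chi, ppi)  # harm reported by the public vs. harm detected proactively
```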
Calculating a PPI score would be an essential first step towards constructive public dialogue about how much police should invest in low-harm volume crimes reported by citizens, versus high-harm hidden crimes that only police efforts can detect. In England and Wales, this dialogue has already been launched, although with minimal quantification so far. Her Majesty’s Inspectorate of Constabulary and Fire & Rescue Services (HMICFRS) requires that police force management statements contain a strategic plan for discovering unreported “hidden” demand, including crimes with very high levels of harm under sentencing guidelines. That plan could be a means of setting goals on the basis of the CHI levels detected, including (and primarily) the cases brought to justice.
The PPI would also include identifications of offenders committing such publicly visible offences as drink driving, speeding, running red lights and other dangerous driving. The creation of a separate reporting statistic for such detections as outputs rather than outcomes could even shape police resource allocation, given the high levels of harm associated with accidents due to criminal negligence. If police efforts were clearly separated from the counting of the CHI, there would be even more incentive to generate new crime reports by making arrests proactively in areas like drugs, modern slavery and people trafficking.
A Company-Detected Crime Harm Index
This statistical series is computed by multiplying the sentencing guideline starting point by the number of offences in each category detected by an organization and reported to police, as distinct from offences reported by an individual (not acting on behalf of an organization) as a victim of or witness to a crime. This distinction can be made without too much difficulty by focusing on the name of the complainant. Thus when a shop assistant is threatened at knifepoint and calls police, the shop assistant is the victim, and the crime remains in the CHI list. But when a service station clerk reports that someone has filled a car with petrol and driven off without paying, the oil company, not the clerk, is the victim. These entire categories of crime can be excluded from the CHI, because they do not reflect a level of public safety, and aggregated in a Company-Detected Crime Harm Index based on CHI metrics.
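A sketch of the complainant-based routing described above (the categories, weights and field names are illustrative): offences whose complainant is an individual victim or witness stay in the CHI, while offences whose complainant is an organisation go to the CDCHI.

```python
# Route offences to the CHI or the CDCHI by the type of complainant.
GUIDELINE_DAYS = {"robbery": 365, "making off without payment": 2}  # illustrative weights

reports = [  # hypothetical records: (category, complainant type)
    ("robbery", "individual"),                       # shop assistant threatened at knifepoint
    ("making off without payment", "organisation"),  # oil company reports a petrol drive-off
]

chi = sum(GUIDELINE_DAYS[c] for c, who in reports if who == "individual")
cdchi = sum(GUIDELINE_DAYS[c] for c, who in reports if who == "organisation")
print(chi, cdchi)
```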
A Harm Detection Fraction
This statistical series requires, first, the calculation of a total Crime Harm Index for a jurisdiction, which provides a denominator for the harm of all crimes reported to police by victims and witnesses. The next step is to take the currently reported numbers of crimes in each category that were detected in the same time period, by some definition. Countries vary substantially in their definitions of detections, but the key to reliability is consistency of definition within any one country. Within England and Wales, the concept (and list) of agreed-upon sanctioned detections is well-established, including prosecutions, cautions and other acts that generally create a criminal record for the offender.
The next step is to divide the detections by the number of crimes reported, to compute a detected fraction for each crime category. That fraction is then to be multiplied by the total CHI score for that category, which would show the total CHI score for the detections in that category. The final step is to sum the detected CHI scores across all offence categories, and divide that total by the total CHI score of all reported crime. This can be expressed in both the absolute level of the detected score, as well as in the percentage of all reported CHI that was detected.
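A sketch of the HDF arithmetic described in the steps above, with hypothetical counts and illustrative guideline weights:

```python
# Harm Detection Fraction: the harm-weighted share of reported crime that is detected.
GUIDELINE_DAYS = {"criminal damage": 2, "burglary": 19, "murder": 5475}  # illustrative weights

reported = {"criminal damage": 500, "burglary": 200, "murder": 3}  # crimes reported in the year
detected = {"criminal damage": 25, "burglary": 10, "murder": 2}    # sanctioned detections for them

total_chi = sum(n * GUIDELINE_DAYS[c] for c, n in reported.items())
detected_chi = sum(
    (detected[c] / reported[c]) * reported[c] * GUIDELINE_DAYS[c]  # detected fraction x category CHI
    for c in reported
)
hdf = detected_chi / total_chi
print(f"Detected harm: {detected_chi:.0f} days; HDF: {hdf:.1%}")
```

On these hypothetical figures, the raw detection rate across all crimes would be about 5% (37 of 703 crimes), while the HDF is over 50%, illustrating how differently the two measures can read.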
The time period between reporting and clearance is not a simple matter to address in creating an HDF. Investigations take months and sometimes years to complete. Police practices cannot be driven by statistical reporting requirements. Two simple counting rules will ease the reporting of the HDF. One is to keep a running updated statistic of the number of detections achieved for all the offences reported in each year. The other is never to count as a detection an action taken in 1 year about an offence that occurred in a prior year. Only the use of modern IT systems for tracking running totals of detections for crime and CHI by year of occurrence can save the HDF system from unreliability due to ambiguity.
Detection rates per 100 offences for all crime categories
For reasons discussed in the “Detections by Crime Category: Statistics out of Context” section, it is essential to continue reporting the raw detection rates per 100 offences despite the addition of an HDF. The raw rates are the materials from which the HDF scores must be constructed, in the context of the total CHI scores. They are also a subject of great public interest and should be made as easy to find as possible—like all seven of these statistical series.
Three Ways NOT to Count Crime
This section reiterates points made above, for absolute clarity. The three ways this statement calls on governments not to count crime are totals of reported crimes, total detection rates and average sentences for each offence.
No “Offence Rate”
The first step towards reliable measurement of public safety is to discontinue official computation or reporting of an “offence rate per 1,000 population,” while continuing to report counts for specific offence categories. This step will ensure a public focus on what matters, which is the harm level of crime. By including total offences by crime category, it would still be possible for any reader to create a total. But that computed total would lack the official endorsement of the government for a misleading statistic. In emphasizing the bottom line of a single measure of crime, the government would establish a fair system for letting each police force decide how best to improve public safety—as long as it reduces or controls CHI.
No Total Detection Rate
For the same reason, there should be no endorsement of a total detection rate across all reported crimes. The total detection rate would keep a spotlight on minor crime, and divert visibility from major crime. The Harm Detection Fraction should be given primacy (after CHI itself) in media discussions of crime and policing, including ratings of police forces for their percentage of CHI that is detected (HDF) after adjustment for population and historical crime levels.
No Average Sentences
The severity rating of any crime type should be based solely on the harm of the crime; it should not reflect the prior record of the offenders convicted of that crime type. Actual sentencing data are heavily weighted by the prior record of the offender. The Office of National Statistics should rely on the “starting point” sentence for each crime category recommended by the Sentencing Council for England and Wales.
Challenges of Implementation
This statement was written in England, within the current context of policing in that jurisdiction and its UK-level policymaking. The logic and principles of the statement have a far broader application, at least to any nation with a criminal justice system based on English heritage (House and Neyroud 2018), as well as other modern democracies, such as those in Scandinavia (Andersen and Mueller-Johnson 2018; Karrholm et al. 2020). Yet any country attempting to put the recommendations of this statement into practice must face challenges of implementation.
The first challenge is official recognition. After a decade of discussion, official recognition is developing in at least two countries. Sweden’s police are moving to adoption of their own sentencing-based CHI (Karrholm et al. 2020). Developments in the UK include substantive discussions with the Office of National Statistics, as well as well-received comments by one of the signatories to this statement in the House of Lords (Blair 2020).
In England and Wales, as well as many other countries, a second challenge is the lack of a comprehensive lookup table that provides a “single point of truth” for recommended (or even average) sentences for every crime code (but see the website with a partial lookup table at the link in end note 2 below). The primary cause of this problem is the profusion of new legislation adding new crime types over time. A secondary cause is that the system recommended here has never been officially adopted, with all of the staff support that might create and maintain such a master lookup list.
Other challenges will include decisions about the operational definitions of offence codes that should be assigned to the proactive policing indices and removed from the CHI. These decisions must often confront ambiguity and make choices that are the “least worst” options. The moral ground for such an imperfect process is that the result will be so very much better than the current systems. The perfect can become the enemy of the better, but it should not be allowed to hold back progress.
The very idea of variable harm levels within crime categories is a prime example of a “least worst” choice, one that is embedded in lawmaking itself. In the case of robbery, for example, some jurisdictions treat a threat to use force as equally harmful as shooting a victim and leaving the victim crippled. Even a separate category of “aggravated” offences fails to capture the range of psychological harm, such as stealing a wedding ring from a 90-year-old who had been married for 70 years, versus stealing a ring of higher financial value from a 20-year-old. The goal of drawing fine distinctions in harm within crime categories is worthy. It is not an essential first step, however, towards the urgently needed improvement of the current system of counting crimes.
A final recommendation is that the transformation of crime counting be delegated to an independent body that advises statistical, police and justice agencies. The body should be governed by a panel that includes criminologists, statisticians, law professors, police leaders, policing oversight bodies, crime analysts, psychologists, public opinion experts and others. The people who do the work of recording, classifying, compiling, auditing and reporting crime and police actions should have a central role for providing their expertise, if only to identify all of the issues that must be faced. The delivery of an implementation plan should be tracked and subject to public reporting for at least 5 years, if not continuously. Transparency and complete information should guide all decision-making, as in all processes of implementing major innovations.
Conclusion
This statement is offered for discussion and dialogue with the key institutions of policing and government in modern democracies. The signatories have been struck by how many police leaders from other countries have moved in the direction of a national crime harm index after discussing the idea at Cambridge. Such indices have been published for Denmark (Andersen and Mueller-Johnson 2018), Sweden (Karrholm et al. 2020), Western Australia (House and Neyroud 2018), California (Mitchell 2017), New Zealand and other countries. Canada has used its own actual-sentencing Crime Severity Index since 2008.
None of these countries, however, has developed clear-cut systems for managing proactively detected and historical reports of crime. None has developed a detection rate based on harm (an HDF). None has widely debated how best to count crime in order to reduce it. None has discussed how not to count crime.
It is the aim of this statement to help to start such discussions. Wherever these discussions may lead, the time has clearly arrived to review how democracies should count crime.
Notes
The latest version of this index is openly accessible at https://www.crim.cam.ac.uk/Research/research-tools/cambridge-crime-harm-index/view as of 20 March 2020.
See guidelines by offence category downloaded on 8th March 2020 from https://www.sentencingcouncil.org.uk/the-magistrates-court-sentencing-guidelines/
The sentencing guidelines used to provide the CCHI scores were found and computed as follows. Murder: the starting point for sentencing is 5475 days (15 years); see https://www.cps.gov.uk/legal-guidance/homicide-murder-and-manslaughter. Burglary: the basic burglary guideline is a high-level community order, for which the minimum unpaid work requirement is 150 hours; using a standard metric of 8 hours of unpaid work per day, the individual would need to work 18.75 days in total to complete the requirement, which gives burglary a CCHI 2020 score of 18.75. See https://www.sentencingcouncil.org.uk/offences/magistrates-court/item/domestic-burglary/. Bike theft: the sentencing guideline for bike theft is a Band B fine, a £120 minimum, equal to approximately 2 days of work at the minimum wage to pay in full, giving that offence a CCHI 2020 score of 2. See https://www.sentencingcouncil.org.uk/offences/magistrates-court/item/theft-general/.
References
Andersen, H. A., & Mueller-Johnson, K. (2018). The Danish Crime Harm Index: how it works and why it matters. Cambridge Journal of Evidence-Based Policing, 2(1–2), 52–69.
Blair I. (2020). Comments in House of Lords Short Debate on “Offender Management: Checkpoint Programme” 27 February 2020 at Hansard (https://hansard.parliament.uk/).
House, P. D., & Neyroud, P. W. (2018). Developing a Crime Harm Index for Western Australia: the WACHI. Cambridge Journal of Evidence-Based Policing, 2(1–2), 70–94.
Ignatans, D., & Pease, K. (2015). Taking crime seriously: playing the weighting game. Policing, 10(3), 184–193.
Karrholm, F., Neyroud, P. W., & Smaaland, J. (2020). Designing the Swedish Crime Harm Index: an evidence-based strategy. Cambridge Journal of Evidence-Based Policing, 4(1).
Ministry of Justice. (2010). Sentencing statistics: England and Wales 2009 statistics bulletin. London: National Statistics https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/218034/sentencing-stats2009.pdf.
Mitchell, R. J. (2017). The usefulness of a crime harm index: analyzing the Sacramento hot spot experiment using the California Crime Harm Index (CA-CHI). Journal of Experimental Criminology, 15(1), 103–113.
Office of National Statistics. (2019). Homicide in England and Wales: year ending March 2019. https://www.ons.gov.uk/peoplepopulationandcommunity/crimeandjustice/articles/homicideinenglandandwales/yearendingmarch2019
Sherman, L. W. (2007). The power few: experimental criminology and the reduction of harm. Journal of Experimental Criminology, 3(4), 299–321.
Sherman, L. W. (2011). Al Capone, the sword of Damocles, and the police-corrections budget ratio. Criminology and Public Policy, 10, 195–206.
Sherman, L. W. (2013). The rise of evidence-based policing: targeting, testing and tracking. Crime and Justice, 42, 377–451.
Sherman, L. W., Neyroud, P., & Neyroud, E. (2016). The Cambridge Crime Harm Index: measuring total harm from crime based on sentencing guidelines. Policing, 10(3), 171–183.
Acknowledgements
The author gives special thanks to his colleagues Peter and Eleanor Neyroud for their support in developing the CCHI in general, and this statement in particular. The statement was improved by comments from the critical readings of an earlier draft by Matthew Bland and Crispian Strachan. All signatories are warmly thanked for lending their endorsement to this particular version of a vision that is often discussed in the master’s degree courses we teach together. The author also thanks the many police leaders who have discussed these issues in Cambridge classes since 2007, especially the many who have applied the CCHI in their own research and operational practice. The signatories to this statement are a subset of the academic staff, occasional lecturers and consultants of the Cambridge Police Executive Programme or the Cambridge Centre for Evidence-Based Policing.