Introduction

The general public appears to be dissatisfied with the criminal justice system. Survey research repeatedly shows that the public thinks judges are too lenient. In its recurring survey on social and cultural issues in the Netherlands, the Dutch Social and Cultural Planning Bureau (SCP) routinely uses the statement: “Crimes are punished too leniently in the Netherlands”. In 2002 no fewer than 91% of a sample of the Dutch general public agreed (Maas 2002, p. 657). In 2003 we used this same statement with a random sample of the Dutch population and again found that the vast majority, 85%, agreed (Elffers and de Keijser 2004).

In this respect, the Dutch did not respond much differently from people in other Western countries (cf. Barber and Doob 2004; Hough and Roberts 1998; Hutton 2005; Mattinson and Mirrlees-Black 2000; Roberts and Stalans 1997; Roberts et al. 2003; Stalans 2002). Moreover, such findings have been consistent over time for at least the past two decades (Kury and Ferdinand 1999). It is exactly these types of survey findings that concern politicians and professionals working in the criminal justice system. Apparently, there is a gap between what ordinary people want and what the justice system delivers. This gap seems to centre primarily on desired levels of punishment: a punitiveness gap.

A number of underlying reasons and explanations have been put forward for the existence of this gap. In this article we report on a field experiment concerning one particular factor that might help to reduce the so-called punitiveness gap between the general public and the judiciary. The study scrutinized whether more and better balanced media information on the operation of the criminal justice system might help to bridge the gap. In 2004 a Dutch regional newspaper took the initiative of asking panels of subscribers to attend court cases and, with the assistance of an experienced court journalist, to report on their findings in the newspaper. These panels of lay persons have been called, perhaps somewhat misleadingly, ‘newspaper juries’. After securing the kind cooperation of this paper and of another regional newspaper that served as a control, we surveyed large samples of subscribers to these newspapers on their attitudes towards crime and the criminal justice system at the start of the project. After a year of reporting by the newspaper juries, the same samples were surveyed again. Would the specific and presumably more balanced reports of the newspaper juries, describing first-hand the lay person’s perceptions of what happens in court, have a unique effect on the attitudes of the general reader population of this local newspaper? While the study employed a quasi-experimental design, the nature of this field experiment was such that the treatment (i.e. the actual content of newspaper reporting by the juries) was not under our control.

Attitudes about crime and punishment

People’s attitudes are based on their perceptions of the criminal justice system and of the way it operates. People may thus believe that sentences are too lenient, irrespective of factual levels of sentencing. As a result, their confidence in the justice system may diminish (Roberts and Hough 2005). More than anything else, perceptions are based on the availability and quality of information.

A gap between the judiciary and the general public is a cause of concern. After all, a call for more severe sentences is closely related to a lack of trust in the courts (Hough and Roberts 1999), which may undermine the legitimacy of the courts and the criminal justice system as a whole (van Koppen 2003). In reaction to public concerns as perceived or measured in survey research, policy makers and the courts may turn to more severe sentences. This mechanism has been described in the literature as penal populism (Roberts et al. 2003) or populist punitiveness (Bottoms 1995). During the past two decades Dutch courts have indeed rendered ever more severe sentences. More defendants are sentenced to a prison term (van Tulder 2005). Moreover, the Dutch imprisonment rate increased from 33 per 100,000 inhabitants in 1985 to 123 per 100,000 in 2004 (footnote 1).

However, a substantial body of literature has accumulated showing how a variety of factors influence public attitudes towards the criminal justice system in general and desired levels of punishment in particular. Public punitiveness thus needs to be put in perspective. Research has focused on ways to measure popular opinion and the differential validity thereof, and on the stability and malleability of public attitudes towards criminal justice-related issues (for comprehensive reviews, see Cullen et al. 2000; Roberts and Hough 2005; Roberts et al. 2003). This has direct implications for the interpretation of the nature and depth of any gap between the general public and the judiciary, as well as for (formal) ways of dealing with such a gap.

First of all, a critical view is needed on whether a punitiveness gap can indeed be concluded from traditional survey data. In survey research, popular opinion on punishment is usually assessed using a single statement, such as: “In general, sentences for crimes in the Netherlands are too lenient” (Sociaal en Cultureel Planbureau 2005; Elffers and de Keijser 2004). It has been argued that the overwhelmingly punitive public opinion that consistently (both across time and across jurisdictions) results from this type of questioning is largely an artefact of the methodology applied (Hutton 2005; Hough and Roberts 1999). Roberts et al. (2003) explain that, in replying to such general statements, people tend to have the worst kinds of cases in mind, such as murder and rape. They do so because, instead of carefully and systematically processing and evaluating all necessary information before replying to a sweeping survey question, people tend to use shortcuts. One such shortcut is the availability heuristic: general survey questions tap into superficial attitudes primarily based on biased, stereotypical and readily available media reporting on crime (Stalans 1993, 2002; cf. also Goldstein and Gigerenzer 1999; Tversky and Kahneman 1982). Indeed, most people in daily life do not receive balanced information about sentencing in the media, “but rather a steady stream of stories about sentencing malpractices, cases in which a judge imposes what appears to be a very lenient sentence for a serious crime of violence” (Hough and Roberts 1999, p. 23; see also Sprott and Doob 1997). However, even if punitiveness is measured by multiple questions focusing on less general and more elaborate case descriptions, respondents still tend to react with the most wicked version of the offender in mind (Hutton 2005; van Koppen et al. 2002). Thus, the general public may have several reasons for being so punitive that would not apply to judges deciding in specific cases. One may simply think that judges should punish more severely in the most extreme cases (Hough and Roberts 1999; Roberts 1992).

Another perspective on public punitiveness is its linkage with a lack of factual knowledge (cf. Doob and Roberts 1988; Mattinson and Mirrlees-Black 2000). The relation between factual knowledge of the functioning of the criminal justice system and people’s attitudes towards criminal justice is well documented: the less knowledge, the more critical a person is of the criminal justice system (Roberts and Hough 2005). Two related types of research illustrate this linkage. One is (quasi-) experimental research, in which a sample from the general public is questioned on knowledge and attitudes toward crime and punishment, subsequently given factual and nuanced information, and then questioned again. With a sample of the British public, researchers from the Home Office did just that (Chapman et al. 2002; Mirrlees-Black 2002). Using different methods of conveying information (i.e. a video, a booklet, presentations) they showed that providing people with factual information really does improve knowledge. The researchers were, however, more cautious in formulating their conclusions on the effect of improved information on shifting attitudes to the criminal justice system. While the different formats for providing information to respondents all had some influence on attitudes, the evidence for a direct relationship between improved knowledge and changing attitudes was less clear. Chapman et al. (2002) noted that “The sheer act of engaging people in this type of exercise appears to be sufficient to bring about an improvement in attitudes” (p. 50).

The second type of study on the link between information and attitudes is a much more elaborate version of the first. It involves the methodology of deliberative polling (cf. Fishkin 1995). Such a deliberative poll is tailored to measure what the general public would think if they had had the time and the opportunity to gather all the relevant facts, talk to experts and carefully weigh every bit of information. As such, it is designed to assess the opinions of “a hypothetical public, one much more engaged with and better informed” (Luskin et al. 2002, p. 458) than ordinary citizens are (see also Green 2006). In a British deliberative poll in 1994 a random sample was questioned about their views on crime and punishment. A subsample of the initial sample was subsequently given briefing materials containing factual information and was then subjected to a whole weekend of presentations by experts (including practitioners, ex-prisoners, and politicians), questioning sessions and group discussions. After that weekend, participants were again questioned on their views, and this questioning was repeated 10 months later. It was shown that the deliberative poll indeed led to an enduring shift in attitudes in the expected direction (Hough and Park 2002).

The evidence of the link between information and attitudes is extremely important, because the general public indeed appears to be poorly informed and to have distorted images of the justice process (cf. Cullen et al. 2000). For instance, as in other jurisdictions, many Dutch people still believe that crime levels are rising, while, in fact, overall, they are not (Wittebrood 2003, pp. 200 ff.). Many people also think that judges have not been punishing more severely in recent years, while, in fact, they have become more punitive (van Tulder 2005, p. 5). And many people continue to be convinced that the level of crime is effectively influenced by the level of punishment, while, in fact, this is questioned by a large body of empirical research (cf. Pratt et al. 2006; von Hirsch et al. 1999; Zimring 1997). And again, the media are singled out as one of the main sources of such restricted and biased information (cf. Garland 2000; Roberts et al. 2003).

In relation to the previous remarks, it should be stressed that public punitiveness has been shown to be affected by fear of crime (Indermaur and Hough 2002; Sprott and Doob 1997). Calls for more severe sentences are consistently correlated with negative opinions about judges and the police, but especially with the impression that crime is rising and the extent to which one is worried about that (Hough et al. 1988; Rossi et al. 1985; Sprott and Doob 1997). Recently, Hutton (2005) described this in terms of a narrative of insecurity, in which punitiveness is one aspect of a more complex but coherent set of attitudes on crime and punishment. Restricted and biased information feeds fear of crime, which, in turn, fuels punitiveness. As such, the relationship between media consumption, fear of crime and punitiveness is evident (e.g. Callanan 2005).

It may very well be that, as Hutton (2005) argued, an abstract punitiveness exists alongside more subtle opinions on justice in individual criminal cases. There may be a large difference between people’s top-of-the-head opinions and more fully informed opinions on specific cases (Hough and Park 2002). People may adapt their attitudes to more (and better) information, and confrontation with specific cases may tap into attitudes other than those addressed by general survey questions (Cullen et al. 2000). If respondents are questioned with more refined methods, a more nuanced and less punitive view of punishment does seem to emerge (Beyens 2000; Cullen et al. 2000; van der Laan 1993; Dümig and van Dijk 1975; van Kesteren et al. 2000; St. Amand and Zamble 2001; Walker and Hough 1988). This appears to happen especially when respondents are confronted with more and better information on specific cases. The more information is given, the less punitive respondents are (van der Laan 1993; Kuhn 2002).

In all the above, the role of the news media was a recurring topic. Without pretending to resolve each and every issue with the measurement of public attitudes, we believe it is safe to expect a clear and positive relationship between improved media reporting and public attitudes towards crime and justice. If the public receives more, and more balanced, information on the operations of the criminal justice system in general and on specific criminal cases and the way in which the courts deal with them, it will be more positive about courts and judges, and less punitive, than without such information.

The newspaper jury of the Brabant Daily

A decade ago the Northern Daily (Nieuwsblad van het Noorden), based in a northern province of the Netherlands, introduced a “newspaper jury”. Volunteer subscribers to the newspaper attended court sessions and were interviewed for the newspaper on their opinions of the case at hand and of the court’s decisions. While no systematic analysis of these particular newspaper reports has been carried out, a quick perusal shows that they are at least more balanced and less single-mindedly repressive in orientation than other types of newspaper reports on crime and punishment generally tend to be.

In November 2004 the editors of the Brabant Daily (Brabants Dagblad), based in the southern province of Brabant, decided to start a newspaper jury as well. From all persons who responded to a published call for participation, 30 subscribers were selected by the editors. On background characteristics (e.g. age, gender, education) these 30 readers represented as wide a range as possible of the general readership of the Brabant Daily. After recruitment, these newspaper jurors received a short introduction to Dutch criminal law and criminal procedure. Accompanied by one or two reporters from the newspaper, groups of five jurors visited court trials and were interviewed immediately afterwards. When the decision of the court was rendered 2 weeks later, they were interviewed again, giving their opinion on the court’s decisions. Over the course of 1 year, this resulted in a total of 20 newspaper articles based on ten criminal cases: for each case, one article immediately after the trial and one after the verdict had been passed. While the types of crimes that were covered varied, within their own categories they could all be considered quite serious. They included homicide, assault, rape, armed robbery, human trafficking and drug dealing. A typical article was about 800 words in length, starting with a summary of the case (indictment and criminal history of the accused), followed by the impressions of the jurors and ending with a concise recapitulation of the relevant facts. Appendix A shows one of these newspaper articles concerning a court trial.

Method

Design

The Brabant Daily responded positively to our request to accompany their newspaper jury scheme with a field experiment, starting just before the first reports were published in the paper and ending after a year of such reporting. Our study thus exploited the juries’ reporting in the Brabant Daily as a quasi-experimental treatment.

Our field experiment used a quasi-experimental repeated-measures design. The treatment group was a random sample of 2,000 people from the general readership (subscribers) of the Brabant Daily (i.e. the newspaper with the jury reports). A random sample of 2,000 from the combined readership of the Limburgs Dagblad and De Limburger served as the control group (no newspaper jury). These are regional newspapers from the province of Limburg, adjacent to Brabant. The two newspapers in the control group have the same owner and, except for some very local subjects, the same content. We therefore treat them as one in the remainder of this article, under the name Limburg Daily. The readerships of the experimental and control newspapers are mutually exclusive. While in a true experiment one would assign subjects randomly to treatment and control conditions, this was not possible in the current field experiment. Rather, our subjects were sampled from the existing databases of subscribers to the respective newspapers. Consequently, we have to investigate whether structural differences between treatment and control groups were present (see below).

In November 2004 (T0) both treatment group and control group were simultaneously approached with a mailed questionnaire. This was at the start of the Brabant Daily jury project. One year later (T1), the same two samples were surveyed again using the same questionnaire. During the course of this year, a total of 20 articles by the newspaper juries had appeared in the Brabant Daily.

Treatment

The treatment in this field experiment was not designed by or under the control of the experimenters. Rather, the treatment was the result of what the newspaper jurors, aided by the court journalist of the Brabant Daily, commented on after having witnessed a case being tried in court. As a result, our experimental group did receive more detailed information on the courts and their decisions than the Limburg sample (control group). However, the precise content was beyond the experimenters’ control. Nevertheless, from earlier experiences with newspaper juries at the Northern Daily, it seemed safe to expect that such articles would, in general, portray the judicial process rather more positively than ordinary newspaper coverage of crime does. Given the nature of the treatment in this field experiment, we check below whether the articles of the newspaper juries indeed delivered the treatment that we expected, i.e. reporting that was more nuanced and balanced and showed more understanding of the courts’ decisions.

It should be noted that, as a further unavoidable restriction on a field experiment such as ours, the treatment cannot be considered a very strong one: over the course of a year, 20 articles based on ten criminal cases were published. Irrespective of the exact content of the articles, this pool of 20 special newspaper articles is only a small quantity amidst the vast and steady flow of other crime- and justice-related information that is commonly available to the general public. On the other hand, the newspaper jury project was clearly announced in the Brabant Daily, and the newspaper made contributions in the series easily recognisable as such. Nevertheless, it could be expected that between-person variation in measured attitudes towards criminal justice would be rather large in comparison with the expected effect of the experimental treatment. Given the repeated-measures design of the experiment, a within-subjects test of our hypothesis is therefore highly preferable.

As all cases on which the newspaper jury commented were crimes from within the local jurisdiction, none of these cases was reported in the regional newspapers in the adjacent province of Limburg (i.e. our controls). Though we did not keep tabs on all criminal court news during the year in both provinces, we know that there was no particularly salient court case attracting attention in only one of the provinces in our study.

Response

At T0 (November 2004) we sent out a four-page questionnaire to the samples of subscribers in the treatment group and in the control group. Completed questionnaires were returned to us in a business-reply envelope. We sent 2,000 questionnaires to the Brabant subscribers, of which 674 were returned (34%), and 2,000 to the Limburg control subscribers, of which 712 were returned (36%).

Owing to privacy concerns, we were not allowed to note respondents’ names. This made it impossible, after a year had passed at T1, to approach only those who had responded at T0. Therefore, at T1, the same questionnaire was sent to exactly the same original samples of T0. Because of minor administrative problems, 1,985 (instead of the original 2,000) of the Brabant subscribers were contacted again. Of these questionnaires 589 (30%) were returned. Of the control respondents from Limburg, 653 (33%) returned the questionnaire at T1.

As indicated above, we intend to use only the repeated measures (at T0 and T1) of respondents who participated at both times. As all returned questionnaires were completed anonymously, we identified those respondents at T1 who had also participated at T0 using the biographical data in the questionnaires from both samples. Identifying variables were: month of birth, year of birth, gender and completed education. In cases of doubt, i.e. when we were not able to identify a respondent at T1 as being uniquely the same person as at T0, the case was excluded. We thus succeeded in uniquely identifying 224 respondents from Brabant at both times of measurement (33% of the response at T0) and 285 control respondents from Limburg at both times (40% of the response at T0). The analyses reported in the remainder of this article are based exclusively on those respondents who were uniquely identified at both times of measurement. Analyses on all individuals, including those who could not be uniquely identified at both T0 and T1, were carried out as well; their results were comparable to those reported below.
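As an illustration of this matching procedure (not the authors’ actual data handling), a minimal sketch in Python is given below. It assumes the anonymised responses of each wave are stored in hypothetical CSV files with the four identifying variables as columns, and keeps only respondents whose combination of birth month, birth year, gender and education occurs exactly once in each wave.

```python
import pandas as pd

# Hypothetical file and column names; the actual data layout may have differed.
KEYS = ["birth_month", "birth_year", "gender", "education"]

t0 = pd.read_csv("brabant_t0.csv")   # responses at T0
t1 = pd.read_csv("brabant_t1.csv")   # responses at T1

def unique_on_keys(df):
    """Keep only rows whose key combination occurs exactly once (no doubt about identity)."""
    counts = df.groupby(KEYS)[KEYS[0]].transform("size")
    return df[counts == 1]

t0_unique, t1_unique = unique_on_keys(t0), unique_on_keys(t1)

# Inner join on the biographical keys: respondents identified at both waves.
panel = t0_unique.merge(t1_unique, on=KEYS, suffixes=("_t0", "_t1"))
print(f"Uniquely identified respondents: {len(panel)}")
```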

Questionnaire, dependent variables

Apart from a concise section requesting biographical data (age, gender, highest completed level of education, and number of people in the household), the questionnaire consisted of three general sections measuring respondents’ attitudes towards and perceptions of crime and justice. One section dealt with attitudes towards goals of punishment. It consisted of a 21-item questionnaire drawn from earlier work by de Keijser and co-workers (de Keijser 2000; de Keijser et al. 2002; Hessing et al. 2003). The answers were given on five-point Likert scales. This measurement instrument was designed to summarize people’s positions regarding various preferred goals of punishment in two dominant dimensions: (1) harsh treatment (just deserts, incapacitation, deterrence) and (2) social constructive intervention (rehabilitation, restorative justice). Item analyses showed these two summarizing scales to be quite reliable, with values of Cronbach’s alpha of 0.87 and 0.68, respectively (cf. Table 2, nos. 1, 2).
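For readers unfamiliar with this reliability coefficient, Cronbach’s alpha for a scale of k items is the standard internal-consistency measure

```latex
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)
```

where σ²_{Y_i} is the variance of item i and σ²_X is the variance of the respondents’ total scale scores.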

The second part addressed people’s perceptions of judges’ responsiveness to society: whether judges are out of touch with the public (cf. Walker et al. 1988; Hough and Roberts 1998; Mattinson and Mirrlees-Black 2000). Does the respondent favour an independent judge, who focuses on the case at hand, or rather a responsive judge, one who is prepared to tailor the verdict to public concerns (Elffers and de Keijser 2004)? Four descriptive statements were included (Table 2, nos. 3 to 6), as well as five normative statements (Table 2, nos. 7 to 11).

The third section was a collection of questions and statements, most of which may be considered typical of general public opinion research on people’s attitudes to crime- and justice-related topics. This section included the typical statement that “Crimes are punished too leniently in the Netherlands” (Table 2, no. 12) (e.g. Sociaal en Cultureel Planbureau 2005). Regarding the preferred sentencing climate, we also asked respondents whether they would be more lenient or more severe than a real judge when given the opportunity to sit in the judge’s chair for a while (Table 2, no. 13). This section further contained questions on the preferred ratio between the number of innocent people convicted and guilty people acquitted (Table 2, nos. 14, 15) (cf. van Koppen 2003). The extent to which people worry about crime was also measured (Table 2, no. 17), as this has been argued to be one of the causes of a preference for harsh sentencing (cf. Hessing et al. 2003). Finally, we requested respondents to grade the performance of the police, the public prosecution and the criminal court judges on a scale from 1 to 10 (Table 2, nos. 18, 19, 20).

In the questionnaire at T1 we asked respondents from the Brabant Daily, in addition, how often they had read the reports of the newspaper jury.

Results

The contents of the newspaper items

As discussed earlier, we had no control over the content of the newspaper articles in this field experiment. We therefore rated the articles on a number of dimensions. For the ten criminal cases concerned, the articles on the court proceedings and on the verdicts were taken together for this brief analysis. Table 1 shows that only a minority of the articles contained predominantly negative evaluations of the judiciary. As expected, the general impression conveyed is rather positive. Judges and their verdicts in particular received positive evaluations in the newspaper reports. The expectation that, in general, the newspaper jury scheme would result in more positive information about what happens in court is thus corroborated.

Table 1 Evaluative contents of the newspaper articles

It should, however, be noted that the articles also convey a secondary message: that of the awfulness of the crimes being tried. The articles express a clear disapproval of the accused, which is reinforced by the factual, but nevertheless often gruesome, descriptions of the cases (among which were rape, manslaughter, murder and human trafficking). Thus, although the newspaper articles are indeed positive about the judiciary, a strong disapproval of the crimes committed and of the respective offenders cannot be missed.

Necessary pre-checks: equivalence of groups and identification procedure

In this quasi-experimental design there was no random assignment to either the treatment group or the control group. Rather, the respective samples were drawn from the newspapers’ databases of subscribers. We therefore compared both samples at T0 for any differences on attitudinal variables and biographical data, using Student’s t-tests or χ² tests where appropriate (α = 0.01, two-sided; see footnote 2) on the 20 variables of Table 2 and on the demographic data. Table 2 shows that, at T0, we found no significant differences in attitudinal variables between respondents from Brabant (treatment) and from Limburg (control). Additionally, no significant differences were found in biographical data (not in the table).
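Purely as an illustration of these equivalence checks (not the authors’ actual analysis code), a minimal Python sketch follows. It assumes a hypothetical data frame with one row per respondent at T0, a group column (“B” for Brabant/treatment, “L” for Limburg/control) and the attitudinal and demographic variables as further columns.

```python
import pandas as pd
from scipy import stats

ALPHA = 0.01  # two-sided significance level used in the article

def equivalence_checks(df, continuous_vars, categorical_vars):
    """Compare treatment ('B') and control ('L') samples at T0."""
    results = {}
    for var in continuous_vars:
        b = df.loc[df["group"] == "B", var].dropna()
        l = df.loc[df["group"] == "L", var].dropna()
        res = stats.ttest_ind(b, l)                 # Student's t-test (equal variances)
        results[var] = ("t", res.statistic, res.pvalue, res.pvalue < ALPHA)
    for var in categorical_vars:
        table = pd.crosstab(df["group"], df[var])   # group-by-category contingency table
        res = stats.chi2_contingency(table)         # chi-squared test of independence
        results[var] = ("chi2", res.statistic, res.pvalue, res.pvalue < ALPHA)
    return results
```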

Likewise, in order to investigate whether our identification and selection of the same persons at both T0 and T1 had distorted our findings, we checked whether response patterns from uniquely identifiable individuals at both times of measurement were significantly different from those of respondents for whom we had only data at one point in time, either T0 or T1, for the experimental and control groups, separately. Again, no significant differences were observed.

At T0: attitudes of respondents

Table 2 gives an overview of all attitude indices used and their means and standard deviations for both the treatment and the control group at T0. In line with previous survey research, members of both the experimental and the control group were rather critical of judges (half of them rated judges as “living in an ivory tower”) and endorsed at T0 the idea that, in general, judges are too lenient (over 85%). A majority (over 75%) believed that they themselves would impose more severe punishment if they were a judge. Over 90% were worried about crime. We asked respondents to grade the police, the prosecution and the Dutch judges. The prosecution was graded lowest (overall mean = 5.8 on a scale from 1 to 10), the judges highest (overall mean = 6.3). There were no significant differences between the experimental and the control group. These findings are very much in line with how the Dutch population in general scores on such items.

Changes between T0 and T1

The reports of the newspaper juries in the Brabant Daily were read by many. At T1 about 50% of the Brabant respondents indicated that they had read at least half of the 20 reports by the newspaper jury over the past year. Only 10% had never read any such report in the Brabant Daily.

Comparing our indices between T0 and T1 for the experimental group, only four significant differences (paired-sample t-tests, α = 0.01, two-sided) can be observed in Table 2. It is especially remarkable that none of the items on responsiveness (Table 2, nos. 3 to 11) shows any change. Overall, within the experimental group, comparing T1 with T0, we found that only the following items showed a statistically significant change:

  • endorsement of the harsh treatment approach to punishment diminishes

  • at T1, people more strongly believe that, regardless of judges’ sentencing severity, the public will never be satisfied

  • at T1 fewer respondents believe that they themselves would be more severe than judges

  • people have a more positive opinion at T1 of the police

We must immediately add, however, that these changes are not substantial [i.e. never more than 0.25 of a standard deviation (SD)].

Within the Limburg Daily control group only two significant differences were found. However, the main test in a pre-test/post-test control group design is whether differences in the experimental group are, on average, greater than differences in the control group. The result of a Student’s t-test on these ‘differences of differences’ is reported in the final column of Table 2 (α = 0.01, two-sided) and shows that no treatment effect could be demonstrated, except for item 16. In the experimental group, the newspaper jury articles had slightly increased the understanding that the toughness of judges can never match the demands of the public, whereas no such change occurred in the control group. As such, this is an interesting finding, as it illustrates that balanced information can increase understanding of the task of the judge without changing the public’s preference for tough sentences.
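A minimal sketch of the two tests involved (a paired t-test on the within-group change and an independent-samples t-test on the ‘differences of differences’) is given below, purely for illustration and not as the authors’ code; each argument is assumed to be an array of scores for the uniquely identified respondents, ordered so that the T0 and T1 entries refer to the same person.

```python
import numpy as np
from scipy import stats

ALPHA = 0.01  # two-sided

def change_tests(brabant_t0, brabant_t1, limburg_t0, limburg_t1):
    """Within-group change and 'difference of differences' for one attitude index."""
    b0, b1 = np.asarray(brabant_t0), np.asarray(brabant_t1)
    l0, l1 = np.asarray(limburg_t0), np.asarray(limburg_t1)
    # Paired t-test on the change within the treatment group (T1 vs T0).
    within = stats.ttest_rel(b1, b0)
    # Independent-samples t-test comparing the T1-T0 change scores of both groups.
    between = stats.ttest_ind(b1 - b0, l1 - l0)
    return {
        "within_treatment": (within.statistic, within.pvalue, within.pvalue < ALPHA),
        "diff_of_diff": (between.statistic, between.pvalue, between.pvalue < ALPHA),
    }
```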

Table 2 Differences between Brabant (treatment) and Limburg (control) at T0; differences between T0 and T1 in Brabant and in Limburg; and the experimental test of differences in the final column (only uniquely identified respondents with measurements at both T0 and T1). Values in bold type indicate statistically significant differences in t-tests at α = 0.01, two-sided (cf. footnote 2). B Brabant (experimental group), L Limburg (control group); Δ(X,Y) is the score at X minus the score at Y

Our main conclusion, then, is simple and straightforward. The introduction of newspaper juries and their reporting in a local newspaper on their findings in ten cases, amounting to 20 newspaper items published in 1 year, hardly had any substantial influence on the newspaper’s subscribers’ attitudes to crime, criminal justice and punishment.

This lack of substantial change may partly be due to the fact that we compared measurements before and after the newspaper jury reporting scheme irrespective of whether the respondents had actually read the articles. We checked this by distinguishing between four groups of readers of the Brabant Daily: those who stated at T1 that they had not read any newspaper jury article (10%, n = 20), those who had read one to five articles (30%, n = 61), those who had read six to fourteen articles (37%, n = 76), and those who had read fifteen or more (23%, n = 48). We analysed the difference scores between T1 and T0 on the variables from Table 2 for the Brabant respondents, using intensity of having read the jury articles as the between-subjects factor. The 20 analysis of variance (ANOVA) tests performed resulted in only one significant difference. This concerned item 4 in Table 2 (“Dutch judges are very well aware of what is happening in society”; F(3, 187) = 2.80; P = 0.04). Readers who regularly read the relevant newspaper items were slightly less convinced at T1 that judges are aware of what is going on in society. The effect was, however, not very substantial (less than 0.5 SD). We concluded that differences in readership intensity did not confound the main results as reported.
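The readership-intensity check just described amounts to a one-way ANOVA per attitude index. A minimal sketch, assuming hypothetical lists of per-respondent T1-T0 difference scores and the number of jury articles each respondent reported having read, could look as follows; it is an illustration only, not the authors’ analysis code.

```python
from scipy import stats

def readership_anova(diff_scores, articles_read):
    """One-way ANOVA of T1-T0 difference scores with readership intensity as factor."""
    # The four readership groups used in the article: 0, 1-5, 6-14, 15+ articles read.
    bins = [(0, 0), (1, 5), (6, 14), (15, 20)]
    groups = [
        [d for d, n in zip(diff_scores, articles_read) if lo <= n <= hi]
        for lo, hi in bins
    ]
    res = stats.f_oneway(*groups)
    return res.statistic, res.pvalue
```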

Discussion

The expectation that was central to our study was that if the public receives better factual information on the operations of the criminal justice system in general and on specific criminal cases and the way in which the courts deal with them, it will become more positive about courts and judges and less punitive than it would be without such information. The Brabant Daily newspaper jury project offered an opportunity for a quasi-experimental test of this hypothesis.

In the introductory sections of this article we discussed the fact that, in many Western jurisdictions, survey studies have shown the general public not to be very satisfied with the criminal justice system; neither does the general public appear to have much confidence in the justice system. It was also argued that a lack of public satisfaction and confidence may have much to do with a lack of knowledge of the facts. After all, people’s perceptions affect their attitudes, and perceptions are largely based on quality and availability of information. Indeed, studies have shown that levels of public knowledge on the criminal justice system and its operations are, in general, rather poor (e.g. Mattinson and Mirrlees-Black 2000; van Koppen 2003). Moreover, it has been demonstrated that the less factual knowledge a person has, the more likely it is that that person will be more critical of the criminal justice system (Roberts and Hough 2005).

For the general public, an important, and often exclusive, source of information about the criminal justice system is the media. However, while media coverage of crime and sentencing is abundant in Western societies, one may take issue with the nature and quality of a great deal of such coverage. The media do not provide most people in daily life with balanced information about the criminal justice system (e.g. Hough and Roberts 1999).

The initiative by the Brabant Daily in 2004 to ask panels of subscribers to attend court cases and, with the assistance of an experienced court journalist, to report on their findings in the newspaper provided us with a unique opportunity. It provided the setting for a field experiment using a quasi-experimental design. The resulting, and presumably more balanced and higher quality, media coverage in the Brabant Daily constituted the quasi-experimental stimulus. In contrast to many earlier studies on the subject, this was a highly natural setting for respondents, i.e. subscribers to the Brabant Daily reading their newspaper as they always do. In the Home Office study (Chapman et al. 2002) described above, the evidence for a link between better, more balanced information on the one hand and changing attitudes towards the criminal justice system on the other was not very direct. In fact, the researchers concluded that merely the act of participating in the study may have been enough to bring about the attitude changes. Such participation effects are much less likely to occur in a field experiment such as ours.

The results of the quasi-experiment did not confirm the hypothesis. Almost no differences were observed between the experimental and control groups and, in fact, no substantial attitude changes could be observed even within the experimental group of Brabant Daily subscribers. The small changes that we did find in the attitudes of the readers of the Brabant Daily after a year of newspaper juries were negligible. The short conclusion, therefore, must be: media reports based on newspaper juries such as these do not have an effect on attitudes about crime, punishment and the judiciary.

How should we interpret this result? As discussed, the theory suggested that “more and balanced information” would shift readers’ attitudes towards a more positive evaluation of the criminal justice system. Notice that this phrase presupposes that information on what really goes on in a court room would be more balanced than the usual newspaper coverage of criminal proceedings. Is this true after all? The judiciary certainly tends to cherish that idea; judges have a strongly negative evaluation of the average newspaper coverage and a generally positive evaluation of their own and their colleagues’ behaviour in the court room (de Keijser et al. 2004), and are therefore likely to believe that any reports other than the usual cannot but be more balanced, i.e. positive. Their opinion is indeed corroborated in our material. Our reading of the newspaper jury reports shows that, in general, the professional trial participants are evaluated quite positively, especially the judges. However, there is another side to the coverage. Many of the newspaper jurors were impressed by the atrocity of the crimes that they had encountered in court, and the resulting newspaper coverage in the Brabant Daily gave ample expression to that. Negative opinions about the crimes and the perpetrators abound. As a consequence, any positive and balanced treatment of court proceedings, and the appreciation of how judges do their job, may well have been counterbalanced by a reinforcement of negative attitudes towards crime. Moreover, we suggest that this is inevitable: most truly balanced reporting on court trials will incorporate this double message of strong disapprobation of the crimes as well as appreciation of the courts’ professional handling of cases. As such, the idea that positive evaluations are counterbalanced by negative evaluations casts doubt on the validity of the theoretical expectation that more, and more balanced, information will change public attitudes in a more favourable direction.

Apart from that, our study adds another critical note to the “information–attitude change” theory regarding public attitudes towards the criminal justice system. As noted, our field experiment is much less likely to produce a participation effect than a study such as the Home Office study or, for that matter, studies such as the deliberative polls discussed earlier in this article. The fact that we found no effect may be taken to underline the importance of such study-participation effects, independent of the actual effect of providing respondents with better information. We should, however, be careful in weighing the theoretical implications of our study, because there are also limitations to what we have done. The most important limitation, we believe, is that the size and impact of those 20 reports published over the course of 1 year were not large enough to bring about any attitude change. The simple reason for this may be that those 20 newspaper reports were simply lost amidst the vast amount of media attention on crime and criminal justice nowadays, coming not just from one local newspaper but from many other sources, including national newspapers and popular television coverage of crime and punishment. In that respect, a project such as the newspaper juries simply has too small a scope to be able to produce a sizeable change in attitudes.