Law and Human Behavior, Volume 33, Issue 3, pp 225–236

The Impact of Eyewitness Expert Evidence and Judicial Instruction on Juror Ability to Evaluate Eyewitness Testimony

Original Article


It has been argued that psychologists should provide expert evidence to help jurors discriminate between accurate and inaccurate eyewitness identifications. In this article we compare the effects of judicial instruction with expert evidence that is either congruent or incongruent with the ground truth, focusing on juror ability to evaluate “real” eyewitness evidence. In contrast to studies which have employed “fictional” eyewitness designs, we found no appreciable effect of either congruent or incongruent expert evidence on participant-juror sensitivity to eyewitness accuracy. We discuss how the choice of methodology constrains the inferences and conclusions that can be drawn regarding the impact of eyewitness expert evidence.


Keywords: Eyewitness · Judge · Expert testimony · Memory · Decision-making

There is evidence that eyewitness identifications are sometimes inaccurate. Analysis of exonerations obtained from DNA evidence suggests that mistaken eyewitness testimony is a factor frequently associated with known wrongful convictions in the United States (Gross, Jacoby, Matheson, Montgomery, & Patil, 2005; Scheck & Neufeld, 2006; Scheck, Neufeld, & Dwyer, 2001). Traditionally, judicial instructions and eyewitness cross-examination have been recommended to counter jurors’ apparently inappropriate reliance on eyewitness evidence (Penrod & Cutler, 1999). However, DNA exoneration cases and other evidence implicating eyewitnesses in erroneous convictions suggest that these safeguards are inadequate to protect innocent defendants (Devenport & Cutler, 2004). More specifically, research suggests that judges and lawyers have a limited ability to help a jury discriminate between accurate and inaccurate eyewitness identifications (Benton, Ross, Bradshaw, Thomas, & Bradshaw, 2006; Devenport, Stinson, Cutler, & Kravitz, 2002), instead fostering a generalised disbelief of all eyewitnesses among jurors (Leippe, 1995; Ramirez, Zemba, & Geiselman, 1996). This has led some psychologists to recommend an alternative safeguard: expert testimony provided by a qualified research psychologist (Benton et al., 2006; Devenport & Cutler, 2004; Devenport et al., 2002; Leippe, 1995; Penrod & Cutler, 1999). Indeed, some researchers have suggested that jurors show an improved ability to evaluate eyewitness testimony following exposure to experts who provide information relating to human memory, and guidance in the assessment of eyewitnessing conditions (Cutler, Dexter, & Penrod, 1989; Cutler, Penrod, & Dexter, 1989; Devenport et al., 2002).
Overall, however, the evidence supporting the admissibility of eyewitness expert testimony is mixed, and has not unequivocally been shown to result in an improvement in juror performance (Leippe, 1995), which generally falls at chance levels (Lindsay, Wells, & O’Connor, 1989; Wells, Lindsay, & Ferguson, 1979).

Cutler, Penrod, et al. (1989) propose three possible effects of expert testimony: (1) Juror Confusion; (2) Juror Sensitivity; and (3) Juror Scepticism. To date, empirical evidence regarding the effects of expert testimony has been mixed, and although some studies have demonstrated increased juror Sensitivity following eyewitness expert evidence (Cutler, Dexter, et al., 1989; Cutler, Penrod, et al., 1989; Devenport et al., 2002; Geiselman, Putman, Korte, Shariary, Jachimowicz & Irzhevsky, 2002; Wells & Wright, 1983 cited in Wells, 1986), the frequency with which Scepticism has been observed (Cutler, Dexter, & Penrod, 1990; Fox & Walters, 1986; Leippe, Eisenstadt, Rauch, & Seib, 2004; Lindsay, 1994; Wells, Lindsay, & Tousignant, 1980) led Leippe (1995, p. 941) to characterise this less desirable outcome as a “near-ubiquitous” consequence of eyewitness expert evidence.

Researchers have also investigated the effects of judicial instruction relating to eyewitness identifications in terms of Sensitivity, Scepticism and Confusion. Investigations of the Telfaire instruction (US v Telfaire, 1972) by Greene (1988) and Hoffheimer (1989) identified effects consistent with Confusion, i.e. the instruction was found to have no significant effect on guilty verdicts. Scepticism was identified as a result of standard instruction plus summation and commentary (Katzev & Wishart, 1985), standard instruction following eyewitness testimony (Ramirez et al., 1996), and revised judicial instruction (Greene, 1988). Conversely, Ramirez et al. (1996) found that jurors who were presented with a judicial warning before they heard evidence from the eyewitness showed significant Sensitivity. Overall then, various studies using different forms of judicial instruction have found mixed effects on verdicts and, depending on the study, judicial instruction has been shown to result in each of the effects anticipated by Cutler, Penrod, et al. (1989).

Comparing eyewitness expert evidence with judicial instruction

Some researchers have argued that expert evidence provided by a psychologist has a more desirable effect on juror decision-making than does an instruction issued by a judge (Cutler & Penrod, 1995; Greene & Loftus, 1984; Leippe, 1995; Pezdek, 2007). Given this, it is surprising that to date only one study (Cutler et al., 1990) has directly compared the impact of eyewitness expert testimony (in that case, the evidence of a court-appointed expert) with a judicial instruction. Cutler and colleagues fabricated a trial scenario to investigate the relative impact of the Telfaire instructions and court-appointed expert testimony. Their results suggest that, while the expert induced a significant increase in Scepticism among jurors, judicial instruction had no systematic effect on juror decisions.

In addition to categorising research in this field on the basis of the type of outcome observed, it is also important to differentiate studies according to the type of research question they permit experimenters to explore. Studies which only present participant-jurors with fictional (i.e. fabricated) eyewitness statements, without orthogonally varying witnessing and identification conditions, confound Sensitivity and Scepticism effects (as in Hosch, Beck, & McIntyre, 1980; Maass, Brigham, & West, 1987). It is not possible, from these studies, to ascertain whether or not any observed change in the dependent variable (i.e. juror or jury verdicts) is appropriate given the conditions surrounding the identification (Cutler, Dexter, et al., 1989). It is only possible to classify the effect of the expert in terms of differences in the frequency of a particular outcome. That is, using this methodology we can determine whether guilty verdicts occur significantly more or less frequently after the testimony of an eyewitness expert, but we cannot determine if the expert has improved juror Sensitivity, as there is no way of knowing whether the change in the dependent variable (in this case: the number of guilty verdicts) was warranted.

An improvement on this method uses fictional eyewitness statements which vary in witnessing and identification conditions (Cutler, Dexter, et al., 1989; Cutler, Penrod, et al., 1989; Devenport et al., 2002; Fox & Walters, 1986; Geiselman et al., 2002; Leippe et al., 2004; Lindsay, 1994; Loftus, 1980). This method allows researchers to identify the extent to which, following expert evidence, jurors are sensitive to the manipulation of variables psychologists have identified as correlates of eyewitness accuracy. For example, the experimenter may construct relatively “good” and “poor” witnessing scenarios by manipulating features of the event such as: the lighting at the scene (Blonstein & Geiselman, 1990; Geiselman et al., 2002), duration of witness exposure to the perpetrator (Wells et al., 1980; Wells & Wright, 1983 cited in Wells, 1986), the presence or absence of disguise (Cutler, Dexter, et al., 1989; Cutler, Penrod, et al., 1989; Cutler et al., 1990; Wells et al., 1980; Wells & Wright, 1983 cited in Wells, 1986), and the extent to which the lineup shown to the eyewitness was constructed and administered in an unbiased fashion (Devenport et al., 2002; Devenport & Cutler, 2004; Lindsay, 1994). If jurors are found to be insensitive to such variables, researchers can determine whether, following the testimony, they show Scepticism, believing all eyewitnesses less irrespective of the witnessing and identification conditions, or are overly trusting, believing eyewitnesses irrespective of the conditions reported.

Experimental designs of a third kind utilise the testimony of real eyewitnesses who have actually viewed an event (usually a staged or recorded incident). These studies may or may not orthogonally vary witnessing and identification conditions (as in Wells & Wright, 1983 cited in Wells, 1986; Wells et al., 1980). In these studies knowledge of the accuracy of the eyewitness’s identification decision provides a “verifiable criterion” (Wells, 1986, p. 90) against which juror judgements can be evaluated. This allows the researcher to compare the objective accuracy of the eyewitness identification with juror evaluations of the accuracy of that identification. That is, we can determine whether or not jurors believe those witnesses who made objectively accurate identifications and disbelieve those eyewitnesses who made objectively inaccurate identifications. This method, unlike the previous two experimental designs, permits tests of hypotheses which are in keeping with the objectives espoused in the literature, such as: what is the impact of eyewitness expert testimony or judicial instruction on the accuracy of juror judgments? In addition to allowing us to discriminate between Sensitivity and Scepticism, experimental designs of this kind allow us to investigate the effects of expert testimony in terms of its ability to protect the innocent accused, without compromising the integrity of the evidence provided by accurate eyewitnesses. To date, this method has been employed in only two studies (Wells & Wright, 1983 cited in Wells, 1986; Wells et al., 1980), and only one of these studies (which did not undergo peer review) employed both target-absent and target-present lineups (Wells & Wright, 1983 cited in Wells, 1986). 
All other studies of the impact of expert testimony have used fictional eyewitness designs which, by virtue of presenting jurors with the accounts of imagined eyewitnesses, do not allow an objective measure of the accuracy of the jurors’ decisions as moderated by expert testimony. Instead, fictional designs equate all identification decisions made under “good” conditions with accuracy, and all identification decisions made under “poor” conditions with inaccuracy. While this approach may be defensible from a probabilistic standpoint, as more identifications made under good conditions will be accurate than identifications made under poor conditions, it denies jurors the opportunity to show Sensitivity in instances where the expert’s advice, although probabilistically accurate, is inaccurate in the individual case. An example of such an instance would be when, despite the odds, an eyewitness makes an accurate identification under objectively poor witnessing and identification conditions. Here, the best outcome would be for the juror to believe the eyewitness despite the expert’s evidence because the identification is objectively accurate. Hence, fictional designs do not allow us to assess the extent to which the juror can integrate the information provided by the expert with other information available to them and so provide the best possible evaluation of the eyewitness.

This article reports a study using a real eyewitness design to permit an investigation of the relative impacts of judicial instruction and the evidence of a defence-commissioned expert on juror judgement accuracy. Furthermore, this study begins to explore the functional limits of expert evidence on juror decision making by constructing a scenario where the expert provides jurors with information containing predictions which are either congruent or incongruent with the facts of the eyewitness’s identification. We suggest that by exploring these best-case (where the expert provides congruent or objectively helpful information) and worst-case scenarios (where the expert provides incongruent or objectively unhelpful information), either of which may manifest in the individual instance, we can begin to assess the possible range of effects produced by expert evidence. It is hypothesised that congruent expert evidence will result in significantly greater Sensitivity to eyewitness accuracy than will either incongruent expert evidence or judicial instructions. It is also anticipated that expert testimony, of either type, will significantly increase levels of participant-juror Scepticism.

Phase 1—witnessing & identification

In the first phase participant-witnesses were exposed to a crime scenario and then attempted to identify the perpetrator from a lineup. The aim of this phase was to generate real eyewitness identifications to act as the stimuli for the participant-juror evaluations conducted in Phase 2 of the study.


Participant-witnesses were 28 University clerical staff and students who responded to advertised requests for experimental participation in return for sponsored charitable donations made by the University. These witnesses were randomly allocated to view either a target-absent or target-present simultaneous lineup.


Videotaped Crime

We prepared a video of a bag-snatch incident in which a male perpetrator approached a young woman from behind while she was talking on her mobile phone. There was a struggle between the thief and the young woman before the culprit secured the victim’s bag and ran away. The thief was visible for 16 s; the crime took place in daylight, and during the film the thief was clearly seen from the front and in both profiles.


Each participant-witness viewed one of two photo lineups, either a simultaneous target-present lineup or a simultaneous target-absent lineup. Allocation to lineup condition was randomly determined by a computer program. The lineups consisted of nine colour “mug shot” style photographs. The images were randomly positioned in two rows, with five photographs on the top and four in the row below. Target-present lineups included eight foils and the target, while target-absent lineups were composed of the same eight foils plus one additional foil. For the purposes of this study, any identification from the target-absent lineup was considered to be a “suspect” identification. In addition, both lineups included a “Not Present” option which always appeared in the bottom right corner of the lineup array.

Witnessing Program

A computer program presented participant-witnesses with the crime video followed by either a target-absent or target-present lineup. The experimenter was blind to the allocation of participants to lineup condition. The witnessing program recorded the type of identification made and the participant-witness’s confidence in their decision as a percentage score.


The computer program instructed participant-witnesses to “watch carefully” and then presented the video recording of the crime. At the end of the video the participant-witness spent approximately 5 min answering 15 yes–no filler questions about their cognitive style. The program then asked participant-witnesses to try to identify (select with the computer mouse) the perpetrator from a lineup, instructing them that the perpetrator “may or may not be present”. Participants then rated their confidence in the accuracy of their decision.


Of the 28 participant-witnesses, five identified the perpetrator, four identified the suspect, one selected a foil, eight incorrectly rejected the lineup and 10 correctly rejected it. Overall, the correlation between the accuracy of their decision and their confidence at the time of the identification was not significant (rpbi = .00, p = .999, n = 28); however, confidence was a significant predictor of accuracy for those eyewitnesses who selected from the lineup (“choosers”; rpbi = .70, p < .05, n = 10), and for the subset of eyewitnesses most likely to appear in court: those witnesses who identified the perpetrator or made a positive selection from a target-absent lineup (“court choosers”; rpbi = .827, p < .01, n = 9). The confidence-accuracy correlation for those participant-witnesses rejecting the lineup was not statistically significant (rpbi = −.386, p = .114, n = 18).
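For readers unfamiliar with the statistic, the point-biserial correlation (rpbi) reported above is simply a Pearson correlation computed between a dichotomous accuracy code (0 = inaccurate, 1 = accurate) and the continuous confidence score. A minimal sketch, using hypothetical confidence values rather than the study's actual data:

```python
import math

def point_biserial(accuracy, confidence):
    """Pearson correlation between a 0/1 accuracy code and a
    continuous confidence score (equivalent to r_pbi)."""
    n = len(accuracy)
    mean_a = sum(accuracy) / n
    mean_c = sum(confidence) / n
    cov = sum((a - mean_a) * (c - mean_c)
              for a, c in zip(accuracy, confidence))
    var_a = sum((a - mean_a) ** 2 for a in accuracy)
    var_c = sum((c - mean_c) ** 2 for c in confidence)
    return cov / math.sqrt(var_a * var_c)

# Hypothetical "chooser" data: 1 = accurate identification, 0 = inaccurate,
# paired with percentage confidence ratings.
accuracy = [1, 1, 1, 0, 0, 0]
confidence = [80, 90, 60, 40, 40, 40]
print(round(point_biserial(accuracy, confidence), 3))  # ≈ 0.901
```

A strongly positive rpbi, as in this hypothetical sample, indicates that more confident witnesses tended to be the accurate ones.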


It has been suggested that the value of the confidence-accuracy relationship varies according to the mathematical methods used to assess it and its underlying components (Krug, 2007; Weber & Brewer, 2003). Surveys by Kassin, Ellsworth, and Smith (1989) and Kassin, Tubb, Hosch, and Memon (2001) revealed that 87% of experts believed the statement “[a]n eyewitness’s confidence is not a good predictor of eyewitness accuracy” was reliable enough to testify to in court. Increasingly, however, this standpoint is being challenged by mounting evidence (of the sort seen here) that an eyewitness’s confidence can be a useful marker of accuracy for certain types of identifications (Bothwell, Deffenbacher & Brigham, 1987; Brewer & Wells, 2006; Brigham, 1988; Fleet, Brigham, & Bothwell, 1987; Sporer, Penrod, Read, & Cutler, 1995; Weber & Brewer, 2003). The implication is that some experts would likely testify that confidence can usefully predict accuracy, while others would advise jurors that the witness’s confidence is not a useful indicator of their accuracy. In the next phase of the study we examined the impact of both of these forms of expert evidence. Given that, for this sample of eyewitnesses, we know there is a significant confidence-accuracy correlation, advice to this effect (i.e. congruent evidence) from the expert should increase the jurors’ sensitivity to eyewitness identification accuracy, while evidence that there is no confidence-accuracy relationship (i.e. incongruent evidence) should decrease juror sensitivity.

Phase 2—minimal trial

In this phase, each participant-juror was shown a video of the examination-in-chief and cross-examination of one of the participant-witnesses from Phase 1 and was asked to determine whether or not the identification was accurate. In addition, approximately one quarter of participant-jurors saw video of a judicial instruction relating to eyewitness identification evidence, one half saw expert evidence, while the remaining quarter acted as a “no instruction” control group. Jurors who saw the expert testimony of a psychologist either heard “congruent” evidence in which they were correctly told confidence was a useful predictor of identification accuracy, or “incongruent” evidence in which they were erroneously told confidence was not a useful guide to identification accuracy. Irrespective of the instruction received, participant-jurors were required to decide whether the participant-witness they viewed had made a correct identification (i.e. selected the perpetrator from the lineup) or had been mistaken.



Participant-witnesses. Of the 28 witnesses from Phase 1, the nine who identified either the perpetrator or the suspect were asked to participate in the Phase 2 interviews. Eight of these witnesses completed the interview (one accurate eyewitness failed to attend), and six of these interviews were ultimately presented to participant-jurors in Phase 2. These six interviews were obtained from the three eyewitnesses who expressed the most confidence in their decisions and made accurate identifications (60, 68 and 100% confidence; M = 76%, SD = 17.28), and the three least confident eyewitnesses who made inaccurate identifications (M confidence = 40%, SD = 0.0). This procedure did not alter the magnitude of the pre-existing confidence-accuracy correlation (rpbi = .827, n = 6, p = .042).

Participant-jurors. Two hundred and ninety-six undergraduate psychology students from the University of New South Wales acted in the role of jurors during scheduled psychology tutorials.


A 2 (witness type: correct vs. mistaken) × 4 (instruction type: congruent eyewitness expert evidence vs. incongruent eyewitness expert evidence vs. judicial instruction vs. control) between-subjects factorial design was employed.


Participant-witness Testimony

Two assistants (one acting as counsel for the prosecution, the other as counsel for the defence), who were trained to administer a standard interview schedule, video-recorded interviews with each of the six participant-witnesses. These assistants did not know whether the participant-witness had made an accurate or inaccurate identification, and conducted the interviews in a random order over a two-week period. The first interview (conducted by counsel for the prosecution) was in the style of an examination-in-chief, during which the witness was asked to describe what they saw and to outline the details of the identification process and their resulting decision. On average the examination-in-chief lasted just under four minutes. The second, cross-examination style interview focused on the witness’s estimate of the duration of the incident and other details regarding the perpetrator’s appearance. Importantly, during this interview the cross-examining counsel restated the confidence estimate originally provided by the eyewitness at the time they made their identification. This restatement was made irrespective of whether the original confidence estimate differed from the eyewitness’s estimate at the time of the trial. On average the cross-examination lasted just under three minutes. The same interview schedule was used for each of the six witnesses, although minor variations did arise as a consequence of individual eyewitness responses.

The Minimal Trial

All participant-jurors viewed approximately 7 min of footage showing the examination-in-chief and cross-examination of one participant-witness. For participant-jurors in either of the two expert conditions this was followed immediately by approximately 8 min of video showing the examination-in-chief and cross-examination of an eyewitness expert. The participant-jurors in the judicial instruction condition saw the testimony of an eyewitness followed by the judicial instruction (lasting approximately 4 min).

Pre-trial Instruction

A series of pre-trial instructions were read aloud to the participant-jurors. All participant-jurors were told that they were about to see the testimony of an actual witness to a crime who identified the police suspect and were asked to watch the video “as though they were a juror in the trial of the accused”. They were then asked to “examine and scrutinise the testimony of the witness with great care” (Judicial Commission of NSW [JCNSW], 2006, s3-610). Participant-jurors were also informed in general terms about the structure and purpose of the examination-in-chief and cross-examination. Those participant-jurors in the expert evidence condition were also provided a direction regarding the purpose and evaluation of expert testimony (JCNSW, 2006, s2-1110).

Congruent and Incongruent Expert Evidence

A research psychologist acted in the role of the expert in eyewitness identification issues. During the examination-in-chief the expert outlined their credentials, current position, area of expertise, and research history. They then addressed three key issues regarding eyewitness testimony: (1) the nature of memory as a reconstructive process; (2) system and estimator variables including distance, lighting, disguise, race and lineup type; and (3) the confidence-accuracy relationship. The information provided by the expert on the last of these issues differed across expert conditions. In the congruent expert condition, the expert explained that “the confidence expressed by a witness was a good indicator of the accuracy of their identification”, and also that it was a “strong predictor” of identification accuracy. Conversely, in the incongruent expert condition, the expert suggested that there was no relationship between a witness’s confidence and their accuracy. In all other respects the examination-in-chief was identical across conditions. The cross-examination of the expert was also identical across expert conditions, and highlighted some of the limitations of expert psychological testimony, including: (1) the reliance on mock crime paradigms and undergraduate participants in laboratory research; (2) the questionable ecological validity of studies which employ mock-crimes and mock-witnesses; and (3) the probabilistic nature of psychological testimony. The expert was given no prior knowledge regarding the nature or content of the questions contained within the cross-examination. The cross-examination lasted for two and a half minutes while the examination-in-chief was approximately five minutes in duration.

Judicial Instruction

The role of the judge was played by the same research psychologist who provided the eyewitness expert evidence. The judicial instruction lasted for approximately 4 min and was based on the direction recommended by JCNSW (2006, s3-020). This direction alerted jurors to the need for special caution wherever identification evidence is disputed, noted that completely honest witnesses can be mistaken in their identifications, and advised jurors to consider various factors, including lighting, distance, duration, context and reliability, when weighing the evidence. Importantly, no mention was made of witness confidence or the confidence-accuracy relationship in this instruction.

Juror Responses

After watching the video, participant-jurors completed a brief questionnaire, containing either 31 questions (in expert and judicial conditions) or 23 questions (control condition). Across all variants of the questionnaire, participants were asked to provide demographic information before being asked if they believed that the witness had made an accurate identification (yes or no), and to rate their confidence in this decision using a 7-point Likert scale (“not at all confident” to “extremely confident”). All participant-jurors also rated the eyewitness on the dimensions of trustworthiness, accuracy, likeability, attractiveness, anger, credibility and confidence on 7-point scales. In addition, participant-jurors from the expert and judicial conditions completed cued recall questions relating to the content of the evidence or instruction they heard.


Participant-jurors were tested in groups as part of a tutorial activity and were seated at individual computers loaded with the trial video. Each of 22 tutorial groups was randomly assigned to one of the four instruction conditions (congruent expert, incongruent expert, judicial or control) and within each of these groups participant-jurors were randomly assigned to view one of the six witness interviews. Thus, across the experiment each witness interview was seen under each of the different instruction conditions.

Once participant-jurors sat at a terminal they were read the pre-trial instructions specific to their condition and were then asked to put on headphones and open the video file on their computer. Those in the expert evidence and judicial instruction conditions first watched the participant-witness testimony, followed by either congruent or incongruent expert evidence or the judicial instruction as dictated by experimental condition. They were then asked to complete the response sheet. Those in the control condition watched the participant-witness testimony, participated in a 5-min filler task (jurors memorised strings of letters and numbers, between 7 and 14 items long, which they were required to recall after a 30 s delay), and then completed their response sheets.


Manipulation Checks

As a first step, we conducted a series of checks: (a) to ensure that participant-jurors were able to detect the underlying confidence-accuracy relationship; (b) to establish whether accurate and inaccurate eyewitnesses differed with regard to any of six other variables (credibility, accuracy, attractiveness, likeability, trustworthiness and anger); and (c) to ascertain if participant-jurors recalled the instructions they were provided by the expert or the judge.

Juror Perceptions of Accurate and Inaccurate Eyewitnesses. A 4 × 2 between groups ANOVA was conducted to explore the extent to which participant-juror estimates of eyewitness confidence (on a 7-point scale) varied as a function of eyewitness accuracy and instruction type. There was a significant main effect for eyewitness accuracy (F(1,289) = 72.60, p < .0005, ηp2 = .201), such that witnesses who had accurately identified the perpetrator from the lineup were rated by the jurors as being significantly more confident than eyewitnesses who had identified the innocent suspect (accurate eyewitnesses M = 4.58, 95% CI: 4.37–4.78; inaccurate eyewitnesses M = 3.36, 95% CI: 3.15–3.54). The main effect of instruction type was not significant (F(3,289) = 0.40, p = .756, ηp2 = .004) and there was no significant interaction between accuracy and instruction type (F(3,289) = 0.69, p = .559, ηp2 = .007).

Jurors also evaluated accurate and inaccurate eyewitnesses on six other dimensions. Analysis of these responses revealed that accurate witnesses were rated as significantly more credible than inaccurate witnesses (t(293) = −3.24, p ≤ .005). However, ratings of accurate and inaccurate eyewitnesses did not differ significantly with respect to perceived accuracy (t(294) = −0.03, p = .975), attractiveness (t(294) = −0.38, p = .707), likeability (t(295) = 0.17, p = .863), trustworthiness (t(293) = −1.71, p = .089) or anger (t(295) = 0.81, p = .419).

Juror Recall of Instruction. Juror recall for the information contained in the instruction or evidence that they heard was assessed using a series of multiple choice cued recall items requiring jurors to indicate which of four options most accurately reflected what the expert or judge said. For each question one response option was a quote taken directly from the judicial instruction or the expert’s testimony (e.g. “confidence is a good predictor of accuracy”), one option interpreted or paraphrased this quote (e.g. “confident witnesses are generally right”) and two were inaccurate accounts of what was said (e.g. “confidence is a poor predictor of accuracy” or “confident witnesses are generally mistaken”). Jurors were given a full mark for an item where they selected the direct quote, a half mark for its paraphrased alternative, and zero for either of the incorrect options. Out of a possible score of four marks, jurors in the congruent expert condition scored an average of 3.27 (SD = 0.64), those in the incongruent expert condition also scored an average of 3.27 (SD = 0.55), and those in the judicial instruction condition scored an average of 2.84 (SD = 0.61). A univariate ANOVA with post-hoc analyses indicated that juror recall for expert evidence was significantly better than for judicial instruction (F(2,228) = 12.43, p < .0005). For the specific item relating to the confidence-accuracy relationship, 83.8% of jurors in the congruent expert condition selected the verbatim quote, and 8.8% chose its paraphrased alternative, meaning that 92.6% of jurors had a largely accurate recollection of the evidence they heard. A very similar pattern was evident in the incongruent evidence condition, with a total of 90.1% of jurors accurately recalling the details of the incongruent expert evidence (88.9% selecting the verbatim quote and 1.2% choosing the paraphrased alternative).
The observed patterns of accuracy were not found to differ significantly across expert instruction conditions (χ²(1) = 0.29, p = .539). These findings show that overall participant-jurors had a good memory for the instruction they heard (although significantly poorer in the judicial condition than in the expert conditions), and that memory for the congruent and incongruent evidence was equivalent.
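The marking scheme for these cued-recall items is easy to mis-read in prose; the following sketch (our reconstruction for illustration, not the authors' materials) makes the mapping from response type to marks explicit.

```python
# Reconstruction of the cued-recall marking scheme described in the text:
# a verbatim quote earns a full mark, its paraphrase a half mark, and
# either incorrect option earns nothing, across four items.
MARKS = {"verbatim": 1.0, "paraphrase": 0.5, "incorrect": 0.0}

def recall_score(responses):
    """Total recall score (0-4) for one juror's four item responses."""
    return sum(MARKS[r] for r in responses)

# A hypothetical juror who selects two verbatim quotes, one paraphrase,
# and one incorrect option scores 2.5 of a possible 4 marks.
print(recall_score(["verbatim", "verbatim", "paraphrase", "incorrect"]))  # 2.5
```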

Effects of Instruction Type

Scepticism. Chi-squared tests revealed that belief rates were comparable across all conditions (χ²(3) = 1.16, p = .764), indicating that neither expert testimony nor judicial instruction caused jurors to adopt a more stringent belief criterion compared to controls and, therefore, did not induce Scepticism amongst participant-jurors. Rates of belief did not differ significantly from 50% overall (χ²(1) = 1.10, p = .295).

Sensitivity to Eyewitness Confidence. A 4 × 2 between-groups ANOVA was conducted to investigate how ratings of eyewitness confidence varied with instruction type and belief decision (i.e. whether the juror believed the eyewitness or not). Although instruction type had no impact on juror perceptions of confidence (F(3,288) = 0.29, p = .803, ηp² = .003), a significant main effect of belief was observed such that eyewitnesses who were believed (M = 4.44, 95% CI: 4.24–4.66) were rated as significantly more confident than those who were disbelieved (M = 3.54, 95% CI: 3.34–3.74; F(1,288) = 37.37, p < .0005, ηp² = .115). A significant interaction effect was also identified (F(3,288) = 2.69, p = .047, ηp² = .027); however, post-hoc analyses revealed no significant pair-wise differences. A follow-up analysis considering only the ratings of witness confidence assigned by jurors in the expert conditions revealed a significant belief by expert evidence interaction (F(1,156) = 7.07, p ≤ .01, ηp² = .043), as well as the main effect of belief described above. Participant-jurors who heard the testimony of the congruent expert showed greater differentiation in the confidence ratings they assigned to believed and disbelieved witnesses than jurors who heard the evidence presented by the incongruent expert. Thus, jurors told to use witness confidence to help them evaluate the accuracy of the eyewitness rated believed eyewitnesses as more confident than disbelieved eyewitnesses. In contrast, jurors who were told that confidence was not a useful indicator of eyewitness accuracy rated believed and disbelieved eyewitnesses as equally confident (see Fig. 1 below).
While it is not clear from this analysis if confidence influenced belief, or vice versa, it is clear that the type of expert instruction provided significantly altered the relationship between decision type and estimates of eyewitness confidence.
Fig. 1

Participant-juror ratings of eyewitness confidence by instruction condition and belief type

Sensitivity to Eyewitness Accuracy. Overall, participant-juror evaluations of eyewitness accuracy were correct 63.6% of the time. This represents a level of discrimination significantly better than would have been expected by chance alone (χ²(1) = 22.09, p < .0005). Participant-jurors in the control, congruent expert, incongruent expert and judicial conditions attained 71.2, 66.3, 56.8 and 61.4% evaluation accuracy, respectively; however, there was no significant association between instruction type and accuracy (χ²(3) = 3.66, p = .300). This was also true when only the expert conditions were considered (χ²(1) = 1.52, p = .271). A binary logistic regression was also conducted in an attempt to identify an effect of instruction type; however, the model was not significant either for all instruction types (β = −0.11, p = .201) or for the expert conditions in isolation (β = −0.40, p = .218). Thus, although jurors were able to discriminate accurate from inaccurate witnesses, their performance was not affected by the instruction they received.
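The chance-level comparison above is a one-degree-of-freedom chi-squared goodness-of-fit test against equal expected frequencies. As a sketch, using illustrative counts (roughly 63.6% of about 296 judgements; the exact cell counts are not reported in the text), it can be computed with the standard library alone:

```python
import math

# One-df chi-squared goodness-of-fit test of evaluation accuracy against
# chance (50%). The counts are illustrative, not the authors' exact cells.
correct, incorrect = 188, 108          # ~63.6% correct of 296 judgements
expected = (correct + incorrect) / 2   # equal frequencies under the null
stat = ((correct - expected) ** 2 + (incorrect - expected) ** 2) / expected
# For df = 1 the chi-squared survival function reduces to erfc(sqrt(x / 2)).
p = math.erfc(math.sqrt(stat / 2))
print(stat, p)  # stat well above the 3.84 critical value, p < .001
```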

Signal detection measures were calculated (see Macmillan & Creelman, 2005) for each instruction condition (see Table 1) in order to estimate participant-juror Sensitivity to the accuracy of eyewitness identifications (d′) and their Scepticism (C). When evaluating Sensitivity, greater values of d′ indicate a greater ability to discriminate between a signal (in this case an accurate identification of the perpetrator by the eyewitness) and noise (the identification of an innocent suspect). Positive values of C indicate that jurors are biased towards believing that the eyewitness correctly identified the perpetrator, while negative values of C indicate a tendency to be Sceptical and disbelieve the identification made by the eyewitness. No reliable difference in Sensitivity was found between the two most discrepant conditions: control (d′ 95% CI: 0.51 to 1.81) and incongruent expert (d′ 95% CI: −0.21 to 0.89). Similarly, a comparison of the C estimates showing the largest difference suggests that there is no reliable difference in observed Scepticism between the control condition (C 95% CI: −0.16 to 0.50) and the judicial condition (C 95% CI: −0.24 to 0.37).
Table 1

Judgement type as percent within instruction condition, observed d′ and C values

Instruction condition    Miss (%)   Hit (%)   Correct rejection (%)   d′ (±95% CI)    C (±95% CI)
Control                      –          –             –              1.157 (0.65)     0.17 (0.33)
Congruent expert             –          –             –              0.810 (0.57)     0.15 (0.29)
Incongruent expert           –          –             –              0.341 (0.55)     0.05 (0.28)
Judicial instruction         –          –             –              0.588 (0.60)    −0.06 (0.30)
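The d′ and C estimates in Table 1 follow the standard signal detection formulas (Macmillan & Creelman, 2005), in which hit and false-alarm rates are converted to z-scores. A minimal sketch with illustrative rates rather than the study's observed data; note that sign conventions for C differ between texts, so the convention below may not match the table's:

```python
from statistics import NormalDist

# Standard signal detection estimates (Macmillan & Creelman, 2005):
# d' measures ability to discriminate accurate from inaccurate eyewitnesses,
# C measures response bias. Sign conventions for C vary across texts.
def dprime_and_c(hit_rate, false_alarm_rate):
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(false_alarm_rate)
    c = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, c

# Illustrative rates: believing 70% of accurate and 30% of inaccurate
# witnesses gives d' of about 1.05 with no net bias (C = 0).
d, c = dprime_and_c(0.7, 0.3)
```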

Participant-Juror Decision-Making

Reported Predictors of Juror Belief Decisions. All participant-jurors were asked to rate the extent to which each of three factors (eyewitness confidence, eyewitness manner and witnessing condition) affected their decision to believe the eyewitness or not. These ratings were analysed using three separate 2 (belief type: believe or disbelieve) × 4 (instruction type: control, congruent expert, incongruent expert or judge) ANOVAs. These analyses revealed that jurors in the congruent expert condition rated eyewitness confidence as significantly more influential than did the jurors in the incongruent expert condition (F(3,295) = 4.26, p ≤ .01, ηp² = .042). Participant-jurors in the judicial instruction condition reported relying on eyewitness manner significantly more than those from any other condition (F(3,295) = 5.33, p ≤ .005, ηp² = .053), while participant-jurors from the control condition were significantly less likely than other groups to report using information about witnessing and identification conditions (F(3,295) = 7.24, p ≤ .0005, ηp² = .07).

Observed Predictors of Juror Belief Decisions. A binary logistic regression was conducted in order to investigate which, if any, characteristics of the eyewitness predicted participant-juror decisions to believe or disbelieve the eyewitness they saw. This analysis was conducted separately for each instruction condition using the participant-jurors’ rating of eyewitness credibility, accuracy, confidence, attractiveness, likeability, trustworthiness and anger as the predictors in the models.

None of these factors significantly predicted belief decisions in the control condition. Juror estimates of eyewitness confidence were the only significant predictor in the congruent expert condition (β = 1.11, Exp(β) = 3.03, p < .005); perceptions of eyewitness accuracy were the only significant predictor of belief in the incongruent expert condition (β = 1.35, Exp(β) = 3.85, p < .005), while both accuracy and attractiveness were significant predictors in the judicial condition (accuracy: β = 0.78, Exp(β) = 2.15, p < .05; attractiveness: β = −0.67, Exp(β) = 0.51, p < .05).
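The Exp(β) values reported here are odds ratios, i.e. exponentiated logistic-regression coefficients: a one-unit increase in the predictor multiplies the odds of believing the eyewitness by exp(β). For example:

```python
import math

# Exp(B) in a logistic regression is the odds ratio: a one-unit increase in
# the rated-confidence predictor multiplies the odds of believing the
# eyewitness by exp(beta).
beta_confidence = 1.11                # congruent expert condition, from the text
odds_ratio = math.exp(beta_confidence)
print(round(odds_ratio, 2))           # 3.03, matching the reported Exp(B)
```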

General discussion

In many ways the participant-jurors in our study performed very well. They not only heard and recalled the instructions they were given, but they also reported cognitions, and objectively behaved, in a manner consistent with the advice given to them by the expert. Those participants who were told that confidence was a good predictor of accuracy reported relying on confidence significantly more than participants who were told that confidence did not predict accuracy. The jurors who heard congruent or helpful expert evidence also made decisions which were significantly predicted by their estimates of eyewitness confidence. Notably, the decisions of those participants who were erroneously told that confidence was a poor predictor of eyewitness accuracy could not be predicted on the basis of their estimates of eyewitness confidence; rather, these jurors appear to have relied significantly more on other factors when deciding whether to believe the eyewitness’s evidence or not. Participant-jurors in the judicial condition reported relying on the manner of the eyewitness in making their decisions, and indeed their beliefs were significantly predicted by their estimates of eyewitness accuracy and attractiveness. Yet, despite this evidence that the participant-jurors understood and responded appropriately to the expert evidence or judicial instruction they received, the objective accuracy of the judgements they made was not found to be significantly associated with the type of instruction they heard. That is, participant-jurors who were correctly informed that confidence was a good predictor of eyewitness accuracy did not perform significantly better than participant-jurors who were either provided with no information about the confidence-accuracy relationship, or those erroneously told that confidence was a poor predictor of accuracy.
Given that confidence was a strong predictor of eyewitness accuracy in this sample (rpbi = .827), the jurors’ failure to use this information to improve their discrimination performance is concerning, although not necessarily surprising. Specifically, calibration data reported by Brewer and Wells (2006) suggest that, probabilistically speaking, it may be appropriate for jurors to disbelieve eyewitnesses whose confidence is 70% or less (as was the case for five of the six eyewitnesses in this study) as they will be correct less than 60% of the time. However, it is difficult to suggest that a criterion of this sort explains the failure to identify a difference between congruent and incongruent expert evidence and judicial instruction in this case, as there is no evidence of a belief bias in any condition. A more satisfactory account attributes the null effect of instruction type to the fact that participant-juror perceptions of confidence were not perfectly calibrated with the eyewitness’s own numerical expression of confidence (rpbi = .593, p < .0005). This indicates that participant-jurors were using more than just these numerical statements to evaluate confidence, possibly incorporating verbal and non-verbal cues into their estimates. Moreover, the witness’s own confidence was also free to vary between the time they made their identification and the time they gave their evidence. Together these factors may have been sufficient to “swamp” any effect of expert evidence on accuracy. If this is the case, it is difficult to advocate for the inclusion of eyewitness experts as a viable alternative to judicial instruction, as it is likely that similar variations in the expression and interpretation of eyewitness confidence will serve to undermine the utility of expert evidence in real world contexts.

Perhaps more important than the specific null effect observed is that this study allows us to consider what conclusions would have been formed had we conducted this research using the fictional eyewitness design more commonly employed in this field. In fictional eyewitness designs experimenters vary one or more witnessing and identification factors, and an expert provides information about the impact of these factors on likely eyewitness accuracy. In these designs the effect of expert evidence on juror sensitivity is defined by the extent to which jurors evaluate eyewitness testimony in a fashion consistent with the advice of the expert. In this study we evaluated participant-juror sensitivity to the positive or neutral confidence-accuracy correlation described by the expert. It is possible, then, for us to evaluate participant-juror performance with regard to its correspondence with the expert advice, without consideration for the objective accuracy of the eyewitness in each case. That is, we can model what conclusions we would have reached had we employed a fictional design. Interestingly, this analysis (reported in the Results section under the subheading Sensitivity to Eyewitness Confidence) suggests that expert evidence does increase Sensitivity to the manipulated variable without an associated Scepticism effect, i.e. those participant-jurors told confidence was a good predictor of accuracy reported a significantly greater discrepancy in their estimates of confidence for believed and disbelieved eyewitnesses than that seen among participant-jurors directed not to use eyewitness confidence in this way. Moreover, estimates of eyewitness confidence were found to be significant predictors of belief decisions amongst participant-jurors who were told to rely on confidence, while estimates of confidence did not significantly predict belief decisions amongst participant-jurors told not to rely on confidence.
Thus, had we not used a real eyewitness design, and therefore been unable to measure juror performance against an objective accuracy criterion, our conclusion would have been that eyewitness expert evidence resulted in increased juror sensitivity to the manipulated factor while judicial instruction did not. The evidence reported in this study, however, must serve to challenge the inevitability and indeed the validity of such an inference, or any which might suggest that the evidence of significant Sensitivity to the Expert Evidence (SEE) observed in studies with fictional eyewitness designs translates into, or is equivalent to, significant Sensitivity to Eyewitness Accuracy (SEA). In this instance at least, significant sensitivity to the relevant factors discussed by an expert did not translate into a significant association between expert evidence and performance accuracy.

Indeed, the discrepancy between the outcomes measured using real (SEA) and fictional (SEE) eyewitness designs is not unexpected given that a juror’s decision to believe or disbelieve an eyewitness is unlikely to be influenced solely by the factors manipulated in the trial. Instead, it is likely that real jurors will consider a large number of factors including: elements of the eyewitness’s personality, presentation and performance; their own expectations and assumptions; and their interpretation of the expert’s advice. Accordingly, we suggest that until further investigations are made focusing on the appropriateness of inferring Sensitivity to Eyewitness Accuracy from the observation of an increase in Sensitivity to Expert Evidence, it cannot be assumed that fictional methodologies provide an adequate test of the effectiveness of expert testimony. Furthermore, we suggest that Sensitivity to Expert Evidence should be seen as a construct distinct from, although not independent of, Sensitivity to Eyewitness Accuracy. It is not our position at this time that one of these standards is invariably more valid or appropriate than the other; rather, we suggest that both need to be considered and evaluated when weighing up the costs and benefits of providing eyewitness expert evidence.


Reliance on the evaluation of confidence estimates provided by real eyewitnesses may somewhat limit the extent to which the results of this study generalise to other forms of expert evidence. We fully accept that a different pattern of instruction effects might have been observed if the presence of “swamping” factors had been reduced, or if different eyewitnessing factors and expert advice had been used. Moreover, since confidence has previously been identified as one of the heuristics that participant-jurors naturally tend to rely on when evaluating eyewitness accuracy (Bradfield & Wells, 2000; Brewer & Burke, 2002; Cutler, Penrod & Stuve, 1988; Lindsay et al., 1989; Luus & Wells, 1994), it may be the case that our participant-jurors were going to use confidence to predict accuracy anyway. Given the particularly strong confidence-accuracy relationship in our sample of eyewitnesses, this may have artificially inflated the accuracy of participant-juror judgements in the control and judicial conditions relative to what could be expected for populations of eyewitnesses where the confidence-accuracy relationship is less robust. Yet, even the prospect of artificially inflated performance in the control and judicial conditions does not account for the absence of a significant difference between participant-juror accuracy in the two expert conditions. Instead, the null effect of instruction type may reflect the impact of various factors which are likely to undermine the utility of expert evidence in the real world (e.g. jurors imperfectly perceiving eyewitness confidence, the hardening of eyewitness confidence, and the imperfect use of probabilistic predictors in the individual instance). Irrespective of the validity of this account, the interpretation of a null effect is always problematic.
Even so, we believe that these data allow us to conclude with some certainty that congruent and incongruent expert testimony had no detectable effect on the accuracy of participant-juror decision making because: (a) our design had sufficient statistical power to provide a 71% chance (i.e. power of .71) of detecting a difference between the expert conditions that was small to moderate in size (w = 0.2), and a 96% chance of detecting an effect that was moderate or larger (w > 0.3); (b) we know there was a difference between accurate and inaccurate eyewitnesses which our participant-jurors detected (accurate eyewitnesses were perceived to be significantly more confident than inaccurate eyewitnesses); and (c) the provision of the expert testimony resulted in predicted effects on other measures such as participant-juror ratings of the influence of eyewitness confidence. Thus, it is with some confidence that we conclude that expert evidence did not exert a substantial or significant effect on juror discrimination accuracy in this realistic context, and that whatever impact it did have was not significantly different from that of the judicial instruction.
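The power figures in (a) can be approximately reconstructed for a one-degree-of-freedom chi-squared test by exploiting the fact that a noncentral chi-square with one df is the square of a shifted standard normal variate. The combined expert-condition sample size of 160 used below is our assumption, chosen only to illustrate the calculation:

```python
from statistics import NormalDist

# Approximate power of a df = 1 chi-squared test with effect size w and n
# observations: the test statistic is noncentral chi-square(1, n * w^2),
# i.e. (Z + sqrt(ncp))^2, so power can be read off the normal CDF.
def power_chi2_df1(n, w, alpha=0.05):
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)   # sqrt of the chi-squared critical value
    shift = (n * w ** 2) ** 0.5          # sqrt of the noncentrality parameter
    return (1 - nd.cdf(z_crit - shift)) + nd.cdf(-z_crit - shift)

# With an assumed combined expert-condition n of 160 (our assumption):
print(power_chi2_df1(160, 0.2))   # roughly .71-.72, close to the reported .71
print(power_chi2_df1(160, 0.3))   # roughly .97, in line with the reported 96%
```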

In conclusion, we suggest that the method employed here represents a non-trivial improvement on previous research using both fictional and real eyewitness designs. Specifically, this study provides a more ecologically valid test of the impact of expert evidence on participant-juror evaluations than that generally found using fictional eyewitness approaches. Moreover, this study addresses some limitations of previous real eyewitness research (Wells et al., 1980) through the inclusion of testimony from eyewitnesses who made identifications from target-present and target-absent lineups, and by providing for a direct comparison between the effects of expert evidence and judicial instruction on juror sensitivity to eyewitness accuracy. Most importantly, we believe that the distinction provided here between fictional and real eyewitness designs makes a fruitful contribution to future discussions regarding the measurement and evaluation of the impact of eyewitness expert evidence.



This research was supported by Discovery Grant DP0452699 from the Australian Research Council to the second author. We also thank Amanda Barnier, Nathan Weber and the following students for their contributions: Jade Hucker, Erin Littlewood, Alexa Muratore, Tamara Sweller and Shaina Terry.


  1. Benton, T. R., Ross, D. F., Bradshaw, E., Thomas, W., & Bradshaw, G. S. (2006). Eyewitness memory is still not common sense: Comparing jurors, judges and law enforcement to eyewitness experts. Applied Cognitive Psychology, 20(1), 115–129.
  2. Blonstein, R., & Geiselman, E. (1990). Effects of witnessing conditions and expert witness testimony on credibility of an eyewitness. American Journal of Forensic Psychology, 8(4), 11–19.
  3. Bothwell, R. K., Deffenbacher, K. A., & Brigham, J. C. (1987). Correlation of eyewitness accuracy and confidence: Optimality hypothesis revisited. Journal of Applied Psychology, 72, 691–695.
  4. Bradfield, A., & Wells, G. L. (2000). The perceived validity of eyewitness identification testimony: A test of the five Biggers criteria. Law and Human Behavior, 24(5), 581–594.
  5. Brewer, N., & Burke, A. (2002). Effects of testimonial inconsistencies and eyewitness confidence on mock-juror judgments. Law and Human Behavior, 26(3), 353–364.
  6. Brewer, N., & Wells, G. L. (2006). The confidence-accuracy relationship in eyewitness identification: Effects of lineup instructions, foil similarity, and target-absent base rates. Journal of Experimental Psychology: Applied, 12(1), 11–30.
  7. Brigham, J. C. (1988). Is witness confidence helpful in judging eyewitness accuracy? In M. M. Gruneberg, P. E. Morris, & R. N. Sykes (Eds.), Practical aspects of memory: Current research and issues, Vol. 1: Memory in everyday life (pp. 77–82). Oxford, England: Wiley.
  8. Cutler, B. L., Dexter, H. R., & Penrod, S. D. (1989). Expert testimony and jury decision making: An empirical analysis. Behavioral Sciences & the Law, 7(2), 215–225.
  9. Cutler, B. L., Dexter, H. R., & Penrod, S. D. (1990). Nonadversarial methods for sensitizing jurors to eyewitness evidence. Journal of Applied Social Psychology, 20(14, Pt 2), 1197–1207.
  10. Cutler, B. L., & Penrod, S. D. (1995). Mistaken identification: The eyewitness, psychology, and the law. Cambridge: Cambridge University Press.
  11. Cutler, B. L., Penrod, S. D., & Dexter, H. R. (1989). The eyewitness, the expert psychologist, and the jury. Law and Human Behavior, 13(3), 311–332.
  12. Cutler, B., Penrod, S. D., & Stuve, T. E. (1988). Juror decision making in eyewitness identification cases. Law and Human Behavior, 12(1), 41–55.
  13. Devenport, J. L., & Cutler, B. L. (2004). Impact of defense-only and opposing eyewitness experts on juror judgments. Law and Human Behavior, 28(5), 569–576.
  14. Devenport, J. L., Stinson, V., Cutler, B. L., & Kravitz, D. A. (2002). How effective are the cross-examination and expert testimony safeguards? Jurors’ perceptions of the suggestiveness and fairness of biased lineup procedures. Journal of Applied Psychology, 87(6), 1042–1054.
  15. Fleet, M. L., Brigham, J. C., & Bothwell, R. K. (1987). The confidence-accuracy relationship: The effects of confidence assessment and choosing. Journal of Applied Social Psychology, 17(2), 171–187.
  16. Fox, S. G., & Walters, H. A. (1986). The impact of general versus specific expert testimony and eyewitness confidence upon mock juror judgment. Law and Human Behavior, 10(3), 215–228.
  17. Geiselman, R. E., Putman, C., Korte, R., Shahriary, M., Jachimowicz, G., & Irzhevsky, V. (2002). Eyewitness expert testimony and juror decisions. American Journal of Forensic Psychology, 20(3), 21–36.
  18. Greene, E. (1988). Judge’s instruction on eyewitness testimony: Evaluation and revision. Journal of Applied Social Psychology, 18(3, Pt 1), 252–276.
  19. Greene, E., & Loftus, E. F. (1984). Solving the eyewitness problem. Behavioral Sciences & the Law, 2(4), 395–406.
  20. Gross, S. R., Jacoby, K., Matheson, D. J., Montgomery, N., & Patil, S. (2005). Exonerations in the United States: 1989 through 2003. The Journal of Criminal Law & Criminology, 95(2), 523–560.
  21. Hoffheimer, M. H. (1989). Effect of particularized instructions on evaluation of eyewitness identification evidence. Law & Psychology Review, 13, 43–58.
  22. Hosch, H. M., Beck, E. L., & McIntyre, P. (1980). Influence of expert testimony regarding eyewitness accuracy on jury decisions. Law and Human Behavior, 4(4), 287–296.
  23. Judicial Commission of NSW. (2006). Criminal trial courts bench book. Retrieved 21 March 2006, from
  24. Kassin, S. M., Ellsworth, P. C., & Smith, V. L. (1989). The “general acceptance” of psychological research on eyewitness testimony: A survey of the experts. American Psychologist, 44(8), 1089–1098.
  25. Kassin, S. M., Tubb, V. A., Hosch, H. M., & Memon, A. (2001). On the “general acceptance” of eyewitness testimony research. American Psychologist, 56(5), 405–416.
  26. Katzev, R. D., & Wishart, S. S. (1985). The impact of judicial commentary concerning eyewitness identifications on jury decision making. Journal of Criminal Law & Criminology, 76(3), 733–745.
  27. Krug, K. (2007). The relationship between confidence and accuracy: Current thoughts of the literature and a new area of research [Electronic version]. Applied Psychology in Criminal Justice, 3(1), 7–41.
  28. Leippe, M. R. (1995). The case for expert testimony about eyewitness memory. Psychology, Public Policy, and Law, 1(4), 909–959.
  29. Leippe, M. R., Eisenstadt, D., Rauch, S. M., & Seib, H. M. (2004). Timing of eyewitness expert testimony, jurors’ need for cognition, and case strength as determinants of trial verdicts. Journal of Applied Psychology, 89(3), 524–541.
  30. Lindsay, R. C. L. (1994). Expectations of eyewitness performance: Jurors’ verdicts do not follow from their beliefs. In D. F. Ross, J. D. Read, & M. P. Toglia (Eds.), Adult eyewitness testimony: Current trends and developments (pp. 362–384). New York, NY: Cambridge University Press.
  31. Lindsay, R., Wells, G. L., & O’Connor, F. (1989). Mock-juror belief of accurate and inaccurate eyewitnesses: A replication and extension. Law and Human Behavior, 13(3), 333–339.
  32. Loftus, E. F. (1980). Impact of expert psychological testimony on the unreliability of eyewitness identification. Journal of Applied Psychology, 65(1), 9–15.
  33. Luus, C. A., & Wells, G. L. (1994). The malleability of eyewitness confidence: Co-witness and perseverance effects. Journal of Applied Psychology, 79(5), 714–723.
  34. Maass, A., Brigham, J. C., & West, S. G. (1987). Testifying on eyewitness reliability: Expert advice is not always persuasive. In L. S. Wrightsman, C. E. Willis, & S. M. Kassin (Eds.), On the witness stand (pp. 240–262). Thousand Oaks, CA: Sage Publications.
  35. Macmillan, N. A., & Creelman, C. D. (2005). Detection theory: A user’s guide (2nd ed.). Routledge.
  36. Penrod, S. D., & Cutler, B. (1999). Preventing mistaken convictions in eyewitness identification trials: The case against traditional safeguards. In R. Roesch, S. D. Hart, & J. R. P. Ogloff (Eds.), Psychology and law: The state of the discipline (pp. 89–118). Dordrecht, Netherlands: Kluwer Academic Publishers.
  37. Pezdek, K. (2007). Expert testimony on eyewitness memory and identification. In M. Costanzo, D. Krauss, & K. Pezdek (Eds.), Expert psychological testimony for the courts (Chapter 4). Mahwah, NJ: Erlbaum.
  38. Ramirez, G., Zemba, D., & Geiselman, R. (1996). Judges’ cautionary instructions on eyewitness testimony. American Journal of Forensic Psychology, 14(1), 31–66.
  39. Scheck, B., & Neufeld, P. (2006). The innocence project. Retrieved March 7, 2006, from
  40. Scheck, B., Neufeld, P., & Dwyer, J. (2001). Actual innocence (2nd ed.). New York, NY: Signet Printing.
  41. Sporer, S., Penrod, S., Read, D., & Cutler, B. (1995). Choosing, confidence, and accuracy: A meta-analysis of the confidence-accuracy relation in eyewitness identification studies. Psychological Bulletin, 118(3), 315–327.
  42. U.S. v. Telfaire, 469 F.2d 552 (D.C. Cir. 1972).
  43. Weber, N., & Brewer, N. (2003). The effect of judgment type and confidence scale on confidence-accuracy calibration in face recognition. Journal of Applied Psychology, 88(3), 490–499.
  44. Wells, G. L. (1986). Expert psychological testimony: Empirical and conceptual analyses of effects. Law and Human Behavior, 10(1–2), 83–95.
  45. Wells, G. L., Lindsay, R., & Ferguson, T. J. (1979). Accuracy, confidence, and juror perceptions in eyewitness identification. Journal of Applied Psychology, 64(4), 440–448.
  46. Wells, G. L., Lindsay, R. C., & Tousignant, J. P. (1980). Effects of expert psychological advice on human performance in judging the validity of eyewitness testimony. Law and Human Behavior, 4(4), 275–285.

Copyright information

© American Psychology-Law Society/Division 41 of the American Psychological Association 2008

Authors and Affiliations

  1. School of Psychology, University of New South Wales, Sydney, Australia
  2. National Drug and Alcohol Research Centre, University of New South Wales, Sydney, Australia
