Fingerprint, Forensic Evidence of
Fingermark identification procedure; Automatic fingerprint identification system; Forensic evaluation of fingerprints and fingermarks.
Forensic fingerprint evidence is the field of forensic expertise concerned with inferring the identity of a source from the examination of friction ridge skin, namely the fingers, palms, toes, and soles, and the marks they leave. For the sake of simplicity, this text focuses mainly on fingerprints and fingermarks. The extreme variability of fingerprints derives, firstly, from knowledge of the morphogenesis of the papillary ridges, which pertains to embryology, and, secondly, from statistical research pertaining to dactyloscopy. This variability is exploited in four different processes within forensic science: identity verification, forensic intelligence, forensic investigation, and forensic evaluation. The first three processes are based on the use of Automatic Fingerprint Identification Systems (AFIS). The fourth process, forensic evaluation, is an expert-based process built on procedure, training, and experience. Procedure and practice vary considerably between countries, principally regarding the threshold used for forensic identification. Most European and South American countries favor a quantitative approach based on a numerical standard, whereas the USA, the UK, and most Scandinavian countries have adopted a qualitative approach based on the experience and knowledge of the dactyloscopist. In both approaches, the decision is a deterministic expert opinion: exclusion, inconclusive, or identification. As current practice is not error-free and rests partly on the subjective probabilities of the dactyloscopists, efforts are being made to develop a new approach based on a logical inference model and statistical probabilities, in order to assist dactyloscopists in producing a logical, testable, and quantitative evaluation of the fingerprint evidence.
At the end of the 19th century, William Herschel and Henry Faulds expressed the principles of the forensic use of fingerprints and fingermarks: the use of fingerprints and fingerprint databases for the identification of serial offenders, and the use of fingermarks to establish a link between a crime scene or an object and an individual. In the literature, confusion exists between the terms fingerprint and fingermark. This article uses a uniform terminology: the finger dermatoglyphics and their standard rolled inked impressions are named fingerprints, whereas recovered traces left by unprotected fingers are named fingermarks. In criminal records, reference prints are collected using forms named ten-print cards.
Individuality of the Fingerprint
Confusion surrounds the terms identity, identify, and identification in forensic science. This is clearly demonstrated in popular practice, when the perpetrator of an offense is said to be "identified from her/his fingerprints". The perpetrator is not identified, but individualized: what is proved by the fingerprints is individuality. To individualize a human being on the basis of fingermarks ultimately consists in determining whether an individual is the source of the fingermark linked to the criminal activity. The individuality of fingerprints derives, firstly, from knowledge of the morphogenesis of the papillary ridges, which pertains to embryology, and, secondly, from statistical research pertaining to dactyloscopy.
The morphogenesis of the friction ridge skin offers a biological basis for the variability of friction ridge patterns. The morphogenesis of the human hands and feet starts during the 6th week of the estimated gestational age (EGA). The ridge skin pattern is established from the 10th to the 14th week of EGA, when the basal layer of the volar epidermis becomes folded and forms the primary ridges. This process is influenced by the volar pads, local eminences of subcutaneous tissue at well-defined locations of the volar surfaces. It is conjectured that the inversion of the volar pads creates tensions in the epidermis that align the ridge pattern. From this moment up to the 16th week of EGA, the volar pads continue to induce physical stress in the cell layers constituting the dermis; this stress shapes a two-dimensional structure of ridges on the palms, soles, fingertips, and toes. From the 16th to the 24th week of EGA, the dermis matures: secondary dermal ridges develop between the primary dermal ridges, and bridges, named dermal papillae, appear between the apices of the primary and secondary ridges. After 24 weeks of EGA, the development of the dermis is finalized, and the epidermis, with its papillary ridges, is gradually formed by cell development from the dermis. In its final stage, the papillary ridges grow as a three-dimensional structure based on the two-dimensional pattern. The anchorage of this epidermal structure in the dermis ensures the stability and permanence of the dermatoglyphics. Therefore, a permanent modification or destruction of the dermatoglyphics can only occur if the dermis itself is destroyed.
Variability of the Fingerprint
The fingerprint is expressed through the interaction of genotype, development, and environment; this biometric modality is therefore qualified as epigenetic, like the iris of the eye, and in contrast to a DNA sequence, from which, for instance, a DNA profile is extracted, which is genetically determined. The information content of the fingerprint ridges is structured in three levels: the general pattern, the minutiae, and the third level details.
The minutiae contribute the most to the selectivity of the fingerprint, due to the combination of their spatial arrangement along the ridges and their intrinsic characteristics: type, location, and orientation. The selectivity offered by a minutiae configuration present on a fingerprint or on a fingermark is a function of their number, type, and topology (relative position and orientation on the ridges).
The process underlying the development of the minutiae is not yet known, but models offered by mathematical biology and empirical studies suggest that it is epigenetic. For ridge endings, bifurcations, and dots, more correlations are observed in the fingerprints of monozygotic twins than in those of dizygotic twins. Correlations are also observed between the number of minutiae and the finger number, which can be explained by the fact that the fingertip surface of the thumb is larger than that of the little finger. The relative frequencies of the minutiae types are correlated with gender, but no difference has been observed between the fingerprint characteristics of the left and right hands.
Third Level Details
The study of friction ridge details may be further subdivided into the description of ridge contours or edges, and the position and shape of the pores. However, the degree of agreement between dactyloscopists on the value of these characteristics is so far limited, and no systematic study supports the different opinions.
The first statistical investigations were conducted at the end of the nineteenth and the beginning of the twentieth century, but the initial models were developed on the basis of unrealistic premises: it was presumed that each minutia type appeared with the same probability, and independently of the others, anywhere on the ridge skin surface. More sophisticated models were developed later during the twentieth century, first accounting for the imbalance between the minutiae types (e.g., bifurcations are rarer than ridge endings) and then for the uneven density of the minutiae (e.g., the density of minutiae increases in the core and delta zones).
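The contrast between the early premises and the later refinements can be sketched numerically. The reduced set of minutiae types and the frequency values below are hypothetical, chosen purely for illustration:

```python
# Sketch of early fingerprint-statistics premises (illustrative only).
# Early models: every minutia type equally probable and independent.

MINUTIA_TYPES = ["ridge ending", "bifurcation", "dot"]  # simplified set

def naive_config_probability(n_minutiae):
    """Probability of one given configuration under the unrealistic
    equiprobable-and-independent premise."""
    return (1.0 / len(MINUTIA_TYPES)) ** n_minutiae

def refined_config_probability(observed, freqs):
    """Later refinement: weight each minutia by an empirical type
    frequency (bifurcations rarer than ridge endings), still assuming
    independence between minutiae."""
    prob = 1.0
    for minutia in observed:
        prob *= freqs[minutia]
    return prob

# Hypothetical relative frequencies (not measured values):
freqs = {"ridge ending": 0.6, "bifurcation": 0.3, "dot": 0.1}

naive = naive_config_probability(3)                  # (1/3)**3
refined = refined_config_probability(
    ["ridge ending", "bifurcation", "dot"], freqs)   # 0.6 * 0.3 * 0.1
```

Both sketches still ignore the uneven spatial density of the minutiae noted above; the later twentieth-century models addressed that by making the frequencies location-dependent.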
Statistical studies mainly focus on the second level features, especially the spatial arrangement of the minutiae, while studies of other fingerprint features remain scarce. These studies on minutiae provide extremely valuable fundamental knowledge about the degree of randomness of minutiae configurations, but they cannot yet be used for the deployment of large-scale, case-specific statistical evaluation of the fingermark evidence. Current statistical models simplify reality, emphasizing the statistical behavior of the minutiae and adopting a restricted view of overall factors such as the general pattern, the main ridge flows, the ridge edges, or the pores. Nevertheless, this new approach aims to offer a uniform framework and a transparent methodology to the dactyloscopists. Coupled with a logical inference model originating in the Bayes theorem, these models aim to assist them in producing a logical, testable, and quantitative evaluation of the fingerprint evidence based on statistical probabilities.
Classification of Fingerprints and Fingermarks
For about a century, the classification of fingerprints based on general patterns allowed dactyloscopists to limit the search for the source of an unidentified fingermark to a specific section of their fingerprint reference files. Francis Galton proposed the first system of fingerprint classification in 1891, and the development and practical application of dactyloscopy for forensic use materialized in 1892 with the publication of his manual of dactyloscopy. This led to the acceptance of fingerprints in Great Britain and the British Empire. In 1900, Henry modified Galton's classification system, which remained the most widely used system in the world under the name Galton-Henry. In 1891, Vucetich began to collect the first ten-print card databases based on the ideas of Francis Galton and developed another classification system, which was adopted by some South American countries. The size of the ten-print card databases increased progressively during the twentieth century, and their workability was maintained by making the indexing system more sophisticated, but at the cost of a trade-off between selectivity and reliability. The coexistence of several classification systems around the world limited the interoperability of manual classification between different systems. In the second half of the twentieth century, manual classification was slowly abandoned and replaced by computerized classification systems named Automatic Fingerprint Identification Systems (AFIS).
From the mid-1960s, research on the automation of fingerprint identification started. The USA and Japan concentrated on automating the high-volume ten-print workload, while France and the UK focused more on automating fingermark identification. After a decade of effort, the digitization of ten-print cards and the automatic designation of minutiae were effective enough for the USA and the UK to produce automatic fingerprint reader systems. This advance opened the possibility to digitize the ten-print card records and to store the standard impressions and the demographic data of individuals (e.g., name, citizenship, and date of birth) in a computerized database.
Forensic Uses of AFIS Technology
AFIS technology was initially developed to assist the dactyloscopists with computers in the identity verification process of individuals through their fingerprints. This process consists in searching the ten fingerprints of an individual in the database of standard impressions to verify if he or she is already present in the database and, if present, to check his or her demographic data. The AFIS technology has achieved enough maturity to ensure an identity verification process that is virtually error-free from the technological point of view, even if clerical mistakes in the database or in the running of the process can never be excluded.
In the 1990s the improvement of both AFIS and computer technologies allowed for the processing of fingermarks, exploited in two forensic processes. Fingermarks can be used for forensic investigation, in order to establish a link between a crime scene or an object and an individual. They can also be used for forensic intelligence to establish links between several crimes, even if the potential for links using marks depends on their limited quality.
In the 2000s, improvements in computer mass storage, in terms of size and affordability, favored the constitution of large-scale palmprint databases. This development allowed for an extension of forensic investigation and forensic intelligence based on palm marks. In most countries, the constitution of large-scale palmprint databases is an ongoing process.
The challenge of standardization has only recently been solved through the use of a common format, developed by the American National Institute of Standards and Technology (NIST), facilitating the computerized exchange of fingerprint and fingermark data between countries and agencies.
Individualization of Fingerprints and Fingermarks
The first rule for the individualization of fingermarks, the tripartite rule formulated by Edmond Locard in 1914, states:

- 1. If more than 12 minutiae ("concurring points") are present and the fingermark is sharp, then the identity is certain. The imperative requirement for the absence of significant differences is implicit.
- 2. With 8–12 concurring points, the case is borderline, and the certainty of the identity depends on:
  - The sharpness of the fingermark.
  - The rarity of the type.
  - The presence of the core of the general pattern and the delta in the usable part of the mark.
  - The presence of pores.
  - The perfect and obvious similarity of the print and the mark regarding the width of the papillary ridges and valleys, the direction of the lines, and the angular value of the bifurcations.
  In these instances, the certainty of the identification can only be established following a discussion of the case by at least two competent and experienced specialists.
- 3. With fewer than 8 minutiae, the fingermark cannot provide certainty for the identification, but only a presumption proportional to the number of minutiae available and their clarity.
The first two parts of this rule were largely adopted by the community of dactyloscopists but, unfortunately, the third part of the rule remained largely ignored.
The current dactyloscopic practice has evolved from the body of knowledge developed about fingerprint individuality and the forensic use of fingermarks. It is formalized in a four-step procedure named ACE-V (Analysis-Comparison-Evaluation-Verification). This procedure consists in the analysis of the fingermark, followed by the analysis of the fingerprint; the comparison of the fingermark and the fingerprint; the evaluation and decision based on the observed similarities and discrepancies between the fingermark and the fingerprint; and the verification of the findings by a second dactyloscopist.
Despite the formalization of the identification procedure, the practice varies between continents and countries, and even within some countries. The evaluation step, in particular, is based either on a quantitative threshold or on a qualitative threshold.
Quantitative Threshold: Presence of a Numerical Standard
A majority of European and South American countries favor a purely quantitative approach to forensic individualization, fixing a numerical standard and treating qualitative aspects such as the third level details as secondary. A formal identification is established only if a minimal number of corresponding minutiae between the observed mark and the fingerprint, together with an absence of significant differences, is put in evidence.
The numerical standard differs between countries and sometimes between agencies within the same country: Italy, 16–17; UK (before 2000), 16; Belgium, France, Israel, Greece, Poland, Portugal, Romania, Slovenia, Spain, Turkey, and the South American countries, 12; Netherlands, 10–12; Germany, 8–12; Switzerland (before 2008), 8–12; Russia, 7.
Qualitative Threshold: Absence of Numerical Standard
Until 1970, the fingerprint identification procedure in the USA was also based on a numerical standard of 12 points; below this threshold, qualitative factors in the comparison were taken into consideration. In 1970, a commission of experts of the International Association for Identification (IAI) was established to study the relevance of a fixed numerical standard for dactyloscopy. The following resolution was adopted by the IAI in 1973: "The International Association for Identification, based upon a 3-year study by its Standardization Committee, hereby states that no valid basis exists for requiring a predetermined minimum of friction ridge characteristics that must be present in two impressions in order to establish positive identification."
It was accepted that the concept of identification could not be reduced to counting fingerprint minutiae, because each identification process represents a unique set of features available for comparison purposes; the identification value of concurring points between a fingerprint and a fingermark depends on a variety of conditions that automatically excludes any minimum standard.
In 1995, during a conference meeting on fingermark detection techniques and identification hosted in Ne’urim, Israel, 28 scientists active in the field of dactyloscopy, representing 11 countries, unanimously approved a resolution that is a slight variation of the IAI 1973 resolution. The Ne’urim declaration states that “no scientific basis exists for requiring that a predetermined minimum number of friction ridge features must be present in two impressions in order to establish a positive identification.”
A formal identification is established when the dactyloscopists reach a decision threshold. They evaluate the contributions to individuality at a quantitative level (numerical standard) or at a qualitative level (absence of a numerical standard), and the size of the relevant population of potential sources of the fingermark is set to its maximum, independently of the circumstances of the case.
On the basis of their evaluation, most dactyloscopists report three types of qualitative opinion: identification, exclusion, and inconclusive. As their evaluation is deterministic, they also make implicit use of their own subjective probabilities of the rarity of the characteristics used to substantiate their opinion. They refine these subjective probabilities through training and experience, but they rarely consider results from research, particularly in the fields of embryology and statistics.
Admissibility of the Fingerprint in the USA
As with other forensic disciplines, the scientific status of fingerprint identification has been questioned since 1993, when the Supreme Court of the USA handed down its ruling in Daubert v. Merrell Dow Pharmaceuticals, Inc. (509 U.S. 579 (1993)). Previously, the main criterion for the admissibility of expert testimony in the federal courts of the USA was the Frye standard, which requires general acceptance of the methods by the relevant scientific community. Daubert gave federal judges much greater discretion in deciding admissibility. It suggested that they consider (1) whether a theory or technique can be tested, (2) whether it has been subject to peer review, (3) whether standards exist for applying the technique, and (4) the technique's error rate. Although it is possible to test and validate methods for the forensic individualization of fingermarks, research on this topic is still very limited.
The admissibility of fingerprint evidence as scientific in nature was first subject to a Daubert hearing in U.S. v. Mitchell (1999, U.S. District Court for the Eastern District of Pennsylvania), followed by Daubert hearings in more than 20 other fingermark cases. In the same case, U.S. v. Mitchell, the FBI provided calculations based on experiments carried out on an AFIS system: random match probabilities of 10⁻⁹⁷ and 10⁻²⁷ were claimed for complete fingerprints and partial fingermarks, respectively. These extraordinary numbers were obtained by an extreme extrapolation of the probability density of the score using a postulated model, and they are so far from reality that it is surprising that they were admitted as evidence. Until January 2002, all Daubert hearings on fingermark cases led to the full admissibility of fingermark evidence in the courtroom. Judicial notice was given to the fact that fingerprints are permanent and unique.
January 2002 saw the first decision proposing to limit expert testimony on fingerprint identification. In U.S. v. Llera Plaza (188 F. Supp. 2d 549, 572–73 (E.D. Pa. 2002)), the defense "Motion to Preclude the United States from Introducing Latent Fingerprint Identification Evidence" was partly successful. Judge Pollak held that a dactyloscopist could not give an opinion of identification, and required that the expert limit his testimony to outlining the correspondences observed between the mark and the print, leaving to the court the assessment of the significance of these findings. This led the Government experts to ask for reconsideration, bringing to the debate background documents relating to the UK's move toward abandoning the 16-point standard. Judge Pollak later reversed his opinion and admitted the evidence.
Two cases of wrongful fingermark identification, following the case of the Scottish police officer Shirley McKie, perpetuated this controversy. In the first case, the American Stephan Cowans was convicted on the basis of a fingerprint identification, but later exonerated by DNA analysis. In the second case, the American Brandon Mayfield was wrongly associated with the 11 March 2004 Madrid bombings by means of a latent mark revealed by the Spanish National Police on a plastic bag containing detonators, recovered from a stolen van associated with these bombings. Three FBI experts and an independent court-appointed expert all identified Mayfield as the donor of the mark. Mayfield, a lawyer based in the US state of Oregon, came to the FBI's attention when one of the latent marks sent by the Spanish authorities through Interpol gave a hit against his name on the FBI Integrated AFIS (IAFIS), containing about 440 million fingerprints from 44 million persons. Brandon Mayfield was arrested and remained in custody for a few weeks until the Spanish dactyloscopists, who had immediately raised issues with this identification, finally identified the mark as coming from the finger of an Algerian suspect.
The FBI offered an apology and, at the beginning of 2004, published a research report in which the existing FBI procedures were investigated extensively. This report concluded that the mistake in this case was not due to the methods the FBI used, but was the consequence of "human error", which can never be excluded. The problem with this frequently used explanation is that the method and the human cannot be separated in an activity in which the human acts as the measuring instrument, as is the case in traditional dactyloscopy.
An extensive review by the Inspector General of the US Department of Justice appeared in January 2006, giving a clear analysis of the facts and circumstances that caused the incorrect identification. According to this report, an important factor in the Mayfield case was that, when a search is performed in a very large database, there will almost always be a reference print that strongly resembles the unknown mark. A positive consequence of these cases is that they initiated a move toward a much more open discussion of misidentifications in the forensic fingerprint field.
Analysis of the Current Practice
Research in embryology and statistics clearly does not legitimize the reduction of fingerprint individuality to the counting of minutiae. Indeed, the scope of features is much broader than the minutiae alone, and the nature of papillary individuality prevents the adoption of any predefined number of ridge characteristics necessary for identification without significant differences. It is axiomatic that no two fingerprints are identical, as no two entities of any kind can be identical to each other. A common misconception lies in the fact that the individuality of the fingerprint is often attributed to the fingermark. As already described by Locard, in criminalistics the transfer of material is logically never perfect. In dactyloscopy, the transfer of the pattern from the fingerprint ridges to the fingermark is accompanied by two types of loss of information: quantitative, due to the limited size of the trace, and qualitative, due to distortion, blurring, poor resolution, and loss of pore and edge details.
The challenge for dactyloscopy is the ability to quantify the information available for the individualization process in a partial, distorted fingermark, not to prove the individuality of the friction ridge skin. The first step in the quantification of the evidential value of fingermark evidence consists in estimating the similarity between the features of this fingermark and those of the fingerprint considered as the potential source of the mark. The second step consists in estimating the typicality, or rarity, of these features, and the third step in reporting the similarity–typicality ratio as the evidential value. This concept encapsulates a continuum of values for the individualization of fingermarks, ranging from very high to very low depending on the features analyzed. Therefore, the forensic individualization of fingermarks cannot be considered a binary decision process, but has to be envisaged as a purely probabilistic assessment of the value of the evidence, as it is for any type of evidence.
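The three steps above amount to forming a likelihood ratio. A minimal numerical sketch follows, in which all probability values are hypothetical and chosen purely for illustration:

```python
# Likelihood-ratio sketch of the similarity-typicality evaluation
# (all probability values are hypothetical, for illustration only).

def likelihood_ratio(p_similarity, p_typicality):
    """Evidential value: probability of the observed correspondence if the
    print donor left the mark, divided by the probability of finding such
    features in the relevant population."""
    return p_similarity / p_typicality

def posterior_odds(prior_odds, lr):
    """Bayes theorem in odds form: posterior odds = prior odds x LR."""
    return prior_odds * lr

# Hypothetical mark: the correspondence is highly probable if the suspect
# is the source (0.9), while such a minutiae configuration would be
# expected in about 1 in 10,000 members of the relevant population.
lr = likelihood_ratio(0.9, 1.0 / 10_000)   # roughly 9000
# With prior odds of 1:1000 against the suspect, the posterior odds
# become roughly 9:1 in favor of the same-source hypothesis.
odds = posterior_odds(1.0 / 1000, lr)
```

The continuum of evidential values described in the text corresponds to the range of possible likelihood ratios, from values far above 1 (supporting the same-source hypothesis) to values far below 1 (supporting exclusion); no fixed decision threshold plays a role in this framework.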
Probabilistic models applicable to fingermark individualization have been proposed and accepted by forensic scientists in other forensic areas, i.e., DNA, microtraces, and speaker recognition. The absence of extensive statistical analysis of fingerprint variability can be viewed as the main obstacle preventing the giving of qualified opinions. Statistical data only support and reinforce the identification statements used by dactyloscopists but, according to Stoney, "we must realize that to reach absolute identification, or its probabilistic equivalent, through an objective process is not possible. Probabilities are objective when they can be tested and reproduced".
The statistical studies applied to fingerprint and fingermark individualization provide valuable knowledge about the statistical behavior of various types of features, mainly the minutiae and, to a more limited extent, the pores. They do not, however, provide a robust tool to assess the probability associated with a given configuration of features, for two reasons: none of the proposed models has been subjected to extended empirical validation, and the assumptions about the features used in these models have not been fully explored.
The research possibilities are vast, mainly in three directions. The first is the refinement and empirical validation of the model-based approaches developed in earlier studies. The second is the development of data-driven approaches taking advantage of the capabilities of current AFIS systems, which embed large fingerprint and fingermark databases, high computation capabilities, and sophisticated pattern recognition techniques. The third is to explore the morphogenesis process from the point of view of mathematical biology, with the aim of determining the contributions of the genetic, environmental, and other factors that influence the features defined in the three levels of information present in the fingerprint. These studies require the availability of large samples of fingermarks and fingerprints and a clear definition of the features used by examiners to compare fingermarks with fingerprints.
References

- 3. Wertheim, K., Maceo, A.: The critical stage of friction ridge and pattern formation. J. Forensic Ident. 52(1), 35–85 (2002)
- 5. Champod, C., et al.: Fingerprints and Other Ridge Skin Impressions. CRC Press, London (2004)
- 6. Ashbaugh, D.R.: Quantitative-Qualitative Friction Ridge Analysis: An Introduction to Basic and Advanced Ridgeology. In: Geberth, V.J. (ed.) Practical Aspects of Criminal and Forensic Investigations. CRC Press, Boca Raton, FL (1999)
- 7. Stoney, D.A.: Measurement of fingerprint individuality. In: Lee, H.C., Gaensslen, R.E. (eds.) Advances in Fingerprint Technology, pp. 327–388. CRC Press, Boca Raton, FL (2001)
- 9. Berry, J., Stoney, D.A.: History and development of fingerprinting. In: Lee, H.C., Gaensslen, R.E. (eds.) Advances in Fingerprint Technology, pp. 1–40. CRC Press, Boca Raton, FL (2001)
- 10. McCabe, R.M. (ed.): Data Format for the Interchange of Fingerprint, Facial, Scar Mark & Tattoo (SMT) Information. American National Standard ANSI/NIST-ITL 1-2000, July (2000)
- 11. Fine, G.E.: A Review of the FBI's Handling of the Brandon Mayfield Case. Office of the Inspector General, U.S. Department of Justice (2006)
- 12. Office of the Inspector General, United States Department of Justice: A Review of the FBI's Handling of the Brandon Mayfield Case: Unclassified Executive Summary. Washington, DC (2006)
- 13. Champod, C.: Dactyloscopy: Standards of proof. In: Siegel, J. (ed.) Encyclopedia of Forensic Science. Academic Press, London (2000)
- 14. Taroni, F., Champod, C., Margot, P.: Forerunners of Bayesianism in early forensic science. Jurimetrics 38, 183–200 (1998)
- 15. Good, I.J.: Weight of evidence and the Bayesian likelihood ratio. In: Aitken, C.G.G. (ed.) Statistics and the Evaluation of Evidence for Forensic Scientists. Wiley, Chichester, UK (1995)