I am honored to have been invited to participate with the editors of JQC to join in the celebration of the 25th anniversary of the Journal. I think I got called on because I probably have had the longest opportunity to observe the dramatic growth in bringing quantitative analysis to criminology.

My involvement started in 1966 when I was invited to lead the Task Force on Science and Technology for the President's Crime Commission. [Footnote 1] That was quite a challenge for someone with an undergraduate degree in engineering physics and a PhD in operations research, who protested that he knew nothing about crime or criminal justice but accepted that risky invitation with the assurance that the Commission had plenty of people around with that expertise who could answer any questions. That was a golden opportunity to play "emperor's clothes" and ask any probing question, however naïve, and—little did I realize—launch a new career.

President’s Crime Commission and Its Task Force on Science and Technology

I interpreted the invitation by the Commission to have been stimulated by the fact that science and technology were then getting a man to the moon, and so must be ideally suited to solving the nation's urban crime problem. Less facetiously, I realized that computers, and electronics more generally, were then making major inroads into business, government, and all facets of society, and so establishing a task force on science and technology would be a useful link for bringing those innovations into the field of criminal justice for whatever contributions they might make. The obvious opportunities included maintaining information about individual criminal records and about crime patterns, radio communication linking crime victims to police officers through dispatchers and police to each other, and maintaining and transmitting electronic files on wanted individuals and property. The technologies involved were quite remote from most people involved in legal matters or in operating the criminal justice system, but they were part of the daily business of the Institute for Defense Analyses, a think-tank for the Office of the Secretary of Defense where I was a staff member at the time, and familiar to anyone with a technical education.

I pulled together a team with diverse technical backgrounds and we started to look for opportunities to contribute to the Commission's mission. It became clear that there were many opportunities for bringing aspects of science and technology to the operation of the criminal justice system. It was also clear that there were many analytic opportunities for gaining a better understanding of crime and the nature of offenders, and for bringing those insights to the then very limited field of criminology.

An important principle for much of our analytical work was the one stated by Hamming in the preface to his classic book on numerical analysis (Hamming 1962): "The purpose of computing is insight, not numbers." One runs quantitative models to gain insight into how a system performs, not necessarily to get a precise quantitative measure of its performance: to learn about the relative influence of the various factors impinging on it, and to get a measure of its relative sensitivity to change in different environments.

Perhaps our best opportunity was bringing the perspective of a “systems approach” to the operation of the criminal justice system. Most of the other participants on the Commission staff were associated with one or another part of the system, but we, having no connection to any particular part, found it very easy to look at the system as a whole.

We were intrigued, for example, by the conflict between police, who argued that recidivism rates were about two-thirds, and the corrections folks, who argued as vigorously that recidivism rates were lower, about one-half. Of course, the disagreement derived from where each did the measurement: police at rearrest and corrections at re-incarceration. This gave rise to an early paper (Blumstein and Larson 1972) I did with Dick Larson, then a recent graduate in electrical engineering and now a distinguished operations researcher, that highlighted the equivalence of those two measurements.
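In symbols, the two figures are linked through a chain of conditional probabilities; the conditional probability of re-incarceration given rearrest used below is purely illustrative, not a figure from that paper:

$$P(\text{re-incarcerated}) \;=\; P(\text{rearrested}) \times P(\text{re-incarcerated} \mid \text{rearrested}) \;\approx\; \tfrac{2}{3} \times 0.75 \;\approx\; \tfrac{1}{2},$$

so a two-thirds rearrest rate and a one-half re-incarceration rate can describe exactly the same cohort.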

That issue and other related systems considerations led us to draw an early version of what became the widely used systems flow diagram in the original Commission report and in our task force report. For most people at the time, that diagram served primarily a pedagogic purpose, highlighting the interactions across the parts of the system; for us, it served more as a framework for examining the flows across the different parts of the system. That diagram was later elaborated by DOJ's Bureau of Justice Statistics. The systems diagram also gave rise to the formulation of the JUSSIM flow model (Belkin et al. 1972), which we used initially in a CMU project course for the criminal justice system in Allegheny County, PA (Cohen et al. 1973).
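To convey the flavor of that kind of flow model, here is a minimal sketch in Python; the stage names and branching fractions are purely illustrative assumptions, not parameters from JUSSIM or from the Allegheny County application.

```python
# A minimal sketch of a JUSSIM-style linear flow model.  The stages and
# branching fractions below are illustrative assumptions only.

def flow_model(arrests, branching):
    """Propagate an annual flow of arrests through sequential CJS stages.

    branching: list of (stage_name, fraction_continuing_from_previous_stage)
    Returns a dict giving the flow entering each stage.
    """
    flows = {"arrest": float(arrests)}
    current = float(arrests)
    for stage, fraction in branching:
        current *= fraction
        flows[stage] = current
    return flows

branching = [
    ("prosecution", 0.80),    # fraction of arrests carried forward to prosecution
    ("conviction", 0.60),     # fraction of prosecutions ending in conviction
    ("incarceration", 0.40),  # fraction of convictions resulting in confinement
]

print(flow_model(10_000, branching))
# e.g. {'arrest': 10000.0, 'prosecution': 8000.0, 'conviction': 4800.0, 'incarceration': 1920.0}
```

The real model attached workloads and costs to each stage so that planners could see how a change in one part of the system propagated to the others.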

At least in part as a result of our task force's work, one of the strongest recommendations from the Commission was to push for planning for the entire criminal justice system. This was implemented by the Congress in the Omnibus Crime Control and Safe Streets Act of 1968, which created the Law Enforcement Assistance Administration (LEAA) and mandated the creation in each state of a state-level criminal justice planning agency (then known as an SPA and currently designated as an SAA, or state assistance agency) covering all components of the CJS. The Act provided funding for those agencies and offered them federal funding to implement their plans. Many of those initial plans involved expenditures for technology, especially information systems, but also for a wide variety of other operational innovations.

Perhaps one of the most striking empirical results from our work was an analysis by Ron Christensen (Christensen 1967) of the probability that a male in the United States would be arrested for a non-traffic offense some time in his life. That analysis was quite sophisticated in many ways and represents a good model of how a quantitative person [Footnote 2] would approach this problem, invoking calculus to integrate across ages the probability of a first arrest at each particular age in order to estimate the cumulative lifetime probability of arrest. The estimate of 50% astonished everyone (after all, how many arrestees did any of us know?), and we were sure that he had misplaced a decimal point, but the strength of his analysis dispelled our doubts. The results were sufficiently compelling and the methodology sufficiently strong that it became a striking finding noted in the Commission's report. [Footnote 3] Even recently, the Wall Street Journal (Bialik 2009) was astonished by the numbers, which are undoubtedly an underestimate today in light of the major growth in arrests for drug offending and for domestic violence.
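In modern survival-analysis notation (the notation is mine, not Christensen's), if $h(a)$ denotes the age-specific hazard of a first arrest at age $a$ for someone not yet arrested, the cumulative lifetime probability is

$$P(\text{ever arrested}) \;=\; 1 \;-\; \exp\!\left(-\int_{0}^{A_{\max}} h(a)\,da\right),$$

which makes clear how modest age-specific first-arrest probabilities, accumulated over decades of exposure, can produce a lifetime figure near 50%.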

Some of the Essential Qualities of Quantitative Criminology

Quantitative criminology must be more than the use of sophisticated statistical programs. It requires analytic sophistication: the ability to interpret a model, to understand its implicit and explicit assumptions and to know when those assumptions are violated, to recognize the nature of the data entering the model and their reporting and recording flaws, and to appreciate the specificity of the time and place from which the data are drawn and the degree to which other times and places are likely to yield similar results. Given all of these assumptions and specifics, it is often important to test the robustness of the model and the sensitivity of the results to alternative formulations and data collections.

That is certainly a challenging task because access to data is so often limited and an investigator is always limited in funding, time, and access. As a result, we rarely see much of this sensitivity testing, and so many fragments of criminological research are singular in model or site or time, and the reader or user, who only rarely shares the same context, is often left with considerable uncertainty about whether the results reported apply to his or her context. A grounding in science and in mathematics contributes to that sensitivity, [Footnote 4] and a challenging intellectual environment, especially one that includes people of different disciplines, can contribute as well.

Some aspect of this concern is captured in the rote application of a p-value of .05 for testing the "significance" of any particular finding. This same p-value is used regardless of whether the sample size is 20 or 20,000. Of course, because statistical power depends so strongly on the size of the sample, a sample of 20 can detect only a fairly large difference (whether that difference is between two populations' responses to a particular treatment or between a regression coefficient and zero), whereas a sample of 20,000 will shoot off triple stars for even the slightest difference. In this context, the statisticians have done us ill by capturing the word "significant"; they should have used the word "discernible" to reflect the fact that, in the statistical test applied, a particular difference was discernible. To most people, the term "significant" conveys a sense of importance, and so any choice of a p-value should be made in the context of detecting an important difference rather than a merely discernible one. As a result, if one can specify the difference one wants to be sure to detect, then the p-value could be adjusted appropriately, taking account of the sample size available. One rarely sees any such consideration in applications of statistical tests in criminology.
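A small simulation makes the point concrete; the effect size, sample sizes, and number of trials below are arbitrary choices for illustration.

```python
# A minimal sketch of why a fixed p = .05 threshold means very different things
# at n = 20 and n = 20,000: the same small true difference is "significant"
# almost always in the large sample and almost never in the small one.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
true_difference = 0.05   # a small treatment effect, in standard-deviation units
trials = 500

for n in (20, 20_000):
    rejections = 0
    for _ in range(trials):
        control = rng.normal(0.0, 1.0, n)
        treated = rng.normal(true_difference, 1.0, n)
        _, p = ttest_ind(control, treated)
        rejections += p < 0.05
    print(f"n = {n:6d}: rejected the null in {rejections / trials:.0%} of trials")
```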

Some Early Interactions with Criminology and ASC

My first interaction with the American Society of Criminology was in 1968 when I attended its annual meeting, then held in a small anteroom of New York University with an attendance of only about 30. I attended because I was invited to present a report of the work of our task force.

The research activity that most impressed me at the time was the work by Marvin Wolfgang and colleagues and students (Wolfgang et al. 1987) tracing the arrest histories of boys born in Philadelphia in 1945 and still resident in Philadelphia at age 18 in 1963. This was an impressive undertaking that was bound to yield rich insights into criminal careers.

As a result, much of my early attention was focused on the findings from that study. Perhaps its most striking and still often-quoted finding was that 6.3% of the boys (those who accumulated five or more arrests, and who were thereby labeled "chronic offenders") accounted for 52% of the crimes (actually, 52% of the arrests), an impressive observation pointing to a high level of criminality in a tiny subset of the population and suggesting that identifying that small group could lead to a major reduction in crime. That finding, or others like it, is still being cited today. But it turned out that only one-third of the boys had ever been arrested, so it was 18% of the arrestees who accounted for 52% of the crime, a number that appears far less provocative. And, of course, retrospectively selecting those with five or more arrests was alone bound to account for a disproportionate fraction of the total.
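The arithmetic behind that reinterpretation is straightforward when carried out with the rounded figures quoted above:

$$\frac{\text{chronic offenders}}{\text{ever-arrested boys}} \;\approx\; \frac{0.063}{1/3} \;\approx\; 0.19,$$

or roughly the 18% obtained from the exact cohort counts.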

If such a select group of offenders could be identified retrospectively, then it would be most important to learn whether their prospective contributions would be disproportionate. There never appears to have been an attempt to provide that prospective information on that 6.3% of the boys; indeed, no such identification was ever provided. Furthermore, it turned out that the probability of an (n+1)th arrest conditional on already having n arrests saturated fairly quickly, after n = 4, and was fairly constant after that (Blumstein and Moitra 1980), and no indication was ever provided of which subset of the "chronic offenders" would be of particular interest as high-risk offenders.

I raise these issues to highlight the fact that quantification in criminology at that early stage was less an issue of needing advanced methodology and more an issue of analytic sophistication in being able to ask the questions that would challenge an observation, a problem that would be particularly important in the face of a striking quantitative observation. It was too easy to get away with a captivating number without probing it for the useful insight that might be embedded behind it.

One of my early experiences at an ASC meeting in the early 1970s was on a panel with a then-distinguished criminologist who was railing against the new methodologies being introduced into criminology and declaiming that this was not the criminology he was familiar with. My rejoinder was that this was characteristic of all fields of research and that new methods—almost always quantitative—would always appear. He could try to keep up or not, but regardless of his choice, it was most important that he not hinder his students from keeping up with the latest developments, both within criminology and within related fields like statistics, economics, and sociology that might be applicable to criminology.

Growth of Quantification in Criminology

The level of quantification in criminology at that time was rather basic—mostly bivariate correlations. Of course, there was a long but spotty history of quantification that included the Belgian astronomer Quetelet, who wrote The Propensity to Crime in 1831 (Beirne 1987), but those were singular ventures rather than a cumulative growth. Perhaps the best indication of the changes I have seen over the 43 years since the President's Crime Commission is reflected in the recently published Handbook of Quantitative Criminology (Piquero and Weisburd 2010), with its 35 chapters and 787 pages of excellent articles written by the kind of authors for whom quantitative analysis is a powerful skill.

One interesting vignette that characterized the limited state of quantitative criminology at the time occurred surrounding the publication in the mid-1970s of econometric analyses of the deterrent effect of incarceration, and of the death penalty in particular, by Isaac Ehrlich, then at the University of Chicago's Department of Economics. That was a time when the Supreme Court was considering abolition of, or a moratorium on, the death penalty. Ehrlich's (1975) manuscript on the death penalty, claiming that each execution averted eight homicides, was submitted as part of an amicus brief arguing in favor of the death penalty. In a conversation with Gerry Kaplan, a lawyer who was then the director of the National Institute of Justice, we discussed the need for a careful assessment of the validity of those estimates. Kaplan and I knew each other fairly well, since we had served together on the President's Crime Commission. I suggested that the National Academy of Sciences, which had recently established a Committee on Research on Law Enforcement and the Administration of Justice and which had already established its first panel to study the impact of legislation on the courts, would be a useful vehicle for making such an assessment. He liked that approach and asked if I would chair such a panel; I agreed, and that gave rise to the first technical panel from that NRC committee, the Panel on Deterrent and Incapacitative Effects (Blumstein et al. 1978). Fortunately, I had been working with two excellent graduate students at the time, Daniel Nagin and Jacqueline Cohen, and they carried out much of the staff and analytic work for the panel, Nagin on deterrence and Cohen on incapacitation, which gave rise to their excellent dissertations. The panel membership included individuals who brought both technical expertise and policy relevance to its work. This history conveys the thinness at the time of the field of people who could address fairly complex technical issues from the perspective of criminology.

That was less than 10 years before the founding of JQC, and it has been impressive how strong a force JQC has been for bringing new methodologies into criminology. To a very large degree, these new methodologies have derived from more traditional sources in statistics or econometrics. At some point, one might anticipate that the distinctive problems of criminology will generate methodologies focused specifically on those kinds of problems.

The trajectory methodology developed by Daniel Nagin (2005) is a useful example. Criminologists have long been dealing with individuals' trajectories of offending and other aspects of their development and their criminal careers. The contribution of the trajectory models is that they aggregate the many individual trajectories available in the data into a limited number of groups, somewhat as cluster analysis does with individual points in a multidimensional space. In most cases, where the trajectory represents an individual's offending rate as a function of age, the majority of the trajectories are uninteresting, reflecting a low rate of activity, and lie close to the horizontal axis. The interesting ones, however, are those that rise rapidly and stay high, or rise rapidly and then decline, or rise slowly to a peak and then decline, perhaps slowly or rapidly. The members of each of these trajectory groups represent distinctive offending patterns, and each group can be explored to identify what factors in its members' environment contributed to that particular pattern. It is also possible to use the different trajectory groups as controls for other analyses, because the determinants within each group are presumably more similar.
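The sketch below is only meant to convey the grouping idea; it clusters simulated age-offending curves with k-means, whereas Nagin's method fits a finite mixture of trajectory models by maximum likelihood, with the number of groups chosen on statistical criteria. All shapes and parameter values here are invented for illustration.

```python
# A highly simplified sketch of the grouping idea (not Nagin's actual estimator):
# simulated age-by-offending-rate curves are clustered so that many individual
# trajectories collapse into a few group shapes.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
ages = np.arange(10, 31)                      # ages 10 through 30

def noisy(shape):
    """Add noise to a mean curve and replicate it for 200 simulated individuals."""
    return np.clip(shape + rng.normal(0, 0.3, (200, len(ages))), 0, None)

low        = noisy(np.full(len(ages), 0.2))                  # near-zero offending
adolescent = noisy(np.exp(-((ages - 17) ** 2) / 8) * 3)      # rise in adolescence, then desist
chronic    = noisy(3 / (1 + np.exp(-(ages - 14))))           # rise and stay high

trajectories = np.vstack([low, adolescent, chronic])
groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(trajectories)

for g in range(3):
    mean_curve = trajectories[groups == g].mean(axis=0)
    print(f"group {g}: peak rate {mean_curve.max():.1f} at age {ages[mean_curve.argmax()]}")
```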

Of course, these trajectory patterns could be used for analysis of any other longitudinal phenomena such as crime patterns within individual cities or neighborhoods. As is often the case, finding those places that are similar and analyzing their similarities can often provide some useful insights into distinctive factors contributing to the longitudinal patterns, inevitably with a primary interest in those with the most severe crime problems.

The Emergence of Emphasis on “Evidence-Based Policies”

The catch phrase that has emerged over the past several years has been the demand for "evidence-based policies". Of course, any reader of JQC would be a strong subscriber to such a demand, especially when it is posed in contrast to ideology-based policies, which have long been dominant in the realm of crime control, and especially so for the past 30 years; this is perhaps more true of crime-control policy than of any other policy domain. One certainly wants strong evidence to help understand the effects of criminal justice policies in allocating resources, in deploying those resources, and in treating identified offenders. But the important feature in evaluating the "evidence" is its quality—its errors, flaws in its application, and violations of the assumptions in the model being applied. Too often, the quality is judged simply by the method of the analysis, with the various methods ranked with randomized controlled trials at the top and then in descending order by how close they come to a randomized trial. But, of course, even the best method can have flaws in its execution, and so one needs careful examination of those flaws to assess the quality of the evidence.

There are, of course, many other sources of evidence for gaining insights into the nature of the problems and the consequences of various policies. And a randomized trial is indeed a “gold standard” when the primary conditions of its execution—double-blind randomization between treatment and placebo—are fully satisfied. When dealing with individualized treatment of individual offenders, such randomization is relatively manageable.

Such randomized trials were used in the 1950s and 1960s to test a variety of therapeutic manipulations of prisoners, but to the distress of many, the dominant result of those trials was a null effect, whereby the treatment resulted in no better outcomes in terms of recidivism by the treated compared to the untreated controls. Those negative results, assessed by Lipton et al. (1975), widely publicized by Martinson (1974) as "nothing works", and affirmed by a panel of the National Research Council (Sechrest et al. 1979), were important contributors to the demise of the rehabilitation ideal in corrections in the late 1970s and 1980s. And that led to the corresponding growth in the pressure to "lock 'em up and throw away the key" (e.g., Wilson 1975a, b). That politicization of incarceration policy resulted in growth by a factor of almost five in the national incarceration rate, which had previously been an impressively stable homeostatic process for at least 50 years (see Blumstein and Cohen 1973 and Blumstein et al. 1976) while it was under the control of the criminal justice system.

The distinguished physicist Freeman Dyson, referring to the scientists working on the atomic bomb at Los Alamos, talked about the "sin of the scientists". [Footnote 5] To a much lesser extent, the "sin" of these evaluation researchers lies in their contribution to the view that "nothing works": the treatments they were evaluating were typically narrowly focused on addressing particular needs of the prisoners being treated, whereas the actual needs covered far more territory. Since only a limited subset of the prisoners could benefit from any of the treatments being tested, the aggregate effect was bound to be small, even though the design of the experiments may have met all the strictures of a randomized controlled trial. Those results opened the door to the era of "tough on crime" politicking. Those politics defied all rational policy analysis until confronted by the state budget constraints wrought by the 2007–2009 Great Recession. The earlier 30 years saw many appeals by criminologists challenging the increasingly punitive policies as ineffective and wasteful, but with little effect until the states' budget crises generated a search for solutions, and reversing the dramatic growth in incarceration costs represented an attractive opportunity. By then, also, the rehabilitation ideal was being revived under a new rubric called "reentry".

In contrast to many other disciplines, where the "gold standard" of careful randomized trials can be carried out relatively easily, [Footnote 6] randomization in decisions of interest to criminologists is often seen as unacceptable, especially by judges, and can be extremely difficult and expensive to undertake. Lawrence Sherman has been resolute in carrying out randomized trials of police patrol operations and of police responses to domestic violence (Sherman and Berk 1984); the domestic-violence results were challenged and, with a display of considerable wisdom by NIJ, the trials were replicated in six sites with uneven results, which were found to depend strongly on whether the offending spouse was employed. A further difficulty in carrying out such randomized trials is the necessary deviation from the classic model, in which the treater, the treated, and the observer have no knowledge of whether the treatment or the placebo is being administered. It is obviously difficult to produce an effective social placebo, and extremely difficult to introduce double-blind treatment and measurement.

As a result, criminologists must resort to other methods for testing the effects of the variety of interventions they would like to consider. This has given rise to a wide variety of statistical treatments in the realm of quasi-experiments (e.g., Cook and Campbell 1979) and to the wide variety of statistical models and methods that have become a standard part of all the social sciences.

This has resulted in great difficulty in sorting out the effects of the various treatments imposed on offenders, because the treatment effects are confounded with selection effects. Thus, one of the continuing burning questions in criminology is assessing the degree to which incarceration is rehabilitative (through some combination of specific deterrence of the individual incarcerated and whatever supportive services, such as education, job skills, or addiction treatment, are provided) or criminogenic (through connections to crime networks, learning better offending skills from more seasoned prisoners, or the difficulty of readjusting to family life and reentering the legitimate economy after release).

Most of the primitive research that tries to address this issue inevitably finds that those sent to prison do worse than "similar" others not sent to prison. Of course, judges exercise some judgment about who represents the greater risk. They obviously take account of information like prior record, family connections, education, and job performance, all of which could be incorporated into a regression model intended to "control for" the factors affecting selection. That requires that information on those factors be available for estimating the selection effect; inevitably, only some of it will be recorded. And there will almost never be any recording of the defendant's "attitude," which can have a strong influence on a judge's decision. One can also control for selection by propensity-score matching, finding the subset of people not sent to prison who are statistically most similar to those who were sent to prison; that approach uses the same set of variables, perhaps in a more appropriate way than the traditional simple regression model, since it "matches" prisoners and non-prisoners to find more comparable pairs. This continues to be a central difficulty in understanding the good and the harm we do through routine criminal justice operations.
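A minimal sketch of the matching idea follows; the covariates, the simulated selection rule, and all parameter values are invented for illustration, and a real study would add balance diagnostics and an outcome comparison for the matched pairs.

```python
# A minimal sketch of propensity-score matching on recorded covariates.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
n = 2_000
priors = rng.poisson(2, n)               # prior record (illustrative)
employed = rng.integers(0, 2, n)         # job status (illustrative)

# Simulated selection: judges send higher-risk defendants to prison more often.
p_prison = 1 / (1 + np.exp(-(0.5 * priors - 1.0 * employed - 0.5)))
prison = rng.random(n) < p_prison

X = np.column_stack([priors, employed])
propensity = LogisticRegression().fit(X, prison).predict_proba(X)[:, 1]

# Match each prisoner to the non-prisoner with the nearest propensity score.
treated = propensity[prison].reshape(-1, 1)
controls = propensity[~prison].reshape(-1, 1)
nn = NearestNeighbors(n_neighbors=1).fit(controls)
_, idx = nn.kneighbors(treated)

print("mean propensity, prisoners:         ", treated.mean().round(3))
print("mean propensity, matched controls:  ", controls[idx.ravel()].mean().round(3))
```

The matching balances only the recorded covariates; anything unrecorded, such as the defendant's "attitude," remains a potential source of bias.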

Inevitably, incarceration has different effects on different people, some coming out better and some coming out worse. A major continuing challenge, alongside assessing the net effect of incarceration, is being able to identify those who would benefit and those who would come out worse. That issue has not yet been addressed adequately, and one would hope that future research in quantitative criminology will begin to address it.

Criminal Careers

One area of important concern to criminologists should be the issue of criminal careers. Understanding the concept of a criminal career and its various parameters is clearly fundamental to being able to talk about offending patterns and the way they change with age, to distinguish among offenders in the crimes they commit and the frequency with which they commit them, and to assess the effects of incarceration on offending, especially through incapacitation. The key parameters of interest are the participation rate within any population subgroup, the offending frequency (λ), and the career length or the residual career length remaining after some criminal justice intervention.

There was a great flurry of activity in the measurement of criminal-career parameters in the 1970s and 1980s, largely through the initiative of Richard Linster of the National Institute of Justice under the "crime control theory program" he administered. There were strong estimates of the key parameters, especially from a major research effort by RAND that interviewed prisoners in three states.

One of the interesting outcomes from that effort was an almost classic confrontation of quantitative criminology with value systems. As a result of his involvement in those measurements, Greenwood (with Abrahamse 1982) proposed a policy he called "selective incapacitation". His analysis of the survey results found that those with the highest offending frequency (λ) had some characteristics that distinguished them from the others. This was particularly important because the surveys found a highly skewed distribution of λ, so the crimes averted through such identification could be considerable. That proposal was met with considerable hostility on value grounds: it would introduce unwanted disparity, because two people convicted of the same offense would receive different punishments. There was also great concern about the false positives, i.e., those incorrectly predicted to be high-λ people. As a result, there was no further progress on selective incapacitation. Indeed, at about that same time a number of states and even the federal government set up sentencing commissions with a major motivation of reducing disparity.

Some later research by Canela-Cacho et al. (1997) showed that the high-λ offenders were already disproportionately in prison, presumably because they "rolled the dice more often", resulting in what was called "stochastic selectivity" without any explicit selection having to be made. Interestingly, RAND undertook an experiment that followed, after their release from prison, the individuals identified as high-λ and compared them to those predicted to be low-λ, and found little difference in their subsequent arrest patterns. Of course, what the experiment measured was not λ but λ·q, where q is the probability of arrest following a crime. To the extent that high-λ people had low values of q, that could wash out at least some of the differences in λ.
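A small simulation conveys how stochastic selectivity can arise without any explicit selection, and why a follow-up based on arrests observes λ·q rather than λ itself; the distribution of λ and the value of q below are illustrative assumptions, not estimates from that research.

```python
# A minimal sketch of "stochastic selectivity": with no explicit selection,
# offenders with higher offending frequency lambda are arrested (and hence
# imprisoned) more often, so the prison population over-represents them, and
# what a follow-up on arrests observes is lambda * q rather than lambda.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
lam = rng.exponential(5.0, n)         # highly skewed offending frequencies (crimes/year)
q = 0.05                              # probability of arrest (and prison) per crime

crimes = rng.poisson(lam)             # crimes committed in one year
arrests = rng.binomial(crimes, q)     # arrests that year
in_prison = arrests > 0               # crude stand-in for "imprisoned at least once"

print("mean lambda, whole population  :", lam.mean().round(2))
print("mean lambda, those imprisoned  :", lam[in_prison].mean().round(2))
print("mean observed rate (lambda * q):", (lam * q).mean().round(2))
```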

Many of the research issues surrounding criminal careers were addressed in the NRC report on criminal careers and "career criminals" (Blumstein et al. 1986), and a concise version of the issues appeared in Science (Blumstein and Cohen 1987). The NRC report was replete with analyses using the limited data that were then available. But that was a quarter century ago; offenders' patterns could well have changed since then, and much of the data were quite preliminary. There has been some progress since, including some important developments in measuring key parameters. Some of those developments were summarized by Piquero et al. (2003). One important contribution is that by Piquero et al. (2007), involving a number of significant analyses of David Farrington's Cambridge Study in Delinquent Development. But there is still much to be done in getting better measurements of the distributions of participation rates among various subpopulations, offending frequency, career length, termination rate, and trends in specialization and in seriousness of the offenses as the career progresses.

In an interview as he was leaving office as the director of NIJ, I questioned Jeremy Travis (2000) about the absence of any research on crime-control theory during his time as director. He acknowledged that this was “a major regret. We wanted to be able to update the lambda estimates, in part because they provide the basis for so much policy debate and discussion and because they have been critiqued by scholars as being inadequate or limited. I think that in the next 5 years, the Institute will be able to mount a major initiative to re-estimate the rates of offending.”

That 5-year expectation was never realized. It is certainly the case that there could have been significant changes in the offender population: the massive growth in incarceration since the original research on crime-control theory could well have changed the offending population and certainly its parameters. Also, there has been very little research into the determinants of the criminal-career parameters, and that is very much needed, particularly to learn how those parameters have changed.

Many Opportunities in the Future

As we consider some of the challenges for quantitative criminology in the future, it will be important to take advantage of the generic changes in the analytics, computing, and data resources that are becoming available. We can certainly expect the technology implied by Moore's Law to give us greater computing speeds; we can expect to be able to work with larger and larger data sets, providing richer samples that can be partitioned in interesting ways; and we can expect statisticians, econometricians, computer scientists specializing in machine learning, and operations researchers to provide us with richer methodologies to handle the more subtle differences that will increasingly become of interest.

Also, we should be able to take advantage of the major developments in brain imaging and in genetic analysis to help us sort out whether those aspects of physiology are major determinants of criminal propensity or whether they provide an opportunity to bring a new dimension to partitioning what we already know. The now-classic paper by Caspi et al. (2002) showed that the adult violence generated by maltreatment of young people depended strongly on their genetic makeup; those with the right genotype were less likely to move on to adult violence and those without it more likely. Thus, the long-standing search for a "crime gene" is much more likely to show itself as a gene-environment interaction than as something generated by genes alone.

On the other hand, many of our current problems will continue to plague us. There will continue to be reluctance to record official statistics honestly and completely when political decisions depend on them. Self-reports will continue to be expensive because of the large amount of increasingly costly labor involved, and privacy concerns may limit access to certain information that will be important to criminology.

With the richer tools that we can anticipate will serve the missions of quantitative criminology, I expect that over the next decade we will see much better information on the factors contributing to individual criminal behavior, on improved decision-making in sentencing, on the effects of criminal justice interventions—and especially incarceration—on individual offenders and on crime patterns, and on one of the most troubling aspects of criminal justice operations—the racial disproportionality at all levels of the criminal justice system.

There have been a sizable number of longitudinal studies seeking to identify factors contributing to involvement in crime. It is striking that there has been very limited aggregation of information across these various studies; virtually all the research output—and there has been a considerable amount—derives from the participants of an individual study. This has been true even of the three studies in Pittsburgh, Denver, and Rochester that were funded by the Office of Juvenile Justice and Delinquency Prevention (OJJDP) with an explicit requirement to use similar data formats in order to enable joint studies. One would hope that, with the greater facility in analyzing complex information emerging from the machine-learning community, we will see some aggregation of the very large databases accumulated in these longitudinal studies. This could contribute to new insights resulting from the diverse populations, the diverse analytic perspectives, and the much larger aggregate samples that can become available.

In addition to the opportunity to learn much more about individual offenders, there is still an enormous gap in our knowledge of co-offending. This is obviously a result of the complexity of individual social networks, whereby any individual offender can be close to many individuals who do not offend and almost certainly to a number who do; those latter relationships are particularly important to document, along with the leader-follower relationships within them. This is particularly salient for juveniles, whose offending much more often involves co-offending than is the case with adults. The National Longitudinal Study of Adolescent Health (Add Health) has initiated such efforts by collecting information on its subjects' peers, and that should open new opportunities for studying co-offending.

A key activity in the operation of the criminal justice system involves sentencing decisions by judges, which are affected by their own backgrounds and values, their political position and how it affects their retention in office, the influence of prosecutors in deciding what charges to apply in any particular case, guidance provided by sentencing commissions, and mandates by legislatures restricting their choices in a variety of ways. A significant fraction of the research on judges' sentencing decisions has focused on trying to assess racial bias in those decisions, but that research is inherently troubled by the fact that selection bias by prosecutors can mask whatever bias the judge might introduce. Understanding the decision-making process will provide important information to sentencing commissions and contribute to understanding how best to deal with offenders.

Since incarceration is the primary instrument by which the criminal justice system responds to individual offenders, it has obviously been an important focus of research attention. There have been many efforts to estimate the deterrent effect of incarceration using econometric models, but these have been found to have serious problems of robustness and identification. Studies of incapacitation inherently depend on improved knowledge of criminal careers, and what we know in that regard is still very limited. It is also very important to come to understand the effects of time in prison on subsequent offending by the individuals incarcerated, and the degree to which their experience there is rehabilitative or criminogenic; knowledge about that is inherently limited by the ability to control for the selection effects embedded in judges' decisions about who goes to prison and how long a sentence they serve. There have been extensive debates on the direction of those effects; it is almost certainly the case that some individuals benefit and others are harmed, but we still know very little about who is affected in which direction.

One of the continuing concerns is the major racial disproportionality at all stages of the criminal justice system. The black-to-white incarceration-rate ratio is currently about 6:1, down from recent ratios as high as 8:1. While this may convey to some a sense of gross discrimination, and some rhetoric attributes it all to discrimination, it is clear that most of it is attributable to differential involvement in the kinds of crimes that lead to incarceration. This is not to say that there is no discrimination within the system, and such discrimination should certainly be a policy target. Particularly as we get information systems linking rap-sheet data and improved disposition reporting with data on incarceration, employment, and family structure, we will have much greater opportunity to identify the various sources of the differential involvement.

There are major opportunities for significant advances in criminological research over the next decade. We have much better tools, many more people skilled in using them wisely, and an accumulation of rich data sets that can be applied to this endeavor. The major shortcoming has been, and continues to be, funding to carry out these efforts. The National Institute of Justice has hobbled along on annual budgets on the order of $50 million, in sharp contrast to the $400 million provided to the National Institute of Dental Research. Current direct expenditures on the criminal justice system total more than $200 billion. A general rule of thumb in industry allocates at least 1% of gross volume to research and development to improve operations; applying that rule would call for at least $2 billion to be applied to research on the problem of crime and the criminal justice system. It would be unreasonable to expect to ramp up to that level very quickly, but in the face of the need and the opportunity, one would hope to see the resources grow significantly in coming years to meet those needs and exploit the opportunities.