Fostering Statistical Rigor for Evidence-Based Policy at the National Academies of Sciences, Engineering, and Medicine

Chapter in Statistics in the Public Interest

Abstract

This chapter highlights Steve Fienberg’s multifaceted, pro bono contributions to the work of the National Academies of Sciences, Engineering, and Medicine over a 40-year time frame. In addition to chairing the Committee on National Statistics (CNSTAT), Steve served on 15 study committees as well as on several oversight bodies under the auspices of the National Academy of Sciences and National Research Council—including the Report Review Committee (RRC), which he cochaired for 8 years. Described here are several examples of his impact on public policy issues through his service on CNSTAT committees and the RRC. Not covered here (but nevertheless important) was his role as an enthusiastic mentor to many National Academies staff, including the three authors of this chapter, who have greatly benefited from his wise counsel.

Notes

1. Connie Citro is a senior scholar with the Committee on National Statistics (she directed CNSTAT from 2004 to 2017); Mike Cohen is a senior program officer with CNSTAT; Porter Coggeshall directed the Report Review Committee from 1992 to 2017.

2. National Research Council. 2003. The Polygraph and Lie Detection. Washington, DC: The National Academies Press. https://doi.org/10.17226/10420

3. In addition to serving as RRC monitor and cochair, Steve reviewed a total of 15 National Academies reports. We thank RRC staff member Dalia Hedges, who provided the appended lists of the reports that Steve monitored and reviewed.

4. One important difference is that the review comments on National Academies reports are submitted to the institution, which is represented by the RRC monitor and a review coordinator appointed by the Academies division overseeing the study. Together they assess the adequacy of the authoring committee's responses to all review comments. The reviewers themselves never see these responses, nor do they see the revisions to the report until it is published.

5. The review of a National Academies report typically takes 10–12 weeks to complete (i.e., from the time the draft is sent to reviewers to the date of RRC signoff).

6. National Academy of Sciences. 2009. Global Security Engagement: A New Model for Cooperative Threat Reduction. Washington, DC: The National Academies Press. https://doi.org/10.17226/12583

7. National Research Council. 2011. Review of the Scientific Approaches Used During the FBI's Investigation of the 2001 Anthrax Letters. Washington, DC: The National Academies Press. https://doi.org/10.17226/13098

8. National Research Council. 2013. Monitoring Progress Toward Successful K-12 STEM Education: A Nation Advancing? Washington, DC: The National Academies Press. https://doi.org/10.17226/13509

9. National Academies of Sciences, Engineering, and Medicine. 2016. Families Caring for an Aging America. Washington, DC: The National Academies Press. https://doi.org/10.17226/23606

10. National Academies of Sciences, Engineering, and Medicine. 2016. Mainstreaming Unmanned Undersea Vehicles into Future U.S. Naval Operations: Abbreviated Version of a Restricted Report. Washington, DC: The National Academies Press. https://doi.org/10.17226/21862

11. National Research Council. 2014. Identifying the Culprit: Assessing Eyewitness Identification. Washington, DC: The National Academies Press. https://www.nap.edu/catalog/18891/identifying-the-culprit-assessing-eyewitness-identification

12. The use of receiver operating characteristic (ROC) curves to measure the accuracy of eyewitness identification decisions has been widely debated. See https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5256436/. An illustrative sketch of the ROC calculation appears after these notes.

13. From a case study that was prepared by the RRC staff for discussion at the 2015 RRC annual meeting.

14. Ibid.

15. National Research Council. 2009. A Guide to the Methodology of the National Research Council Assessment of Doctorate Programs. Washington, DC: The National Academies Press. https://doi.org/10.17226/12676

16. National Research Council. 2011. A Data-Based Assessment of Research-Doctorate Programs in the United States (with CD). Washington, DC: The National Academies Press. https://doi.org/10.17226/12994

17. National Research Council. 1995. Research-Doctorate Programs in the United States: Continuity and Change. Washington, DC: The National Academies Press.

18. Jones, Lyle V., Gardner Lindzey, and Porter E. Coggeshall. 1982. An Assessment of Research-Doctorate Programs in the United States (five volumes). Washington, DC: The National Academies Press. This 1982 report was undertaken under the auspices of the Conference Board of Associated Research Councils in the United States, which included representatives of the American Council of Learned Societies, the American Council on Education, the Social Science Research Council, and the National Research Council. The 1995 report was undertaken by the National Research Council alone.

19. The names of the 20 reviewers may be found in the Preface and Acknowledgments section of the 2009 report.

20. From a case study that was prepared by the RRC staff for discussion at the 2009 RRC annual meeting.

21. The complex technique for estimating the interquartile range of program rankings is described in Appendix A of the 2009 report; a simplified, generic sketch of the idea appears after these notes.

22. The 2011 assessment report presents a brief description of the doctoral program data collected and how the rankings were calculated, including a detailed example of this calculation for a program in economics.

23. The assessment data covered doctoral programs in 61 fields at 222 institutions. The institutions and departments were not identified in the spreadsheet, to prevent reviewers, the monitor, the coordinator, and staff from leaking the assessment results before the report was publicly released.

24. From a case study that was prepared by the RRC staff for discussion at the 2011 RRC annual meeting.

25. Ibid.

26. National Research Council. 2011. Op. cit., p. 1.

27. From a case study that was prepared by the RRC staff for discussion at the 2011 RRC annual meeting.

28. The Chronicle of Higher Education. September 30, 2010. "A Critic Sees Deep Problems in the Doctoral Rankings." https://www.chronicle.com/article/A-Critic-Sees-Deep-Problems-in/124725

29. Much to the surprise of many RRC members, as many as 5 percent of all National Academies studies were found to involve some form of original data collection, analysis, or modeling.
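
Note 12 refers to ROC analysis of eyewitness identification decisions. As a purely illustrative aid, with hypothetical counts rather than code or data from any report cited above, the following Python sketch shows the basic calculation: at each confidence threshold, the cumulative rate of correct identifications in culprit-present lineups is paired with the cumulative rate of false identifications in culprit-absent lineups, tracing out the ROC curve.

    def roc_points(hits, false_alarms, n_present, n_absent):
        # hits[i], false_alarms[i]: identification counts at confidence
        # level i, ordered from highest to lowest confidence.
        points, h, f = [], 0, 0
        for hc, fc in zip(hits, false_alarms):
            h += hc
            f += fc
            points.append((f / n_absent, h / n_present))
        return points

    # Hypothetical counts: three confidence levels (high, medium, low)
    # from 100 culprit-present and 100 culprit-absent lineups.
    print(roc_points([30, 20, 10], [2, 8, 15], 100, 100))
    # -> [(0.02, 0.3), (0.1, 0.5), (0.25, 0.6)]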
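Note 21 cites Appendix A of the 2009 report for the technique of estimating interquartile ranges of program rankings. The sketch below is not that procedure; it is a generic, hypothetical illustration of the underlying idea: rerank programs many times under perturbed importance weights and summarize each program's rank spread by its interquartile range.

    import random
    import statistics

    def rank_iqr(programs, base_weights, n_draws=500, noise=0.2, seed=1):
        # programs: {name: [measure values]}; base_weights: one weight
        # per measure. Each draw jitters the weights, scores every
        # program, and records its rank (1 = best).
        rng = random.Random(seed)
        ranks = {p: [] for p in programs}
        for _ in range(n_draws):
            w = [wi * (1 + rng.uniform(-noise, noise)) for wi in base_weights]
            scores = {p: sum(wi * x for wi, x in zip(w, vals))
                      for p, vals in programs.items()}
            for r, p in enumerate(sorted(scores, key=scores.get, reverse=True), 1):
                ranks[p].append(r)
        out = {}
        for p, rs in ranks.items():
            q1, _, q3 = statistics.quantiles(rs, n=4)  # 25th, 75th percentiles
            out[p] = (q1, q3)
        return out

    # Hypothetical example: three programs rated on two measures.
    demo = {"A": [0.9, 0.4], "B": [0.6, 0.8], "C": [0.5, 0.5]}
    print(rank_iqr(demo, base_weights=[0.5, 0.5]))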

National Academies Reports Monitored by Steve Fienberg

[in chronological order]

National Academies Reports Reviewed by Steve Fienberg

[in chronological order]

Author information

Correspondence to Constance F. Citro.

Copyright information

© 2022 Springer Nature Switzerland AG

Cite this chapter

Citro, C.F., Cohen, M.L., Coggeshall, P.E. (2022). Fostering Statistical Rigor for Evidence-Based Policy at the National Academies of Sciences, Engineering, and Medicine. In: Carriquiry, A.L., Tanur, J.M., Eddy, W.F. (eds) Statistics in the Public Interest. Springer Series in the Data Sciences. Springer, Cham. https://doi.org/10.1007/978-3-030-75460-0_21
