“Jews are just like everyone else…only more so.” — Rabbi Lionel Blue
Abstract
Jewish community research and demography have faced their share of challenges in the past decade. These challenges, such as substantially decreased rates of survey participation and the decline of landline ownership, may have particular impact on Jewish community research, but they reflect broader, overarching trends. This paper documents these trends and notes research showing that while the costs of probability surveys have risen as a consequence, data quality has remained consistently strong. Research comparing telephone surveys to online Internet surveys shows substantially better data quality in telephone research, even at very low response rates, and these differences can be especially pronounced in Jewish research. I provide a typology of the errors found in survey research, show how such errors can arise in Jewish community research, and suggest how they can be avoided. The paper then briefly reviews common approaches to Jewish community research and discusses each in the context of this typology of potential survey errors. As is noted elsewhere in this journal, despite increased costs, RDD telephone interviewing remains a uniquely powerful and useful data-gathering tool for Jewish community research: to date, no other approach has produced survey estimates consistent both with past surveys in the same community and with trends found in similar communities nationwide. The challenge of survey research, then, continues to be finding designs that combine RDD interviewing with other methods that may be less desirable in terms of data quality but are significantly less costly to administer.
Notes
Because telephone companies tend to assign residential numbers in batches, dialing only within “100-blocks” (for example, 888-999-1000 through 888-999-1099) that include at least one residential phone number listed in the White Pages avoids vast quantities of non-working and business numbers without dramatically losing residential numbers.
Initially, this was accomplished with “cell weighting,” but the development of iterative proportional fitting, also known as “raking,” allowed researchers to develop weighting routines with a much larger number of targets and target breaks. Given these developments, it is not uncommon today for studies to be weighted by nearly a dozen parameters, such as educational attainment, population density, age, and others.
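Raking can be illustrated with a toy example. The sketch below (all counts and margin targets are hypothetical, chosen only for illustration) alternately rescales the rows and then the columns of a sample cross-tabulation until both margins match the population targets:

```python
# Minimal sketch of iterative proportional fitting ("raking") on a toy
# 2 x 2 sample: rows = age group, columns = educational attainment.
# All counts and targets below are hypothetical, for illustration only.

def rake(table, row_targets, col_targets, iters=50):
    """Alternately scale rows, then columns, toward the target margins."""
    for _ in range(iters):
        # Scale each row so its sum matches the row target.
        for i, target in enumerate(row_targets):
            row_sum = sum(table[i])
            table[i] = [cell * target / row_sum for cell in table[i]]
        # Scale each column so its sum matches the column target.
        for j, target in enumerate(col_targets):
            col_sum = sum(row[j] for row in table)
            for row in table:
                row[j] *= target / col_sum
    return table

# Unweighted sample counts (over-represents older, college-educated adults).
sample = [[200.0, 100.0],   # under 50: no degree, degree
          [300.0, 400.0]]   # 50+:      no degree, degree

# Population margins the weighted sample should reproduce.
raked = rake(sample, row_targets=[550.0, 450.0], col_targets=[600.0, 400.0])

row_sums = [round(sum(r), 1) for r in raked]
col_sums = [round(sum(r[j] for r in raked), 1) for j in range(2)]
print(row_sums, col_sums)  # both margins converge to the targets
```

With many weighting parameters, production systems repeat exactly this row/column cycling across all target dimensions until the weights stabilize.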
These are devices that can screen calls from unknown numbers and prevent them from ever ringing on a particular telephone.
The practice of advertising about a study beforehand is a potential concern if the practice leads to some Jewish households having a much higher probability of being reached by these marketing efforts than other Jewish households, as this could potentially lead to bias in the sample. Communities need to fully weigh the relative advantages and disadvantages of this practice with their researchers to make an informed decision as to whether the benefits outweigh the hazards.
Given that half of households own only cell phones and telephone listings are predominantly landline-based, the one-third coverage given in the above example would be only one-sixth coverage today.
With around 50% of US households now owning only a cell phone, if 25% of a community’s cell phone households have out-of-area numbers, this translates into a 12.5% coverage gap. Studies that utilize lists of all kinds (federation-based lists, consumer-based lists, etc.) can reduce this under-coverage to the degree that such lists cover the Jewish population and include out-of-area cell phone numbers. If, for example, the federation-based list includes half of all Jews, and consumer lists of cell phone households cover another 5% to 10%, then the under-coverage in this example could be reduced to as little as 5% of all households.
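The arithmetic in this note can be made explicit. The sketch below reproduces the note's illustrative figures (the 57.5% combined list coverage is a hypothetical midpoint of the ranges given, not a measured value):

```python
# Back-of-the-envelope coverage arithmetic from the note above.
# All percentages are the note's illustrative figures, not measured values.

cell_only = 0.50      # share of households that are cell-phone-only
out_of_area = 0.25    # share of those cell households with out-of-area numbers
gap = cell_only * out_of_area
print(f"{gap:.1%}")   # 12.5% of households unreachable by in-area RDD

# Lists can close part of the gap: suppose a federation list covers half of
# all Jewish households and consumer lists add roughly another 7.5%.
list_coverage = 0.50 + 0.075           # hypothetical midpoint of 5-10%
remaining_gap = gap * (1 - list_coverage)
print(f"{remaining_gap:.1%}")          # roughly 5% residual under-coverage
```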
For example, this could be some form of List-Assisted Disproportionate Stratified Design (LADS), as detailed later in this paper.
Another concern with regard to post-survey adjustments is the proper weighting of designs that disproportionately sample the target area. For example, designs that over-sample areas of expected high Jewish incidence must make adjustments so that, once weighted, all sample members are represented in proportion to their probability of selection. Given the complexity of many Jewish research designs and the limited information available in small communities, post-survey weighting adjustments have often become highly complex, laborious, and reliant at many points on assumptions about the target population and study area, which may vary in their degree of accuracy.
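The base (design) weight that corrects for disproportionate sampling is simply the inverse of each stratum's selection probability. A minimal sketch, with hypothetical strata and sampling fractions:

```python
# Sketch of base (design) weights for a disproportionate stratified design.
# The strata, sampling fractions, and respondent counts are hypothetical.

# Suppose a high-incidence area is sampled at 1-in-50 households and a
# low-incidence area at 1-in-200. Base weight = 1 / selection probability.
strata = {
    "high_incidence": {"prob": 1 / 50,  "respondents": 400},
    "low_incidence":  {"prob": 1 / 200, "respondents": 100},
}

for name, s in strata.items():
    # Each respondent stands in for this many households in its stratum.
    s["weight"] = 1.0 / s["prob"]

# Weighted totals estimate the number of households in each stratum.
estimates = {name: s["weight"] * s["respondents"] for name, s in strata.items()}
print(estimates)  # {'high_incidence': 20000.0, 'low_incidence': 20000.0}
```

In practice these base weights are then raked to population targets, which is where the complexity and the reliance on assumptions described above enter.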
The SSRS Omnibus survey is a national, weekly, dual-frame, bilingual telephone survey of American adults, run one to four times weekly at one thousand interviews per wave. Each wave consists of question inserts paid for by various clients, as well as a large battery of demographic questions.
This includes modeling conducted by the author and detailed in a forthcoming paper.
References
Blumberg, S.J., and J.V. Luke. 2016. Wireless substitution: Early release of estimates from the National Health Interview Survey, July–December 2015. National Center for Health Statistics. http://www.cdc.gov/nchs/data/nhis/earlyrelease/wireless201605.pdf.
Bradburn, Norman. 1992. A response to the nonresponse problem. 1992 American Association for Public Opinion Research Presidential Address. Public Opinion Quarterly 56(3): 391–397.
Brehm, J. 1994. Stubbing our toes for a foot in the door? Prior contact, incentives, and survey response. International Journal of Public Opinion Research 6(1): 45–64.
Chang, LinChiat, and Jon A. Krosnick. 2009. National surveys via RDD telephone interviewing versus the Internet: Comparing sample representativeness and response quality. Public Opinion Quarterly. doi:10.1093/poq/nfp075.
Curtin, R., Stanley Presser, and Eleanor Singer. 2000. The effects of response rate changes on the index of consumer sentiment. Public Opinion Quarterly 64: 413–428.
Curtin, R., Stanley Presser, and Eleanor Singer. 2005. Changes in telephone survey nonresponse over the past quarter century. Public Opinion Quarterly 69(1): 87–98.
Dutwin, D., and Paul Lavrakas. 2016. Trends in dual-frame response, 2007–2015. Survey Practice 9(3). http://www.surveypractice.org/index.php/SurveyPractice/article/view/346/html_62.
Dutwin, David, and Trent Buskirk. 2016. Telephone sample surveys: Dearly beloved or nearly departed? Trends in survey errors in the age of declining response rates. Unpublished manuscript under peer review.
Dutwin, David, and Trent D. Buskirk. 2017. Apples to oranges or gala versus golden delicious? Comparing data quality of nonprobability Internet samples to low response rate probability samples. Public Opinion Quarterly, 2017 Special Issue on the Future of Survey Research (in press).
Dutwin, David, Eran Ben-Porath, and Ron Miller. 2014. U.S. Jewish population studies: Opportunities and challenges. In The social scientific study of Jewry: Sources, approaches, debates, ed. Uzi Rebhun, 55–73. Oxford: Oxford University Press.
Groves, Robert M. 2006. Nonresponse rates and nonresponse bias in household surveys. Public Opinion Quarterly 70(5): 646–675.
Groves, Robert M. 2011. Three eras of survey research. Public Opinion Quarterly 75(5): 861–871.
Groves, Robert M., F.J. Fowler Jr., M.P. Couper, J.M. Lepkowski, E. Singer, and R. Tourangeau. 2009. Survey methodology, 2nd ed. Hoboken, NJ: Wiley.
Groves, Robert M., and Emilia Peytcheva. 2008. The impact of nonresponse rates on nonresponse bias: A meta-analysis. Public Opinion Quarterly 72: 167–189.
Guterbock, Thomas, and Grant Benson. 2016. Trends in costs of telephone surveys: A survey of organizations’ productivity. Unpublished manuscript.
Keeter, Scott, Carolyn Miller, Andrew Kohut, Robert M. Groves, and Stanley Presser. 2000. Consequences of reducing nonresponse in a large national telephone survey. Public Opinion Quarterly 64: 125–148.
Keeter, Scott, Courtney Kennedy, Michael Dimock, Jonathan Best, and Peyton Craighill. 2006. Gauging the impact of growing nonresponse on estimates from a national RDD telephone survey. Public Opinion Quarterly 70(5): 759–779.
Malhotra, Neil, and Jon A. Krosnick. 2007. The effect of survey mode and sampling on inferences about political attitudes and behavior: Comparing the 2000 and 2004 ANES to Internet surveys with nonprobability samples. Political Analysis 15: 286–323.
Pew Research Center. 2009. Magnet or sticky? A state-by-state typology. http://www.pewsocialtrends.org/2009/03/11/magnet-or-sticky/.
Pew Research Center. 2012. Assessing the representativeness of public opinion surveys. http://www.people-press.org/2012/05/15/assessing-the-representativeness-of-public-opinion-surveys/.
Saris, William E., and Irmtraud N. Gallhofer. 2014. Design, evaluation, and analysis of questionnaires for survey research, 2nd ed. New York: Wiley.
Silver, Nate. 2015. Is the polling industry in stasis or in crisis? http://fivethirtyeight.com/tag/polling-accuracy/. Retrieved February 4, 2016.
Singer, Eleanor. 2006. Introduction: Nonresponse bias in household surveys. Public Opinion Quarterly 70(5): 637–645.
Steeh, Charlotte. 1981. Trends in nonresponse rates, 1952–1979. Public Opinion Quarterly 45(1): 40–57.
Tighe, Elizabeth, Leonard Saxe, Charles Kadushin, Raquel Magidin de Kramer, Begli Nursahedov, Janet Aronson, and Lynn Cherny. 2011. Estimating the Jewish population of the United States: 2000–2010. Waltham, MA: Steinhardt Social Research Institute.
Tourangeau, Roger, and Thomas J. Plewes, eds. 2013. Nonresponse in social science surveys: A research agenda. A report by the National Research Council of the National Academies. Washington, DC: National Academies Press.
Traugott, Michael W. 2005. The accuracy of the national preelection polls in the 2004 presidential election. Public Opinion Quarterly 69(5): 642–654.
Walker, R., R. Pettit, and J. Rubinson. 2009. A special report from the Advertising Research Foundation: The foundations of quality initiative: A five-part immersion into the quality of online research. Journal of Advertising Research 49: 464–485.
Yeager, David S., Jon A. Krosnick, LinChiat Chang, Harold S. Javitz, Matthew S. Levendusky, Alberto Simpser, and Rui Wang. 2011. Comparing the accuracy of RDD telephone surveys and Internet surveys conducted with probability and non-probability samples. Public Opinion Quarterly 75: 709–747.
Dutwin, D. Everything You Need to Consider When Deciding to Field a Survey of Jews: Choices in Survey Methods and Their Consequences on Quality. Cont Jewry 36, 297–318 (2016). https://doi.org/10.1007/s12397-016-9189-y