Computer Networks in Field Research

  • Sara Kiesler
  • John Walsh
  • Lee Sproull
Part of the Social Psychological Applications to Social Issues book series (SPAS, volume 2)

Abstract

In applied social research, the questions we can ask and the data we can collect depend on our tools. Computers have made possible many new tools—for instrument design, sampling, scheduling research, coding and editing, data entry, data cleaning, scale and index construction, data base organization and retrieval, statistical analysis, documentation, and report writing (Karweit & Meyers, 1983, pp. 379–414). In the past two decades, these tools have led to efficiencies and a scale of research heretofore impossible, but the data collection process remains much as it was. Today, most field researchers rely on paper questionnaires, personal observations, and interviews carried out face-to-face or by telephone. For example, of the 90 articles published in the Journal of Applied Social Psychology in 1989, 46 report data from a paper-and-pencil survey or face-to-face interview; in another 22, investigators used a personality inventory or similar paper-and-pencil instrument as part of experimental research. In this chapter we consider new tools for data collection based on networking and computers.1 These tools can increase the efficiency and scale of research. More interesting, they make possible new ways of collecting data and give access to data that in the past were virtually unobtainable.

References

  1. Bailey, L., Moore, T., & Bailar, B. (1978). An interviewer variance study for the eight impact cities of the National Crime Survey. Journal of the American Statistical Association, 73, 16–23.
  2. Beniger, J. R. (1986). The control revolution. Cambridge, MA: Harvard University Press.
  3. Binik, Y. M., Westbury, C. F., & Servan-Schreiber, D. (1989). Case histories and shorter communications. Behaviour Research and Therapy, 27, 303–306.
  4. Bradburn, N. (1983). Response effects. In P. H. Rossi, J. D. Wright, & A. B. Anderson (Eds.), Handbook of survey research (pp. 289–328). New York: Academic Press.
  5. Bradburn, N. M., & Sudman, S. (1988). Polls and surveys. San Francisco: Jossey-Bass.
  6. Carley, K. (1988). Formalizing the social expert’s knowledge. Sociological Methods and Research, 17, 165–232.
  7. Collier, J., Jr. (1967). Visual anthropology: Photography as a research method. New York: Holt, Rinehart & Winston.
  8. Danowski, J. A., & Edison-Swift, P. (1985). Crisis effects on intraorganizational computer-based communication. Communication Research, 12, 251–270.
  9. Dawes, R. M. (1988). Rational choice in an uncertain world. San Diego: Harcourt Brace Jovanovich.
  10. Dubrovsky, V., Kiesler, S., & Sethna, B. (1991). The equalization phenomenon: Status effects in computer-mediated and face-to-face decision making groups. Human-Computer Interaction, 6, 119–146.
  11. Dutton, W., & Guthrie, K. (1990). Santa Monica’s public electronic network: The political ecology of a teledemocracy project. Unpublished manuscript, Annenberg School for Communication, University of Southern California, Los Angeles.
  12. Eklundh, K. (1986). Dialogue processes in computer-mediated communication. Linköping, Sweden: Linköping University.
  13. Erdman, H. P., Klein, M. H., & Greist, J. H. (1985). Direct patient computer interviewing. Journal of Consulting and Clinical Psychology, 53, 760–773.
  14. Eveland, J. D., & Bikson, T. K. (1988). Work group structures and computer support: A field experiment. Transactions on Office Information Systems, 6, 354–379.
  15. Ferrara, R., & Nolan, R. L. (1974). New look at computer data entry. In W. C. House (Ed.), Data base management (pp. 25–46). New York: Petrocelli Books.
  16. Fienberg, S. E., Martin, M. E., & Straf, M. L. (Eds.). (1985). Sharing research data. Washington, DC: National Academy Press.
  17. Finholt, T., & Sproull, L. S. (1990). Electronic groups at work. Organization Science, 1, 41–64.
  18. Goyder, J. (1987). The silent minority. Boulder, CO: Westview.
  19. Groves, R. M., & Kahn, R. L. (1979). Surveys by telephone: A national comparison with personal interviews. New York: Academic Press.
  20. Groves, R. M., & Mathiowetz, N. A. (1984). Computer assisted telephone interviewing: Effects on interviewers and respondents. Public Opinion Quarterly, 48, 356–359.
  21. Hagstrom, W. O. (1965). The scientific community. Carbondale, IL: Southern Illinois University Press.
  22. Hannaway, J. (1989). Signals and signalling: The workings of an administrative system. Fair Lawn, NJ: Oxford University Press.
  23. Hargens, L. L. (1975). Patterns of scientific research. Washington, DC: American Sociological Association.
  24. Hesse, B. W., Sproull, L., Kiesler, S., & Walsh, J. P. (in press). Returns to science: Network and scientific research in oceanography. Communications of the ACM. Working paper, Carnegie Mellon University.
  25. Hiltz, S. R., & Turoff, M. (1978). The network nation: Human communication via computer. New York: Addison-Wesley.
  26. Huff, C., Sproull, L., & Kiesler, S. (1989). Computer communication and organizational commitment: Tracing the relationship in a city government. Journal of Applied Social Psychology, 19, 1371–1391.
  27. Karweit, N., & Meyers, E. D., Jr. (1983). Computers in survey research. In P. H. Rossi, J. D. Wright, & A. B. Anderson (Eds.), Handbook of survey research (pp. 379–414). New York: Academic Press.
  28. Katz, J. E. (1988). U.S. telecommunications privacy policy: Socio-political responses to technological advances. Telecommunications Policy, 12, 353–368.
  29. Kiesler, S., & Sproull, L. S. (1986). Response effects in the electronic survey. Public Opinion Quarterly, 50, 402–413.
  30. Kraut, R. E., & Streeter, L. A. (1990). Satisfying the need to know: Interpersonal information access. Unpublished manuscript, Bell Communications Research, Morristown, NJ.
  31. Larson, R. F., & Catton, W. R., Jr. (1959). Can the mail-back bias contribute to a study’s validity? American Sociological Review, 24, 243–245.
  32. Lederberg, J. (1978). Digital communications and the conduct of science: The new literacy. Proceedings of the IEEE, 66, 1314–1319.
  33. Licklider, J. C. R., & Vezza, A. (1978). Applications of information networks. Proceedings of the IEEE, 66, 1330–1346.
  34. Martin, C. L., & Nagao, D. H. (1989). Some effects of computerized interviewing on job applicant responses. Journal of Applied Psychology, 74, 72–80.
  35. Marwell, G., & Ames, R. (1979). Experiments on the provision of public goods. I. Resources, interest, group size and the free-rider problem. American Journal of Sociology, 84, 1335–1360.
  36. Merton, R. K. (1968). The Matthew effect in science. Science, 159, 56–63.
  37. Merton, R. K., & Zuckerman, H. (1973). Age, aging and age structure in science. In N. W. Storer (Ed.), The sociology of science (pp. 497–559). Chicago: University of Chicago Press.
  38. Messick, D. M., & Brewer, M. B. (1983). Solving social dilemmas: A review. In L. Wheeler & P. Shaver (Eds.), Review of personality and social psychology (pp. 11–44). Beverly Hills, CA: Sage.
  39. Myers, D. (1987). Anonymity is part of the magic: Individual manipulation of computer-mediated communication contexts. Qualitative Sociology, 10, 251–266.
  40. National Research Council, Panel on Information Technology and the Conduct of Research, Committee on Science, Engineering, and Public Policy. (1989). Information technology and the conduct of research: The user’s view. Washington, DC: National Academy Press.
  41. Plutchik, R., & Karasu, T. B. (1991). Computers in psychotherapy: An overview. Computers in Human Behavior, 7, 33–44.
  42. Rugg, W. D. (1941). Experiments in wording questions II. Public Opinion Quarterly, 5, 91–92.
  43. Rule, J., & Brantley, P. (1990). Surveillance in the workplace: A new meaning to “personal” computing. Unpublished manuscript, State University of New York, Stony Brook.
  44. Salancik, G. R. (1983). Field simulations for organizational behavior research. In J. Van Maanen (Ed.), Qualitative methodology (pp. 191–208). Newbury Park, CA: Sage.
  45. Salomon, G. (in press). Studying the flute and the orchestra: Controlled experimentation vs. whole classroom research on computers. International Journal of Educational Research, 14, 31–41.
  46. Schneider, S. J., Walter, R., & O’Donnell, R. (1990). Computerized communication as a medium for behavioral smoking cessation treatment: Controlled evaluation. Computers in Human Behavior, 6, 141–151.
  47. Schuman, H., & Presser, S. (1981). Questions and answers in attitude surveys: Experiments in question form, wording, and context. New York: Academic Press.
  48. Servan-Schreiber, D., & Binik, Y. M. (1989). Extending the intelligent tutoring system paradigm: Sex therapy as intelligent tutoring. Computers in Human Behavior, 5, 241–259.
  49. Sproull, L. S. (1986). Using electronic mail for data collection in organizational research. Academy of Management Journal, 29, 159–169.
  50. Sproull, L. S., & Kiesler, S. (1986). Reducing social context cues: Electronic mail in organizational communication. Management Science, 32, 1492–1512.
  51. Sproull, L. S., & Kiesler, S. (1991). Connections: New ways of working in the networked organization. Cambridge, MA: MIT Press.
  52. Steeh, C. G. (1981). Trends in nonresponse rates, 1952–1979. Public Opinion Quarterly, 45, 40–57.
  53. Synodinos, N. E., & Brennan, J. M. (1988). Computer interactive interviewing in survey research. Psychology and Marketing, 5, 117–137.
  54. Tallent, N., & Reiss, W. J. (1959). A note on the unusually high rate of returns for a mail questionnaire. Public Opinion Quarterly, 23, 579–581.
  55. Thomsen, A., & Siring, E. (1980). On the causes and effects of non-response: Norwegian experiences. Artikler fra Statistisk Sentralbyrå, Nr. 121.
  56. Thorn, B. K., & Connolly, T. (1987). Discretionary data bases: A theory and some experimental findings. Communication Research, 14, 512–528.
  57. Turner, C., Dubnoff, S., & Kiesler, S. (1987). Research plan for studies of electronic interviews in health surveys. Unpublished manuscript, Carnegie Mellon University.
  58. Turner, C., & Martin, E. (Eds.). (1984). Surveying subjective phenomena (two volumes). New York: Russell Sage Foundation and Basic Books.
  59. U.S. Bureau of the Census (1968). 1960 census of population and housing, evaluation and research program (Series ER-60, No. 7): Effects of interviewers and crew leaders. Washington, DC: Author.
  60. U.S. Bureau of the Census (1979). 1970 census of population and housing, evaluation and research program (Series PHC(E)-13): Enumerator variance in the 1970 census. Washington, DC: U.S. Government Printing Office.
  61. U.S. Bureau of the Census (1971). Current Population Reports, P-20, No. 217. Washington, DC: Author.
  62. Waller, N. G., & Reise, S. P. (1989). Computerized adaptive personality assessment: An illustration with the absorption scale. Journal of Personality and Social Psychology, 57, 1051–1058.
  63. Walsh, J. P., Kiesler, S., Sproull, L. S., & Hesse, B. (in press). Self-selected and randomly selected respondents in a computer network survey. Public Opinion Quarterly. Working paper, Carnegie Mellon University.
  64. Waterton, J. J., & Duffy, J. C. (1984). A comparison of computer interviewing techniques and traditional methods in the collection of self-report alcohol consumption data in a field study. International Statistical Review, 52, 173–182.
  65. Whyte, W. F. (1984). Learning from the field. Newbury Park, CA: Sage.

Copyright information

© Springer Science+Business Media New York 1992

Authors and Affiliations

  • Sara Kiesler, Department of Social and Decision Sciences, Carnegie Mellon University, Pittsburgh, USA
  • John Walsh, Department of Sociology, University of Illinois at Chicago, Chicago, USA
  • Lee Sproull, School of Management, Boston University, Boston, USA
