Value Sensitive Design and Information Systems

Chapter in: Early engagement and new technologies: Opening up the laboratory

Part of the book series: Philosophy of Engineering and Technology (POET, volume 16)

Abstract

Value Sensitive Design is a theoretically grounded approach to the design of technology that accounts for human values in a principled and comprehensive manner throughout the design process. It employs an integrative and iterative tripartite methodology, consisting of conceptual, empirical, and technical investigations. We explicate Value Sensitive Design by drawing on three case studies. The first study concerns information and control of web browser cookies, implicating the value of informed consent. The second study concerns using high-definition plasma displays in an office environment to provide a “window” to the outside world, implicating the values of physical and psychological well-being and privacy in public spaces. The third study concerns an integrated land use, transportation, and environmental simulation system to support public deliberation and debate on major land use and transportation decisions, implicating the values of fairness, accountability, and support for the democratic process, as well as a highly diverse range of values that might be held by different stakeholders, such as environmental sustainability, opportunities for business expansion, or walkable neighborhoods. We conclude with direct and practical suggestions for how to engage in Value Sensitive Design.

The original version of this chapter is published by M.E. Sharpe (www.mesharpe.com). This chapter contains a reprint of the original paper with an additional commentary.

From Human-Computer Interaction and Management Information Systems: Foundations (Advances in Management Information Systems, Volume 5), ed. Ping Zhang & Dennis Galletta (Armonk, NY: M.E. Sharpe, 2006), pp. 348–372. Copyright © 2006 by M.E. Sharpe, Inc. Reprinted with permission.

Notes

  1. The Oxford English Dictionary definition of this sense of value is: “the principles or standards of a person or society, the personal or societal judgement of what is valuable and important in life” (Simpson and Weiner 1989).

References

  • Aberg, J., & Shahmehri, N. (2001). An empirical study of human web assistants: Implications for user support in Web information systems. In Proceedings of the conference on human factors in computing systems (CHI 2000) (pp. 404–411). New York: Association for Computing Machinery Press.

  • Ackerman, M. S., & Cranor, L. (1999). Privacy critics: UI components to safeguard users’ privacy. In Extended abstracts of CHI 1999 (pp. 258–259). New York: ACM Press.

  • Adler, P. S., & Winograd, T. (Eds.). (1992). Usability: Turning technologies into tools. Oxford: Oxford University Press.

  • Agre, P. E., & Rotenberg, M. (Eds.). (1998). Technology and privacy: The new landscape. Cambridge, MA: MIT Press.

  • Baier, A. (1986). Trust and antitrust. Ethics, 92, 231–260.

  • Beck, A., & Katcher, A. (1996). Between pets and people. West Lafayette: Purdue University Press.

  • Becker, L. C. (1977). Property rights: Philosophical foundations. London: Routledge & Kegan Paul.

  • Bellotti, V. (1998). Design for privacy in multimedia computing and communications environments. In P. E. Agre & M. Rotenberg (Eds.), Technology and privacy: The new landscape (pp. 63–98). Cambridge, MA: The MIT Press.

  • Bennet, W. J., & Delatree, E. J. (1978). Moral education in the schools. The Public Interest, 50, 81–98.

  • Bers, M. U., Gonzalez-Heydrich, J., & DeMaso, D. R. (2001). Identity construction environments: Supporting a virtual therapeutic community of pediatric patients undergoing dialysis. In Proceedings of the conference of human factors in computing systems (CHI 2001) (pp. 380–387). New York: Association for Computing Machinery.

  • Bjerknes, G., & Bratteteig, T. (1995). User participation and democracy: A discussion of Scandinavian research on system development. Scandinavian Journal of Information Systems, 7(1), 73–97.

  • Bødker, S. (1990). Through the interface – A human activity approach to user interface design. Hillsdale: Lawrence Erlbaum Associates.

  • Borning, A., Friedman, B., Davis, J., & Lin, P. (2005, January). Informing public deliberation: Value sensitive design of indicators for a large-scale urban simulation. In ECSCW 2005 (pp. 449–468). Dordrecht: Springer.

  • Borning, A., Waddell, P., & Förster, R. (2008). UrbanSim: Using simulation to inform public deliberation and decision-making. In Digital government (pp. 439–464). New York: Springer.

  • Boyle, M., Edwards, C., & Greenberg, S. (2000). The effects of filtered video on awareness and privacy. In Proceedings of conference on computer supported cooperative work (CSCW 2000) (pp. 1–10). New York: Association for Computing Machinery.

  • Bynum, T. W. (Ed.). (1985). Metaphilosophy, 16(4), 263–377. [Entire issue.]

  • Camp, L. J. (2000). Trust & risk in internet commerce. Cambridge, MA: MIT Press.

  • Campbell, R. L., & Christopher, J. C. (1996). Moral development theory: A critique of its Kantian presuppositions. Developmental Review, 16, 1–47.

  • Carroll, J. M., & Rosson, M. B. (2006). Dimensions of participation in information system design. In P. Zhang & D. Galletta (Eds.), Human-computer interaction and management information systems: Applications (Advances in management information systems, pp. 337–354). Armonk: M.E. Sharpe.

  • Cooper, M., & Rejmer, P. (2001). Case study: Localization of an accessibility evaluation. In Extended abstracts of the conference on human factors in computing systems (CHI 2001) (pp. 141–142). New York: Association for Computing Machinery Press.

  • Davis, J. (2006, August). Value sensitive design of interactions with UrbanSim indicators. Ph.D. dissertation, Department of Computer Science & Engineering, University of Washington.

  • Dieberger, A., Hook, K., Svensson, M., & Lonnqvist, P. (2001). Social navigation research agenda. In Extended abstracts of the conference on human factors in computing systems (CHI 2001) (pp. 107–108). New York: Association of Computing Machinery Press.

  • Dworkin, R. (1978). Taking rights seriously. Cambridge, MA: Harvard University Press.

  • Egger, F. N. (2000). “Trust me, I’m an online vendor”: Towards a model of trust for e-commerce system design. In Extended abstracts of the conference of human factors in computing systems (CHI 2000) (pp. 101–102). New York: Association for Computing Machinery.

  • Ehn, P. (1989). Work-oriented design of computer artifacts. Hillsdale: Lawrence Erlbaum Associates.

  • Faden, R., & Beauchamp, T. (1986). A history and theory of informed consent. New York: Oxford University Press.

  • Federal Trade Commission. (2000, May). Privacy online: Fair information practices in the electronic marketplace. A Report to Congress. Washington, DC: Federal Trade Commission.

  • Fogg, B. J., & Tseng, H. (1999). The elements of computer credibility. In Proceedings of CHI 1999 (pp. 80–87). Cambridge, MA: ACM Press.

  • Foot, P. (1978). Virtues and vices. Berkeley/Los Angeles: University of California Press.

  • Frankena, W. (1972). Value and valuation. In P. Edwards (Ed.), The encyclopedia of philosophy (Vol. 7–8, pp. 409–410). New York: Macmillan.

  • Franklin, J., Waddell, P., & Britting, J. (2002, November 21–24). Sensitivity analysis approach for an Integrated Land Development & Travel Demand Modeling System. Presented at the Association of Collegiate Schools of Planning 44th annual conference, Baltimore. Preprint available from www.urbansim.org

  • Freeman-Benson, B. N., & Borning, A. (2003, June). YP and urban simulation: Applying an agile programming methodology in a politically tempestuous domain. In Proceedings of the 2003 Agile Programming Conference, Salt Lake City. Preprint available from www.urbansim.org

  • Friedman, B. (Ed.). (1997a). Human values and the design of computer technology. New York: Cambridge University Press.

  • Friedman, B. (1997b). Social judgments and technological innovation: Adolescents’ understanding of property, privacy, and electronic information. Computers in Human Behavior, 13(3), 327–351.

  • Friedman, B., & Kahn, P. H., Jr. (1992). Human agency and responsible computing: Implications for computer system design. Journal of Systems Software, 17, 7–14.

  • Friedman, B., & Kahn, P. H., Jr. (2003). Human values, ethics, and design. In J. Jacko & A. Sears (Eds.), The human-computer interaction handbook. Mahwah: Lawrence Erlbaum Associates.

  • Friedman, B., & Millett, L. (1995). “It’s the computer’s fault” – Reasoning about computers as moral agents. In Conference companion of the conference on human factors in computing systems (CHI 95) (pp. 226–227). New York: Association for Computing Machinery Press.

  • Friedman, B., & Nissenbaum, H. (1996). Bias in computer systems. ACM Transactions on Information Systems, 14(3), 330–347.

  • Friedman, B., & Nissenbaum, H. (1997). Software agents and user autonomy. In Proceedings of the first international conference on autonomous agents (pp. 466–469). New York: Association for Computing Machinery Press.

  • Friedman, B., Kahn, P. H., Jr., & Howe, D. C. (2000a). Trust online. Communications of the ACM, 43(12), 34–40.

  • Friedman, B., Millett, L., & Felten, E. (2000b). Informed consent online: A conceptual model and design principles. University of Washington Computer Science & Engineering Technical Report 00-12-2.

  • Friedman, B., Howe, D. C., & Felten, E. (2002, January). Informed consent in the Mozilla browser: Implementing value-sensitive design. In HICSS. Proceedings of the 35th annual Hawaii international conference on system sciences, 2002 (pp. 247–257). IEEE.

  • Friedman, B., Kahn, P. H., Jr., & Hagman, J. (2003). Hardware companions?: What online AIBO discussion forums reveal about the human-robotic relationship. In Conference proceedings of CHI 2003 (pp. 273–280). New York: ACM Press.

  • Friedman, B., Kahn, P. H., Jr., Hagman, J., Severson, R. L., & Gill, B. (2006). The watcher and the watched: Social judgments about privacy in a public place. Human-Computer Interaction, 21(2), 235–272.

  • Fuchs, L. (1999). AREA: A cross-application notification service for groupware. In Proceedings of ECSCW 1999 (pp. 61–80). Dordrecht: Kluwer.

  • Galegher, J., Kraut, R. E., & Egido, C. (Eds.). (1990). Intellectual teamwork: Social and technological foundations of cooperative work. Hillsdale: Lawrence Erlbaum Associates.

  • Gallopin, G. C. (1997). Indicators and their use: Information for decision-making. In B. Moldan, S. Billharz, & R. Matravers (Eds.), Sustainability indicators: A report on the project on indicators of sustainable development. Chichester: Wiley.

  • Gewirth, A. (1978). Reason and morality. Chicago: University of Chicago Press.

  • Greenbaum, J., & Kyng, M. (Eds.). (1991). Design at work: Cooperative design of computer systems. Hillsdale: Lawrence Erlbaum Associates.

  • Grudin, J. (1988). Why CSCW applications fail: Problems in the design and evaluation of organizational interfaces. In Proceedings of the conference on computer supported cooperative work (CSCW ‘88) (pp. 85–93). New York: Association for Computing Machinery Press.

  • Habermas, J. (1979). Communication and the evolution of society (trans: McCarthy, T.). Boston: Beacon Press.

  • Habermas, J. (1984). The theory of communicative action, Vol 1. (trans: McCarthy, T.). Boston: Beacon Press.

  • Hagman, J., Hendrickson, A., & Whitty, A. (2003). What’s in a barcode: Informed consent and machine scannable driver licenses. In CHI 2003 extended abstracts of the conference on human factors in computing system (pp. 912–913). New York: ACM Press.

  • Harris Poll/Business Week. (2000). A growing threat. http://www.buisnessweek.come/2000/00_12/b3673010.htm

  • Hart, M. (1999). Guide to sustainable community indicators. Hart Environmental Data, PO Box 361, North Andover, MA 01845, second edition.

  • Herskovits, M. J. (1952). Economic anthropology: A study of comparative economics. New York: Knopf.

  • Hill, T. E., Jr. (1991). Autonomy and self-respect. Cambridge: Cambridge University Press.

  • Isaacs, E. A., Tang, J. C., & Morris, T. (1996). Piazza: A desktop environment supporting impromptu and planned interactions. In Proceedings of the conference on computer supported cooperative work (CSCW 96) (pp. 315–324). New York: Association for Computing Machinery Press.

  • Jacko, J. A., Dixon, M. A., Rosa, R. H., Jr., Scott, I. U., & Pappas, C. J. (1999). Visual profiles: A critical component of universal access. In Proceedings of the conference on human factors in computing systems (CHI 99) (pp. 330–337). New York: Association for Computing Machinery Press.

  • Jancke, G., Venolia, G. D., Grudin, J., Cadiz, J. J., & Gupta, A. (2001). Linking public spaces: Technical and social issues. In Proceedings of CHI 2001 (pp. 530–537). New York: ACM Press.

  • Johnson, E. H. (2000). Getting beyond the simple assumptions of organization impact [social informatics]. Bulletin of the American Society for Information Science, 26(3), 18–19.

  • Johnson, D. G., & Miller, K. (1997). Ethical issues for computer scientists and engineers. In A. B. Tucker, Jr. (Ed.-in-Chief), The computer science and engineering handbook (pp. 16–26). Boca Raton: CRC Press.

  • Kahn, P. H., Jr. (1999). The human relationship with nature: Development and culture. Cambridge, MA: MIT Press.

  • Kahn, P. H., Jr., & Kellert, S. R. (Eds.). (2002). Children and nature: Psychological, sociocultural, and evolutionary investigations. Cambridge, MA: MIT Press.

  • Kahn, P. H., Jr., & Turiel, E. (1988). Children’s conceptions of trust in the context of social expectations. Merrill-Palmer Quarterly, 34, 403–419.

  • Kant, I. (1964). Groundwork of the metaphysic of morals (trans: Paton, H. J.). New York: Harper Torchbooks. (Original work published 1785.)

  • Kling, R., & Star, S. L. (1998). Human centered systems in the perspective of organizational and social informatics. Computers and Society, 28(1), 22–29.

  • Kling, R., Rosenbaum, H., & Hert, C. (1998). Social informatics in information science: An introduction. Journal of the American Society for Information Science, 49(12), 1047–1052.

  • Kyng, M., & Mathiassen, L. (Eds.). (1997). Computers and design in context. Cambridge, MA: MIT Press.

  • Leveson, N. G. (1991). Software safety in embedded computer systems. Communications of the ACM, 34(2), 34–46.

  • Lipinski, T. A., & Britz, J. J. (2000). Rethinking the ownership of information in the 21st century: Ethical implications. Ethics and Information Technology, 2(1), 49–71.

  • MacIntyre, A. (1984). After virtue. Notre Dame: University of Notre Dame Press.

  • Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. The Academy of Management Review, 20(3), 709–734.

  • Millett, L., Friedman, B., & Felten, E. (2001). Cookies and web browser design: Toward realizing informed consent online. In Proceedings of CHI 2001 (pp. 46–52). New York: ACM Press.

  • Moldan, B., Billharz, S., & Matravers, R. (Eds.). (1997). Sustainability indicators: A report on the project on indicators of sustainable development. Chichester: Wiley.

  • Moore, G. E. (1978). Principia ethica. Cambridge: Cambridge University Press. (Original work published 1903).

  • Nass, C., & Gong, L. (2000). Speech interfaces from an evolutionary perspective. Communications of the ACM, 43(9), 36–43.

  • Neumann, P. G. (1995). Computer related risks. New York: Association for Computing Machinery Press.

  • Nielsen, J. (1993). Usability engineering. Boston: AP Professional.

  • Nissenbaum, H. (1998). Protecting privacy in an information age: The problem with privacy in public. Law and Philosophy, 17, 559–596.

  • Nissenbaum, H. (1999). Can trust be secured online? A theoretical perspective. Etica e Politca, 2 (Electronic journal).

  • Nissenbaum, H. (2001). Securing trust online: Wisdom or oxymoron. Boston University Law Review, 81(3), 635–664.

  • Norman, D. A. (1988). The psychology of everyday things. New York: Basic Books.

  • Northwest Environment Watch (2002). This place on earth 2002: Measuring what matters. Northwest Environment Watch, 1402 Third Avenue, Seattle, WA 98101.

  • Noth, M., Borning, A., & Waddell, P. (2003). An extensible, modular architecture for simulating urban development, transportation, and environmental impacts. Computers, Environment and Urban Systems, 27(2), 181–203.

  • Olson, J. S., & Olson, G. M. (2000). i2i trust in e-commerce. Communications of the ACM, 43(12), 41–44.

  • Olson, J. S., & Teasley, S. (1996). Groupware in the wild: Lessons learned from a year of virtual collaboration. In Proceedings of the conference on computer supported cooperative work (CSCW 96) (pp. 419–427). New York: Association for Computing Machinery Press.

  • Orlikowski, W. J., & Iacono, C. S. (2001). Research commentary: Desperately seeking the “IT” in IT research—a call to theorizing the IT artifact. Information Systems Research, 12(2), 121–134.

  • Palen, L., & Dourish, P. (2003). Privacy and trust: Unpacking “privacy” for a networked world. In Proceedings of CHI 2003 (pp. 129–136). New York: ACM Press.

  • Palen, L., & Grudin, J. (2003). Discretionary adoption of group support software: Lessons from calendar applications. In B. E. Munkvold (Ed.), Implementing collaboration technologies in industry. Heidelberg: Springer.

  • Palmer, K. (Ed.). (1998). Indicators of sustainable community. Seattle: Sustainable Seattle.

  • Phillips, D. J. (1998). Cryptography, secrets, and structuring of trust. In P. E. Agre & M. Rotenberg (Eds.), Technology and privacy: The new landscape (pp. 243–276). Cambridge, MA: MIT Press.

  • Pruitt, J., & Grudin, J. (2003). Personas: Practice and theory. In Proceedings of DUX 2003. Cambridge, MA: ACM Press.

  • Rawls, J. (1971). A theory of justice. Cambridge, MA: Harvard University Press.

  • Reeves, B., & Nass, C. (1996). The media equation: How people treat computers, television, and new media like real people and places. New York/Stanford: Cambridge University Press/CSLI Publications.

  • Riegelsberger, J., & Sasse, M. A. (2002). Face it – Photos don’t make a web site trustworthy. In Extended abstracts of CHI 2002 (pp. 742–743). New York: ACM Press.

  • Rocco, E. (1998). Trust breaks down in electronic contexts but can be repaired by some initial face-to-face contact. In Proceedings of CHI 1998 (pp. 496–502). New York: ACM Press.

  • Rosenberg, S. (1997). Multiplicity of selves. In R. D. Ashmore & L. Jussim (Eds.), Self and identity: Fundamental issues (pp. 23–45). New York: Oxford University Press.

  • Sawyer, S., & Rosenbaum, H. (2000). Social informatics in the information sciences: Current activities and emerging direction. Informing Science, 3(2), 89–95.

  • Scheffler, S. (1982). The rejection of consequentialism. Oxford: Oxford University Press.

  • Schiano, D. J., & White, S. (1998). The first noble truth of cyberspace: People are people (even when they MOO). In Proceedings of the conference of human factors in computing systems (CHI 98) (pp. 352–359). New York: Association for Computing Machinery.

  • Schneider, F. B. (Ed.). (1999). Trust in cyberspace. Washington, DC: National Academy Press.

  • Schoeman, F. D. (Ed.). (1984). Philosophical dimensions of privacy: An anthology. Cambridge: Cambridge University Press.

  • Schwartzman, Y., & Borning, A. (2007, January). The indicator browser: A web-based interface for visualizing UrbanSim simulation results. In HICSS 2007. 40th annual Hawaii international conference on system sciences, 2007 (pp. 92–92). IEEE.

  • Shneiderman, B. (1999). Universal usability: Pushing human-computer interaction research to empower every citizen (ISR Technical Report 99–72). College Park: University of Maryland, Institute for Systems Research.

  • Shneiderman, B. (2000). Universal usability. Communications of the ACM, 43(5), 84–91.

  • Simpson, J. A., & Weiner, E. S. C. (Eds.). (1989). “value, n.” Oxford English Dictionary. Oxford: Clarendon Press, 1989. OED Online. Oxford University Press. 30 May 2003. http://dictionary.oed.com/cgi/entry/00274678

  • Smart, J. J. C., & Williams, B. (1973). Utilitarianism for and against. Cambridge: Cambridge University Press.

  • Stephanidis, C. (Ed.). (2001). User interfaces for all: Concepts, methods, and tools. Mahwah: Lawrence Erlbaum Associates.

  • Suchman, L. (1994). Do categories have politics? The language/action perspective reconsidered. CSCW Journal, 2(3), 177–190.

  • Svensson, M., Hook, K., Laaksolahti, J., & Waern, A. (2001). Social navigation of food recipes. In Proceedings of the conference of human factors in computing systems (CHI 2001) (pp. 341–348). New York: Association for Computing Machinery.

  • Tang, J. C. (1997). Eliminating a hardware switch: Weighing economics and values in a design decision. In B. Friedman (Ed.), Human values and the design of computer technology (pp. 259–269). New York: Cambridge University Press.

  • The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. (1978). The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research.

  • Thomas, J. C. (1997). Steps toward universal access within a communications company. In B. Friedman (Ed.), Human values and the design of computer technology (pp. 271–287). New York: Cambridge University Press.

  • Turiel, E. (1983). The development of social knowledge. Cambridge: Cambridge University Press.

  • Turiel, E. (1998). Moral development. In N. Eisenberg (Ed.), Social, emotional, and personality development (Vol. 3 of W. Damon, Ed., Handbook of child psychology. 5th ed. pp. 863–932). New York: Wiley.

  • Turiel, E. (2002). The culture of morality: Social development, context, and conflict. Cambridge: Cambridge University Press.

  • Turkle, S. (1996). Life on the screen: Identity in the age of the Internet. New York: Simon and Schuster.

  • Ulrich, R. S. (1984). View through a window may influence recovery from surgery. Science, 224, 420–421.

  • Ulrich, R. S. (1993). Biophilia, biophobia, and natural landscapes. In S. R. Kellert & E. O. Wilson (Eds.), The biophilia hypothesis (pp. 73–137). Washington, DC: Island Press.

  • United Nations (2002). Report of the United Nations Conference on Environment and Development, held in Rio de Janeiro, Brazil, 1992. Available from http://www.un.org/esa/sustdev/documents/agenda21/english/agenda21toc.htm

  • Waddell, P. (2002). UrbanSim: Modeling urban development for land use, transportation, and environmental planning. Journal of the American Planning Association, 68(3), 297–314.

  • Waddell, P., Borning, A., Noth, M., Freier, N., Becke, M., & Ulfarsson, G. (2003). Microsimulation of urban development and location choices: Design and implementation of UrbanSim. Networks and Spatial Economics, 3(1), 43–67.

  • Weiser, M., & Brown, J. S. (1997). The coming age of calm technology. In P. Denning & B. Metcalfe (Eds.), Beyond calculation: The next 50 years of computing (pp. 75–85). New York: Springer.

  • Weizenbaum, J. (1972). On the impact of the computer on society: How does one insult a machine? Science, 178, 609–614.

  • Wiener, N. (1985). The machine as threat and promise. In P. Masani (Ed.), Norbert wiener: Collected works and commentaries (Vol. IV, pp. 673–678). Cambridge, MA: MIT Press. (Reprinted from St. Louis Post Dispatch, 1953, December 13.).

  • Winograd, T. (1994). Categories, disciplines, and social coordination. CSCW Journal, 2(3), 191–197.

  • World Commission on Environment and Development (Gro Harlem Brundtland, Chair). (1987). Our common future. Oxford: Oxford University Press.

  • Wynne, E. A., & Ryan, K. (1993). Reclaiming our schools: A handbook on teaching character, academics, and discipline. New York: Macmillan.

  • Zheng, J., Bos, N., Olson, J., & Olson, G. M. (2001). Trust without touch: Jump-start trust with social chat. In Extended abstracts of CHI 2001 (pp. 293–294). New York: ACM Press.

Acknowledgements

Value Sensitive Design has emerged over the past decade and benefited from discussions with many people. We would like particularly to acknowledge all the members of our respective research groups, along with Edward Felten, Jonathan Grudin, Sara Kiesler, Clifford Nass, Helen Nissenbaum, John Thomas, and Terry Winograd. This research was supported in part by NSF Awards IIS-9911185, IIS-0325035, EIA-0121326, and EIA-0090832.

Author information

Correspondence to Batya Friedman.

Addendum: Practical Considerations of Value Sensitive Design

1.1 Practical Value Sensitive Design Challenges

Value sensitive design (VSD) has evolved over time and proven its ability to guide developers and researchers in considering human values in the design of their systems. Although it has been applied successfully for almost 20 years in diverse projects focusing on different values, e.g. informed consent in browsers (see Sect. 4.4.1), independence for blind and deaf-blind transit riders (Azenkot et al. 2011) or security of implantable medical devices (Denning et al. 2010), VSD remains subject to critical and constructive reviews by its creators and other researchers. Learning from practice, reflecting on methodological assumptions and creating new tools and methods are crucial for its further development and its widespread acceptance by researchers and practitioners.

In their recent work, Borning and Muller (2012) identified a number of issues in VSD research that could limit the adoption of VSD in value-focused human-computer interaction (HCI) research. Related critiques have been presented earlier, e.g., by Le Dantec and colleagues (2009). In the following we discuss three points that are relevant within the scope of this book: the nature of the values considered in a project, the role of different stakeholders in the design process, and the concrete methods used in VSD.

1.1.1 Nature of Values

VSD refers to values as “what a person or group of people consider important in life” (this chapter). While this definition is deliberately broad, Friedman and colleagues also provide “a list of human values with ethical import that are often implicated in system design”. When talking about ethical or moral values, the question arises whether certain values are universal. In this chapter the authors state that VSD “builds from the psychological proposition that certain values are universally held” [p. 4], although such values may play out differently in a given culture and time.

Borning and Muller (2012) discuss the problematic nature of taking a stance on the universality of values and conclude that the existence of universal values has little bearing on the practical application of VSD. Because values play out sufficiently differently in each design context, universal designs that account for a certain value are not attainable. Instead, the values at stake must be identified and analyzed carefully, defined with respect to the particular context, and addressed with design solutions created for that context.

With regard to identifying the values at stake, an important question that has recently been discussed within the VSD community is “should VSD single out certain values as particularly worthy of consideration?”

During the evolution of VSD, the status of lists of values with ethical import has shifted: from having “a distinctive claim on resources in the design process” (Friedman and Kahn 2003) to serving as heuristics for designers. Le Dantec and colleagues’ (2009) stance on this issue is that predefined lists of values may bias researchers and designers towards those values. While expressing classifications of ethically principled values was an important step, more scaffolding is needed to guide value discovery, i.e. to uncover values as they are lived in situ, through empirical exploration relevant to the design context. After these so-called local values have been discovered, lists can be used as an analytical tool.
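
To illustrate how a published list can serve as an analytical tool after local values have been discovered, the sketch below cross-checks hypothetical, empirically discovered values against a small subset of such a heuristic list. The value names echo the VSD literature, but the working definitions, the helper function and the example data are my own illustration, not part of an official VSD toolkit.

```python
"""Illustrative sketch: using a heuristic value list as an analytical tool
*after* value discovery. The value names echo the VSD literature; the working
definitions and all example data are hypothetical."""

# Subset of a heuristic list of human values with ethical import (definitions paraphrased).
HEURISTIC_VALUES = {
    "human welfare": "people's physical, material, and psychological well-being",
    "privacy": "a claim to determine what information about oneself is shared",
    "trust": "expectations of goodwill and non-exploitation between parties",
    "autonomy": "people's ability to decide, plan, and act as they see fit",
    "informed consent": "consent given with adequate disclosure and comprehension",
    "accountability": "traceability of actions to responsible people or institutions",
}


def compare_with_heuristic_list(local_values: dict[str, str]) -> dict[str, list[str]]:
    """Relate locally discovered values (name -> contextual definition) to the list."""
    overlap = [name for name in local_values if name in HEURISTIC_VALUES]
    local_only = [name for name in local_values if name not in HEURISTIC_VALUES]
    not_yet_considered = [name for name in HEURISTIC_VALUES if name not in local_values]
    return {"overlap": overlap, "local_only": local_only, "not_yet_considered": not_yet_considered}


if __name__ == "__main__":
    # Hypothetical outcome of an empirical value-discovery step with stakeholders.
    discovered = {
        "safety": "freedom from physical harm while meeting basic needs",
        "privacy": "control over who learns where one sleeps or feels safe",
        "community": "being connected to peers and trusted adults",
    }
    for group, names in compare_with_heuristic_list(discovered).items():
        print(f"{group}: {', '.join(names) if names else '-'}")
```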

In my opinion, heuristic lists could also be beneficial from the start, especially for practitioners with limited time at hand, as the lists highlight important values and reduce the risk that these are overlooked.

Borning and Muller (2012) suggest that lists presented in the literature should be contextualized by stating who wrote them and for what purpose. Careful additional empirical investigations can surface stakeholder values that the design team did not initially consider. In this respect, an important distinction is to be made among explicitly supported values (i.e., ones that the system is designed to support), stakeholder values (i.e., ones that are important to some but not necessarily all of the stakeholders), and designer values (i.e., ones that the system designers hold).
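
To make this three-way distinction concrete, the following minimal sketch records values together with the roles they play in a project. The class names, fields and example entries are my own hypothetical illustration, not a prescribed VSD schema.

```python
"""Sketch of the distinction between explicitly supported, stakeholder and
designer values. Classes, fields and example entries are illustrative only."""

from dataclasses import dataclass, field
from enum import Enum, auto


class ValueRole(Enum):
    EXPLICITLY_SUPPORTED = auto()   # the system is explicitly designed to support it
    STAKEHOLDER = auto()            # important to some, but not necessarily all, stakeholders
    DESIGNER = auto()               # held by the system designers themselves


@dataclass
class ProjectValue:
    name: str
    definition: str                                   # working definition for this design context
    roles: set[ValueRole] = field(default_factory=set)
    held_by: set[str] = field(default_factory=set)    # stakeholder groups that voiced the value


def explicitly_supported(values: list[ProjectValue]) -> list[ProjectValue]:
    """Values the system commits to supporting (e.g. for writing design goals)."""
    return [v for v in values if ValueRole.EXPLICITLY_SUPPORTED in v.roles]


if __name__ == "__main__":
    values = [
        ProjectValue("fairness", "indicators do not favour one neighbourhood over another",
                     {ValueRole.EXPLICITLY_SUPPORTED}, {"modellers", "public"}),
        ProjectValue("walkable neighborhoods", "short, safe pedestrian routes to daily destinations",
                     {ValueRole.STAKEHOLDER}, {"residents"}),
        ProjectValue("open access to the model", "simulation code and assumptions are publicly inspectable",
                     {ValueRole.DESIGNER}, {"research team"}),
    ]
    for v in explicitly_supported(values):
        print(f"explicitly supported: {v.name}: {v.definition}")
```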

1.1.2 Role of Stakeholders

When defining whose values are accounted for, the stakeholder concept becomes the focus of attention. Stakeholders in VSD are not only the clients or end-users, but all people involved directly or indirectly in creating or using the new technology, or affected by it. Therefore, developers, designers, researchers, users and other people can be regarded as stakeholders. Borning and Muller (2012) emphasize that giving voice to the participants in VSD studies, and clearly expressing the voice of the researchers in publications, is important to help others understand the setting in which VSD was carried out and how the ethical analysis took place. In line with the motivation of Participatory Design (Schuler and Namioka 1993), sharing responsibility and power among stakeholders and designers/researchers is beneficial for investigating value questions. To this end, contextualized VSD methods could, for example, include ethical analysis by stakeholders in situ, thereby letting them act as “lay ethicists”.

1.1.3 Concrete Methods

VSD does not prescribe concrete methods for empirical investigations, but states: “the entire range of quantitative and qualitative methods used in social science research is potentially applicable here, including observations, interviews, surveys, experimental manipulations, collection of relevant documents, and measurements of user behavior and human physiology.” (see this chapter)

While Borning and Muller (2012) suggest that researchers examine how well suited each of these methods is to working with values, Le Dantec and colleagues (2009) propose that more specific methods are needed to capture values as lived experiences and to give stakeholders the power to express and share their understanding of local values.

In my opinion, the question of concrete methods for VSD is closely related not only to the methods’ ability to facilitate participation, but also to the competences within a design team. An important question is “does a design team need to include a social scientist, or someone trained in the methods above, to be able to carry out VSD?” In reports of past VSD projects the researchers’ backgrounds and expertise are not always transparent, and VSD does not offer a concrete proposition on the composition of design teams. Given that VSD has until now mainly been carried out by HCI researchers, a combination of knowledge of technology engineering and social science research is often present. Nonetheless, conceptual analyses of values with ethical import may also benefit from the involvement of professional philosophers or ethicists. In industry practice, however, expertise in ethics cannot always be easily acquired. Design teams may benefit from value advocates, but reports from the field show that in a business-oriented setting value advocates may face challenges (Manders-Huits and Zimmer 2009). Their role has to be considered carefully, e.g., with respect to how much leadership they take and how other design team members receive such leadership.

Another way to empower technology developers who are untrained in social science or ethics is to provide specific tools or techniques for deliberately considering and accounting for values in design. Since the first publication of this chapter, VSD researchers have developed several methods for value discovery and definition (e.g. Le Dantec et al. 2009; Woelfer et al. 2011) and for the consideration of the broader and long-term socio-technical context (e.g. Nathan et al. 2007; Friedman and Hendry 2012), which can be used by researchers, practitioners and other stakeholders. Some of these methods are elaborated on below.

1.1.4 Summary of Practical Questions

Summarizing the discussion above, I have compiled a list of practical questions to consider when using VSD in practice.

  • Which values are important in a given design case? Whose values are they and how are they defined with respect to the given context?

  • Which methods are suited for conceptual/empirical/technical investigations in VSD? Which methods, in particular, are suited to discover, elicit and define values?

  • What kind of social science knowledge or skill set is needed to engage in VSD?

  • How can methods give power and responsibility to stakeholders and make them “lay” ethicists?

The remainder of the addendum presents an ongoing design case that exemplifies the use of recently developed VSD methods and partially addresses these practical questions.

1.2 VSD Case: Safety for Homeless Young People

1.2.1 Socio-technical Context

The socio-technical context for the design case consists of homeless young people, mobile technologies, and safety. People generally want to be safe, know that their families are safe, and help others to be safe. For homeless young people, life can be very difficult: they must secure basic needs such as safety, food, and shelter, while sometimes also managing physical and mental health problems. In their struggle to meet these needs they encounter unsafe situations, often with civility laws being implicated (Woelfer and Hendry 2011).

Across social classes, mobile phones are becoming essential for safety, as they are carried close to people’s bodies at all times and can be used in emergency situations to connect to others. At the same time, “overreliance on its safety functions may undermine a person’s resilience” (Woelfer et al. 2011). For homeless young people, mobile phones offer safety benefits ranging from functionality (e.g. calling or texting in unsafe situations) to form factor (e.g. held in a particular way, a phone may resemble a gun). However, mobile phones may also create unsafe situations, e.g. if homeless young people trespass at secluded power outlets in order to recharge their phones. Thus, the use of mobile technology by homeless young people and its relation to safety is multi-faceted and a topic worth investigating. The central question in the ongoing research is “How can mobile technology be designed to keep homeless young people safe?”

This design case is representative of recent developments in theory and method within VSD. In particular, it engages in design work concerning multiple stakeholder groups with different perspectives and values, as well as value tensions within and among individuals and groups. Further, it shows how empirical research on current conditions and co-design activities for envisioning the future can be integrated.

1.2.2 Stakeholder Analysis

Unlike many other design and engineering methodologies, VSD is a holistic approach to the design and introduction of new technologies that considers not only primary (and perhaps secondary) users, but also other stakeholders. VSD makes a deliberate distinction between direct stakeholders and indirect stakeholders, and thereby requires explicit consideration of people who are affected by the system but do not use it. This allows for an early analysis of the benefits and harms of the new technology for the whole social environment in which it is situated.

In the given design case the homeless young people were considered the primary direct stakeholders, although the researchers did not rule out the emergence of another relevant direct stakeholder group during the research and later design phases. Three indirect stakeholder groups were identified: service providers, police officers, and community members.
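
A simple way to record such a stakeholder analysis is sketched below. The stakeholder groups are taken from the case description; the record structure and the benefit/harm notes are my own illustrative additions.

```python
"""Sketch of a stakeholder-analysis record for the design case. The groups are
taken from the case; the record structure and notes are illustrative."""

from dataclasses import dataclass, field


@dataclass
class StakeholderGroup:
    name: str
    direct: bool                                  # does the group interact with the technology itself?
    anticipated_benefits: list[str] = field(default_factory=list)
    anticipated_harms: list[str] = field(default_factory=list)


CASE_STAKEHOLDERS = [
    StakeholderGroup("homeless young people", direct=True,
                     anticipated_benefits=["can call or text for help in unsafe situations"],
                     anticipated_harms=["trespassing risks when recharging at secluded outlets"]),
    StakeholderGroup("service providers", direct=False),
    StakeholderGroup("police officers", direct=False),
    StakeholderGroup("community members", direct=False),
]

if __name__ == "__main__":
    for group in CASE_STAKEHOLDERS:
        kind = "direct" if group.direct else "indirect"
        print(f"{kind:8s} {group.name}")
```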

1.2.3 Value Analysis & Value Tensions

As briefly explained above, the use of mobile technology brings both benefits and obstacles for homeless young people, with regard to their own needs and values and in their interactions with other stakeholders (e.g. other urban dwellers). Therefore, value tensions on three levels were anticipated in the project: (1) within an individual, (2) between an individual and another stakeholder group, and (3) between stakeholder groups. In order to gain a detailed understanding of how these tensions play out in real-life situations, several methods (described in the following subsections) were used. It is important to note that VSD does not require the values of interest to be defined at the outset of the project. In the given design case, the researchers explicitly did not define their conception of safety, but instead used methods that were open-ended yet gave enough structure (through precise tasks) to guide the participants’ reflection. Using verbal and visual methods, a rich set of data was elicited that reveals the nuances of stakeholders’ perceptions of safety and its situational nature.
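
The three anticipated levels of tension can be captured in a small record structure for later analysis, as in the following sketch; the level names and the example tension are my own illustration, not data from the study.

```python
"""Sketch for recording value tensions at the three anticipated levels.
Level names and the example tension are illustrative only."""

from dataclasses import dataclass
from enum import Enum


class TensionLevel(Enum):
    WITHIN_INDIVIDUAL = "within an individual"
    INDIVIDUAL_VS_GROUP = "between an individual and another stakeholder group"
    GROUP_VS_GROUP = "between stakeholder groups"


@dataclass
class ValueTension:
    level: TensionLevel
    values: tuple[str, str]    # the two values pulling in different directions
    situation: str             # concrete situation in which the tension surfaces


if __name__ == "__main__":
    example = ValueTension(
        level=TensionLevel.WITHIN_INDIVIDUAL,
        values=("safety", "privacy"),
        situation="carrying a phone helps in emergencies, but sharing one's "
                  "location may reveal where a young person sleeps",
    )
    print(f"[{example.level.value}] {example.values[0]} vs. {example.values[1]}: {example.situation}")
```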

1.2.4 Value Sketches

Sketching is often used in design work to uncover knowledge of “physical and conceptual structure” (Woelfer et al. 2011). Value sketches in particular are meant to surface participants’ values. In this project, value sketches were especially useful for uncovering situated perceptions of safety, as these are often tied to time and location. For example, one could feel safe in a specific location during the day but not during the night. Therefore, participants were given two identical maps of their living area, one for daytime and one for night-time activities, and asked to use red and green (graphic and textual) marks to represent their perceptions of the safety of different regions. Participants used different marks to denote safe and unsafe areas, spots and paths.

Through detailed coding and analysis of the sketches, Woelfer and colleagues obtained a detailed picture of time- and location-sensitive perceptions of place, mobility and safety for each stakeholder group.
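
The kind of aggregation that such coding enables can be sketched as follows, assuming a much-simplified, hypothetical coding of marks as (region, time of day, safe/unsafe); the actual coding scheme used by Woelfer and colleagues is considerably richer.

```python
"""Sketch of aggregating coded value-sketch marks. The mark format
(region, time of day, safe/unsafe) is a simplified, hypothetical stand-in
for the study's actual coding scheme."""

from collections import Counter

# Each coded mark: (region, "day" or "night", True for a green/safe mark, False for a red/unsafe one).
coded_marks = [
    ("library block", "day", True),
    ("library block", "night", False),
    ("bus terminal", "day", True),
    ("bus terminal", "night", False),
    ("alley behind market", "day", False),
    ("alley behind market", "night", False),
]


def safety_profile(marks):
    """Count safe vs. unsafe marks per (region, time-of-day) cell."""
    counts = Counter()
    for region, time_of_day, safe in marks:
        counts[(region, time_of_day, "safe" if safe else "unsafe")] += 1
    return counts


if __name__ == "__main__":
    for (region, time_of_day, label), n in sorted(safety_profile(coded_marks).items()):
        print(f"{region:22s} {time_of_day:5s} {label:6s} x{n}")
```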

1.2.5 Stakeholder Generated Value Scenarios

Scenarios have a long-standing tradition in scenario-based design (Rosson and Carroll 2003). These scenarios often tell short-term, functionality-focused stories about how the designers intend the system to be used by direct stakeholders. While they are a powerful tool for analyzing aspects of functionality and usability, such scenarios fall short in portraying the long-term systemic effects that a new technology has on the social and political environment.

Value scenarios (Nathan et al. 2007) are a VSD tool that combines the narrative power of traditional scenarios in design processes with five key elements that help designers engage with the (ethical) issues of long-term and emergent use of new technology: indirect stakeholders (in addition to direct ones), pervasiveness (effects of the widespread adoption of the technology), time (long-term effects), systemic effects, and value implications. By describing possible positive and negative effects, and the value tensions that come along with widespread adoption, value scenarios support technologists and policy makers in considering the creation and introduction of new technologies.
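
The five key elements can be thought of as a simple template that a scenario author fills in alongside the narrative, as in the sketch below; the field names paraphrase Nathan et al.’s (2007) elements and the example content is invented.

```python
"""Sketch of a value-scenario template. The fields paraphrase the five key
elements of Nathan et al. (2007); the example content is invented."""

from dataclasses import dataclass, field


@dataclass
class ValueScenario:
    title: str
    narrative: str                                                    # the story itself
    direct_stakeholders: list[str] = field(default_factory=list)
    indirect_stakeholders: list[str] = field(default_factory=list)   # element 1: indirect stakeholders
    pervasiveness: str = ""                                           # element 2: effects of widespread adoption
    time_horizon: str = ""                                            # element 3: long-term effects
    systemic_effects: str = ""                                        # element 4: systemic effects
    value_implications: list[str] = field(default_factory=list)      # element 5: value implications


if __name__ == "__main__":
    scenario = ValueScenario(
        title="Back-up phone at the drop-in centre",
        narrative="A young person borrows a charged back-up phone at a service "
                  "provider's station after her own phone is stolen.",
        direct_stakeholders=["homeless young people"],
        indirect_stakeholders=["service providers", "police officers"],
        pervasiveness="every drop-in centre in the city offers the service",
        time_horizon="several years of routine use",
        systemic_effects="service providers become a hub in young people's safety network",
        value_implications=["safety", "trust", "community"],
    )
    print(scenario.title, "->", ", ".join(scenario.value_implications))
```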

While value scenarios were originally intended for early strategic planning of technology projects or as touchstones for policy-making discussions, the design case at hand provides a new way to use them. In this design case the stakeholders themselves wrote value scenarios. They were prompted to write a true or fictional story of how a mobile phone could keep a homeless young person safe. In this way value scenarios became tools in the design process for eliciting stakeholders’ views and experiences. One benefit of writing fictional stories is that participants could mask their identity while still providing perspectives and ideas that would be too risky to portray openly. One example of such a fictional story is the following:

Once upon a time there was three little pigs, one lived in a house, one lived on the street, and the last one lived in a squat. One day a big bad wolf was looking for a squatter, the big bad wolf was out to get all the little pigs. The first little pig called the second pig, and he found the third pig through word of mouth. Thank cellphone.

In a recent iteration of VSD investigations in this project the stakeholder-generated scenarios (Fig. 4.4) were utilized in co-design activities with homeless young people, police and service providers (Yoo et al. 2013).

Fig. 4.4 Value scenarios written by stakeholders and used in a design iteration

1.2.6 Envisioning Cards

Creating awareness of and considering values and other systemic effects of new technology can be difficult – for designers and technologists, and also for other stakeholders engaged in co-design activities. The Envisioning Card toolkit provides a promising means of putting technology development in a broader, long-term socio-technical perspective by highlighting the “diversity, complexity and subtlety of human affairs, as well as the interconnections among people and technologies” (Friedman and Hendry 2012).

Envisioning Cards (see Fig. 4.5) incorporate elements similar to those of value scenarios: stakeholders, time, values and pervasiveness. The aim of the card toolkit is to raise awareness of long-term and systemic issues in technology design. To this end, each card has an evocative image and title on one side, and the envisioning criterion, a theme description and a concrete design activity on the back.

Fig. 4.5 Envisioning Card front (left) and back (right) (source: VSD Lab, University of Washington (UW); permission to reprint the image and copyright remain with UW. See also: http://www.envisioningcards.com)

In the context of the project the cards were used as an iteration step in a co-design activity with homeless young people, police and service providers. After creating 3D prototypes to keep homeless youth safe, participants were asked to select an Envisioning Card, consider its theme and refine their designs if needed (Yoo et al. 2013). The Envisioning Cards stimulated creative exploration of the design space. They helped participants “to reframe technical problems, to reconsider technical aspects of their designs, and generally to catalyze their technical imaginations” (Friedman and Hendry 2012).
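
The following sketch shows one way to represent a card and the refinement prompt used in such a co-design step. The front/back structure follows the description above, but the example card texts are invented and are not actual cards from the deck.

```python
"""Sketch of an Envisioning Card record and a refinement prompt for the
co-design step. The front/back structure follows the text above; the card
contents below are invented, not actual cards from the deck."""

from dataclasses import dataclass
import random


@dataclass
class EnvisioningCard:
    title: str          # front: evocative title (the physical card also carries an image)
    criterion: str      # back: envisioning criterion (stakeholders, time, values or pervasiveness)
    theme: str          # back: short theme description
    activity: str       # back: concrete design activity


DECK = [
    EnvisioningCard("Adoption far from home", "pervasiveness",
                    "widespread use across places and cultures",
                    "List three ways your design changes if it is used in another city or country."),
    EnvisioningCard("People who never touch it", "stakeholders",
                    "people affected by the system without using it",
                    "Name two indirect stakeholders and one effect of the design on each."),
]


def refinement_prompt(card: EnvisioningCard, prototype: str) -> str:
    """Phrase the co-design instruction: consider the card's theme, then refine the design."""
    return (f"Considering '{card.title}' ({card.criterion}: {card.theme}), "
            f"how would you refine '{prototype}'? Suggested activity: {card.activity}")


if __name__ == "__main__":
    card = random.choice(DECK)
    print(refinement_prompt(card, "back-up phone at the service provider's station"))
```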

Overall, the Envisioning Cards are a versatile tool that can be used in many design processes, including ideation, co-design, heuristic evaluation and critique, or as educational tools. The cards are self-explanatory and open to different types of use, which makes them equally accessible to designers, technologists and end-users and supports them in ethical reflection. This is of particular interest for design situations in which ethical or social science knowledge is lacking within the design team.

1.2.7 Reflection on the Use of VSD

Reflecting on the project as carried out so far, I would like to highlight a few aspects that can be attributed to the use of VSD. As the project is ongoing, there is no final product to inspect and no definite answer to the initial research question: “how can mobile technology be designed to keep homeless young people safe?” Still, the VSD process has already uncovered several interesting results that common engineering or user-centered design (UCD) approaches would have missed. The latter approaches would have focused solely on the homeless young people, as they are defined as the primary users of the technology. Focusing only on this stakeholder group, however, would have prevented the researchers from gaining insights from the police and service providers. These insights, and specifically the prototype designs of these groups, provided many new opportunities to design not simply a new device that promotes safety, but also the surrounding social network and the necessary socio-technical solutions (e.g. a power supply or back-up phone at a service provider’s station).

The VSD methods further provided rich insights into how the different stakeholders perceive safety, instead of relying merely on a designer’s definition of the concept. It became clear that for the homeless young people, safety is linked to basic needs and to other values, such as being part of a community and affordability. In addition, the complex interplay of safety, time and place, as well as mobile phone use (leading to increased or reduced safety), emerged from the value sketches and scenarios. Although we cannot be certain, I believe such complexity would be hard to obtain through standard interviews. Furthermore, as mentioned earlier, face-to-face interviews do not allow participants to mask their identities and may therefore be less suited to obtaining sensitive data from this stakeholder group. Last, it was observed that the Envisioning Cards supported stakeholders in reconsidering and adapting their ideas (in the form of prototypes) in the light of long-term consequences, pervasive use and other stakeholder groups. A standard usability evaluation would have missed such aspects.

1.3 Discussion

In the following, we review the design case to address the questions posed in Sect. 4.8.1.4.

The presented design case set out to investigate the role of mobile technology and safety for homeless young people. While safety was a central value in the project, the researchers deliberately avoided defining safety at the outset of the project and allowed other values to emerge. The various methods presented above helped to define the nuanced perceptions of safety held by the different stakeholder groups. The value sketches and value scenarios in particular highlighted that, for the homeless young people, safety is fundamentally situational. Whether a place, another stakeholder group, or the use of a mobile phone is considered safe or unsafe depends on the situational (e.g. temporal) context. Ongoing co-design work with homeless young people and service providers revealed further dimensions of safety and its relation to other basic needs.

The methods presented in this addendum can be used or combined for different types of VSD investigations. While the focus above was on gathering empirical data about perceptions of safety and mobile technology, more recent work used parts of these data (i.e. the value scenarios) in a technical investigation of concrete designs for the envisioned mobile technology.

The HCI researchers who carried out this work have extensive backgrounds in VSD and co-design, and experience in working with the homeless community and service providers. HCI and VSD researchers are trained in social science methods and have affinities towards ethics and technology design. In terms of the taxonomy given in the introduction of this book, VSD can therefore be considered to have a joint project organization, i.e. the ethical assessment is done from within the project. Ideally, VSD projects would include professionals trained in ethics, the social sciences, computer science/engineering and design.

However, researchers and developers without such expertise at hand should not refrain from using VSD. Especially in industry practice, where it is equally important to design in a value sensitive manner, it cannot be assumed that a design team is sufficiently trained in ethics or the social sciences. One way to address this shortcoming may be to have consultants or value advocates from outside the project provide these skills. Another would be to develop more specific toolkits to trigger value sensitive deliberation and discussion within the design team, and tools to work out value definitions and tensions with stakeholders. The Envisioning Cards provide one example of such a toolkit.

Further, Borning and Muller (2012) have introduced the term “lay” ethicists, i.e. stakeholders who act as ethicists in the given design context. While “lay” ethicists cannot substitute for professional ethicists or social scientists, providing stakeholders with specific methods that allow them to consider and discuss ethical design questions in situ is a step towards accounting for values in design. The methods presented above trigger thinking about one’s values, and I believe that value scenarios in particular, or related methods such as design noir (Dunne and Raby 2001), can be used to make stakeholders more aware of the (unforeseen, long-term) ethical issues at stake. More work needs to be done in VSD research to create common ground between designers, researchers, practitioners and other stakeholders in understanding each other’s perspectives and situated value definitions.

For methods that allow stakeholders to voice their views safely, it is important that they provide means for controlling the precision and ambiguity of the data they elicit. In the project presented here, safety is a sensitive topic for homeless young people, as it may be linked to embarrassing or dangerous experiences. Discussing such experiences may be uncomfortable for the participants or even put them at risk; as one homeless person mentioned, “letting people know where I feel safe, makes me feel unsafe” (Woelfer et al. 2011). Such effects were mitigated by the ambiguity of the fictional scenarios and the variable precision of the value sketches.

1.4 Conclusions and Future Work

In this addendum I have pointed out recent developments within VSD and clarified how VSD fits into the taxonomy presented in the introduction of this book. As an evolving framework, its assumptions and practicability have been under ongoing critical review. This has led to adaptations (e.g. value lists as heuristics) and to new methods for VSD investigation. By elaborating on these methods in the context of a recent VSD case and addressing practical questions of VSD, I have extended the original publication of this chapter.

Considering current paradigm shifts in innovation towards co-creation in several public sectors (e.g. healthcare), aimed at solutions that integrate technology into community-based and social practice, I believe VSD will play an important role as a framework for envisioning and designing long-term socio-technical change. To facilitate this role, HCI researchers will continue to evolve VSD and put more focus on multi-lifespan systems, stakeholder participation and shared value investigation. An essential part of supporting widespread VSD practice is the early education of researchers and practitioners in various fields, which is being addressed in academic courses and workshops at major research venues (e.g., Detweiler et al. 2012).

1.5 References

Azenkot, S., Prasain, S., Borning, A., Fortuna, E., Ladner, R. E., & Wobbrock, J. O. (2011). Enhancing independence and safety for blind and deaf-blind public transit riders. In Proceedings of CHI 2011. New York: ACM Press.

Borning, A., & Muller, M. (2012). Next steps for value sensitive design. In Proceedings of the 2012 annual conference on human factors in computing systems (pp. 1125–1134). New York: ACM Press.

Denning, T., Borning, A., Friedman, B., Gill, B., Kohno, T., & Maisel, W. (2010). Patients, pacemakers, and implantable defibrillators: Human values and security for wireless implantable medical devices. In Proceedings of CHI 2010 conference on human factors in computing systems (pp. 917–926). New York: ACM Press.

Detweiler, C., Pommeranz, A., & Stark, L. (2012). Methods to account for values in human-centered computing. In CHI '12 extended abstracts on human factors in computing systems (CHI EA '12) (pp. 2735–2738). New York: ACM Press.

Dunne, A., & Raby, F. (2001). Design noir: The secret life of electronic objects. Basel: Birkhäuser.

Friedman, B., & Hendry, D. G. (2012). The envisioning cards: A toolkit for catalyzing humanistic and technical imaginations. In Proceedings of the 2012 annual conference on human factors in computing systems (pp. 1145–1148). New York: ACM Press.

Friedman, B., & Kahn, P. H., Jr. (2003). Human values, ethics, and design. In J. A. Jacko & A. Sears (Eds.), The human-computer interaction handbook (pp. 1177–1201). Mahwah: Erlbaum.

Le Dantec, C. A., Poole, E. S. & Wyche, S. P. (2009). Values as lived experience: Evolving value sensitive design in support of value discovery. In Proceedings of the 27th international conference on Human factors in computing systems (CHI ‘09) (pp. 1141–1150). New York: ACM Press.

Manders-Huits, N., & Zimmer, M. (2009). Values and pragmatic action: The challenges of introducing ethical intelligence in technical design communities. International Review of Information Ethics, 10, 37–44.

Nathan, L. P., Klasnja, P. V., & Friedman, B. (2007). Value scenarios: A technique for envisioning systemic effects of new technologies. In CHI ‘07 extended abstracts on human factors in computing systems (pp. 2585–2590). New York: ACM Press.

Schuler, D., & Namioka, A. (Eds.). (1993). Participatory design: Principles and practices. Hillsdale: Lawrence Erlbaum Associates.

Woelfer, J. P., & Hendry, D. G. (2011). Homeless young people and technology: Ordinary interactions, extraordinary circumstances. Interactions, 18(6), 70–73.

Woelfer, J. P., Iverson, A., Hendry, D. G., Friedman, B., & Gill, B. T. (2011). Improving the safety of homeless young people with mobile phones: Values, form and function. In Proceedings of the 2011 annual conference on human factors in computing systems (pp. 1707–1716). New York: ACM Press.

Yoo, D., Huldtgren, A., Woelfer, J. P., Hendry, D. G., & Friedman, B. (2013). A value sensitive action-reflection model: Evolving a co-design space with stakeholder and designer prompts. In Proceedings of the SIGCHI conference on human factors in computing systems (CHI ’13, pp. 419–428). New York: ACM.

Copyright information

© 2013 Springer Science+Business Media Dordrecht

Cite this chapter

Friedman, B., Kahn, P.H., Borning, A., Huldtgren, A. (2013). Value Sensitive Design and Information Systems. In: Doorn, N., Schuurbiers, D., van de Poel, I., Gorman, M. (eds) Early engagement and new technologies: Opening up the laboratory. Philosophy of Engineering and Technology, vol 16. Springer, Dordrecht. https://doi.org/10.1007/978-94-007-7844-3_4
