1 Introduction

Technology is no longer just about technology—now it is about living. Technology is central to our health, education, relationships, work lives, entertainment, finances, and more. Technology is so deeply embedded into our everyday lives that we take it for granted. It is a powerful force, but its impact depends on how we design and use it. The power of technology has spurred advances in medical treatments, transformed agricultural production, and improved transportation safety but has also diminished mental health, stoked social division, and raised privacy and surveillance concerns. Given the power and centrality of technology, the question becomes, “How can we have ethical technology that creates a better life and a better society?” (Winter, 2019)

The answer is that technology must become truly human-centered. It must support human values and fulfill individual needs while also strengthening the social and cultural fabric of society. Shifting from a technology focus to a sociotechnical systems focus will require a radical transformation in how we think about, develop, and use technologies. To truly meet the needs of humans will require that we reconceptualize technology as a tool and as a component within a complex interacting system. Further, if we are to be successful, we must embrace our responsibility to understand and help to continually manage the complex multi-component systems that we influence when we create and use technologies. This will not be easy, but progress is possible if we make a long-term commitment to making this shift to digital humanism.

To better understand the changes that will be required, we identify the origin of some crucial assumptions underlying the current stance toward information technology that must be overturned. To do so, this chapter traces the evolution of information technology from “human-adjacent” to “human-aware” to “human-centered.” It then outlines some of the changes needed to create truly human-centered technologies that focus on human needs within complex sociotechnical systems. These include changing how we think and talk about technologies to avoid category errors, moving to a model of participatory co-design, and building in avenues for feedback and adjustment as we use and manage them.

2 Human-Adjacent Computing

Early computing was “human-adjacent.” From about 1950 to 1980, information technologies were expensive and were primarily developed and adopted by large organizations (Winter et al., 2014). These organizations had the resources needed to create new technologies; they included government agencies (especially the military), regulated monopolies (especially in communications), and private sector corporations. The motivation was to improve the organization’s efficiency, effectiveness, and, for corporations, profitability. Large mainframe computers radically improved targeting for the military, provided circuit switching for telephony, and processed corporate payrolls (Hevner & Berndt, 2000). The systems were designed and developed by organizations to meet their needs, and their value was evaluated relative to their impact on the organization (see Fig. 1).

Fig. 1 Human-adjacent technology: concentric circles with organizational needs at the center, enveloped by the technology and, outermost, its impact on the organization

The few people who interacted with these mainframe computers did so at a distance as operators in machine rooms, as programmers at dumb terminals, and as the eventual recipients of the output (Fig. 2).

Fig. 2 ENIAC (photo of a machine room). https://en.wikipedia.org/wiki/History_of_computing_hardware#/media/File:Eniac.jpg

Block diagrams of computers that were popular at the time do not even include a representation of humans as relevant to the system (see Fig. 3).

Fig. 3 Block diagram of a computer: an input device feeds a central processing unit and memory unit, which feed an output device

In short, early mainframe computers were conceptualized as self-contained technologies made up of technical components and housed in a machine room. A computer was a closed system that received input from its environment and returned output to that environment. Input and output were managed using devices that were specifically called “peripherals” because they were attached to the computer and controlled by it but were not a part of the computer itself. People were often involved in providing these inputs and receiving the outputs, but they were not considered part of the computer or central to its functioning. Meeting human needs was not the focus of these machines.
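To make this closed-system conceptualization concrete, here is a minimal sketch, in modern Python purely for readability, of the architecture in Fig. 3: the computer is modeled as a function from inputs to outputs, with processing and memory inside the boundary and people appearing only at the edges. This is our illustration, not the chapter’s; the payroll data and names are hypothetical.

```python
# A minimal sketch of the closed-system view in Fig. 3 (illustrative only).
# The "computer" is a function from inputs to outputs; people appear nowhere
# inside it, only at the edges as suppliers of input and readers of output.

def computer(inputs: list[str]) -> list[str]:
    """Input device -> central processing unit + memory unit -> output device."""
    memory: list[str] = []             # memory unit
    for record in inputs:              # records arrive from the input device
        memory.append(record.upper())  # the CPU transforms each record
    return memory                      # results leave via the output device

# Humans sit entirely outside the system boundary.
payroll_cards = ["alice 40h", "bob 38h"]  # hypothetical punched-card data
print(computer(payroll_cards))            # ['ALICE 40H', 'BOB 38H']
```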

3 Human-Aware Computing

In the early 1980s, computers became smaller and more affordable. By the late 1980s, we saw the rise of personal computers, client-server architectures, and local area networks (LANs). This dramatic expansion meant that ordinary people, not just trained operators and programmers, now had to use computers. Computing became “human-aware” as it became dependent upon human adoption and use for success.

At the time, businesses started to embrace innovations that relied on these new information technologies, such as office automation, total quality management (Deming, 1982), business process reengineering (Hammer & Champy, 1993), and others (Hevner & Berndt, 2000). They developed their own internal networked information systems, often partnering with manufacturers such as IBM or DEC to do so. They adopted general-use applications like the Microsoft Office suite, which was often bundled with the hardware. They also created their own custom applications (either in-house or through contracts with software development companies) when off-the-shelf options were inadequate. The focus was still on meeting the organization’s needs, but the success of these emerging business initiatives depended on getting employees to use the new technologies in the workplace.

We also saw a rise in the use of computers for socializing, learning, shopping, and entertainment (see Fig. 4). Consumers could access networks through Internet service providers, and companies started creating applications that they hoped would be used at home. CompuServe and America Online (later AOL) were early entrants in this market. Applications included home versions of some of the same software being used in businesses, such as email, word processors, and electronic spreadsheets. However, there were also more entertainment-focused applications, such as online games. Again, the success of these applications and their developers’ profits depended on getting large numbers of people to use them.

Fig. 4 Human-aware technology: concentric circles with organizational needs at the center, enveloped by the technology, the interface (HCI), the people who use it, and, outermost, the impact on the organization

Becoming human-aware added complexity to the technology development process. Information systems had to become more user-friendly, and attention had to be paid to interface design and human-computer interaction (HCI) (Card, 1983). We saw rapid advances in both input and output devices, including the two- and three-button mouse, trackballs, touchpads, touchscreens, flatscreens, the shift to graphical user interfaces (GUIs), voice user interfaces (VUIs), and more. Usability testing rose in importance, but the development process was still usually driven by the organization’s needs for efficiency, effectiveness, and/or profitability. Organizations decided what programs and applications to develop. Users’ feedback on interfaces might be sought as part of usability testing, but this development phase was often limited in the rush to market and further curtailed as projects fell behind schedule. Many applications were still released and implemented with interfaces that users found cryptic and frustrating even when using them to meet the organization’s goals. It was often impossible to use them to meet human goals that did not align with those of the organization.

Fundamentally, the goal of human-aware technology was still to reduce costs and/or generate profits for organizations, and the value of information systems was still evaluated relative to their impact on the organizations (see Fig. 4).

However, the systems themselves looked wildly different from the early mainframe computers, and they moved from locked machine rooms to our desktops and laptops. Depictions of information systems also shifted to use case models and diagrams that included the humans interacting with the computer. Yet these humans (called users) were still shown as outside of the computer (see Fig. 5).

Fig. 5 Use case model of a computer system: the user is linked to the register, login, search video, and retrieve results use cases, while the admin is linked to the upload video use case

Personal computing expanded our conceptualization of a computer to encompass a more complex multicomponent technical system. People were seen as central to organizational success, but calling them users highlighted that their use of the system was what the organization most valued about them. Thus, considerable attention was paid to enhancing computer system peripherals and interfaces to encourage proper use. The computer was still a closed system, and people’s needs were not the focus of these machines. Developers might have preferred to engineer people out of the system entirely, but when that was not possible, it was vital that people provided the correct inputs and used the outputs as the organization intended.

4 Human-Centered Computing

Given the power and centrality of technology in our lives, meeting human goals requires that information systems become truly “human-centered” (see Fig. 6). Since the turn of the twenty-first century, there has been a revolution in information technology development that can support this shift to digital humanism, but it will require a transformation in how we think about, design, and use technologies.

Fig. 6 Human-centered technology: concentric circles with people’s needs at the center, enveloped by community and technology and, outermost, the impact on people

Originally, technology development was enormously expensive, so it was the purview of large organizations, but this is no longer the case. Increasingly, technical capabilities are readily available “as a service” with the rise of shared robust infrastructures (e.g., AWS), generative platforms for innovation (e.g., Apple’s Mac, iPod, iPad, iPhone, and iTunes ecosystem), and free and open-source software (FOSS). These new flexible infrastructures and applications exist independent of a single organization. Costs are dramatically reduced, lowering the barriers to adoption and use. Traditional businesses are making use of these transformative tools, but the tools have also opened the door to new collaborative arrangements. Their affordability and flexibility have created an opportunity for truly human-centered computing to emerge, allowing individuals and communities to choose devices as consumer products and to make use of data and information services in the cloud.

The resulting information systems no longer need to reflect only organizational goals. Driven by human needs, citizen science, FOSS development projects, online patient support spaces, and maker communities have all emerged as new organizational forms. For the first time, we can economically apply massive computing power to the goal of fulfilling human needs, and we can do so in a decentralized way, fashioning local solutions tailored to fit particular circumstances. Digital Humanism is finally possible. This societal-level infrastructure can be leveraged in pursuit of human goals for health, work, learning, entertainment, and personal relationships. The value of the resulting systems can be evaluated relative to their impact on people and communities (see Fig. 6).

To move toward human-centered computing and digital humanism, one major shift we have to make in how we think about technology focuses on the development process itself. Human-centered technology requires that we consider people as inherently social beings embedded within social structures such as families and communities. Embracing this social view and privileging the needs of people requires that they be included as equal partners driving initial system design, development, and management. Although this will add more complexity to the development process, it is only through truly human-centered participatory computing that we can design and develop ethical technology to create a better life and a better society. This shift will require a radical transformation in how we think about, develop, and use technologies. Below, we outline these shifts.

4.1 Thinking About Technology and People

Kantian ethicists have long recognized that it is fine to use tools to reach our own ends. However, it is unethical to use humans to reach our ends because doing so fails to respect inherent human worth and dignity. Putting the needs of people first requires that we clearly, continually, and unambiguously maintain a distinction between two categories: technology and humans. Technology is a thing that can be used as a tool. Humans deserve agency, autonomy, and respect and so should not be used as tools. But this distinction is muddied when emerging technologies are described using human terms, a form of anthropomorphism. This leads to category errors in which human characteristics are ascribed to technologies that cannot possibly have them (Blackburn, 1994). These technologies then start to be thought about and treated as if they were human, which accords them status and privileges that should be reserved for humans.

For example, as shown in Table 1, using the term “artificial intelligence/machine learning (AI/ML)” implies that this technology has intelligence and can learn, but intelligence and learning are human and not technological capabilities. The AI/ML technological capability is that of pattern matching against historical data. Similarly, the term “smart cities” implies that cities can be smart in the same way that humans can be smart. What are commonly called smart cities are no more than urban cyber-physical systems. “Autonomous vehicles” do not have the capacity to be truly autonomous because they cannot make informed moral decisions or be self-governing. Autonomous vehicles are vehicular robots. The term “5th Generation” implies that 5G can procreate and produce offspring. This is simply not something that a networking standard can do. AI cannot truly be a partner or team member because partnership is a human quality that implies shared goals and choosing to help one another meet these goals.

Table 1 Anthropomorphized terms for technologies, their implied human capabilities, and their actual technological capabilities

Term | Implied human capability | Actual technological capability
Artificial intelligence/machine learning (AI/ML) | Intelligence and learning | Pattern matching against historical data
Smart cities | Human-like smartness | Urban cyber-physical systems
Autonomous vehicles | Autonomy and informed moral decision-making | Vehicular robots
5th Generation (5G) | Procreation and producing offspring | A networking standard
AI partner/team member | Partnership and shared goals | A tool that can be used toward human goals
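A brief, hedged illustration of the first row of Table 1: what is marketed as machine “learning” is, technically, fitting parameters to historical data and then matching new cases against the fitted pattern. The sketch below is ours, not the chapter’s; it assumes the scikit-learn library, and the study-hours data are entirely made up.

```python
# Illustrative sketch: "learning" as pattern matching against historical data.
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data: hours studied vs. pass (1) / fail (0).
X_history = [[1], [2], [3], [8], [9], [10]]
y_history = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(X_history, y_history)  # "learning" = estimating parameters from history

# "Intelligence" = matching a new case against the fitted pattern.
print(model.predict([[7]]))      # e.g., array([1])
```

Nothing in this process involves understanding what studying or passing means; the system only reproduces regularities in the examples it was given.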

Maintaining the distinction between tools and people is important in ethics and responsible design. It is only by being clear which category something belongs in that we can determine which are the tools that we can use to help us meet our goals and who are the people whose goals we should be meeting.

4.2 Development Objectives

Human-centered computing will also require a transformation in how we develop new technologies. This starts with a consideration of which human goals should be prioritized. Human-centered development focuses on meeting people’s goals within a framework that considers which overall objectives are ethical and responsible. One such framework that is gaining popularity is the Doughnut Economic Model. Developed by Kate Raworth (2017), this model rejects economic growth as the overarching goal and argues for balance. It strives to develop a generative and distributive economy that maintains a safe and just space for humanity supported by a strong social foundation and within an ecological ceiling (Fig. 7).

Fig. 7 Doughnut economic model (adapted from Raworth, 2017): the inner ring marks the social foundation, the space between the rings is the safe and just space for humanity, and the outer ring marks the ecological ceiling

Similarly, human-centered technologies should contribute to safety, justice, and a strong societal foundation.

4.3 Participants

Moving to truly human-centered computing also requires a radical change in the role that developers play in the process. Organizational goals can be met with relatively small and homogeneous development teams made up of and led by organizational members. Human-centered computing requires the inclusion of diverse stakeholders in the process to make sure that the needs and constraints of all the people who will be affected by the changes are well understood (see the chapter by Bennaceur et al.). Stakeholders may be individuals, advocacy groups, organizations, and communities. It is especially important to include underrepresented groups, including those voices that have been historically marginalized, invisible, and silenced. Many communities use the phrase “nothing about us without us” (see, e.g., Charlton, 1998). This can be a helpful guideline to determine whether we have the right people represented in the development effort.

Stakeholder representation in the conversation is important, but truly human-centered computing also requires authentic participatory co-design (Sanders & Stappers, 2008). Stakeholders should be at least equal partners, if not the leaders, in the effort. Stakeholders bring a deep understanding of the domain and the problems to be solved. This knowledge is a form of expertise that is complementary to, not inferior to, developers’ technical knowledge. As such, stakeholders should be compensated for the time and effort that they devote to the project. True inclusion and authentic co-design imbue user and community leadership of the development effort with real decision-making power (Pine et al., 2020).

Developers must also make a long-term commitment to the community and to working with them to meet the human needs that they identify. This is quite different from the common extractive relationship in which developers extract knowledge from community members, use it for their own or for corporate benefit, and fail to return value to the community (Harrington et al., 2019).

4.4 Ethical and Responsible Computing and the Sociotechnical System

Human-adjacent and human-aware computing assumed that the system of interest was a closed technical system developed to meet organizational needs. In contrast, human-centered computing requires that we understand, intervene in, and manage the broader multicomponent sociotechnical system (see Fig. 6). Developing and implementing new technologies is one possible intervention into a complex web of interacting components that, acting together, can return value to communities while meeting individual needs (Fig. 8).

Fig. 8 Some components of a sociotechnical system: people, culture, technology, infrastructure, legal, and physical

The robust societal infrastructure for computing provides readily available networking, data storage, and software components. With just a smartphone and an app, individuals and communities can often leverage the existing infrastructure to meet their information technology needs.
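As a hedged sketch of what leveraging that infrastructure can look like, the following hypothetical example stores a community survey response in shared cloud object storage (AWS S3 via the boto3 library) without running any servers. The bucket and key names are invented, and valid AWS credentials are assumed.

```python
# Illustrative sketch: a small community project using shared "as a service"
# infrastructure (AWS S3) instead of owning and operating its own systems.
import json
import boto3

s3 = boto3.client("s3")  # assumes AWS credentials are already configured

survey_response = {"respondent": 17, "needs": ["broadband", "device training"]}

s3.put_object(
    Bucket="community-needs-survey",      # hypothetical bucket name
    Key="responses/respondent-17.json",   # hypothetical key
    Body=json.dumps(survey_response).encode("utf-8"),
)
```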

But working with diverse stakeholders can be challenging. People are not just individuals. They are also embedded within friendship and family groups and may be members of formal organizations (e.g., employers, churches, schools, libraries). Meeting a community’s needs will require leveraging the strengths of the individuals, families, and communities themselves. Integrating innovations into sociotechnical systems will also be easier when they leverage the individual’s and community’s strengths, abilities, and assets rather than focusing on deficiencies. For example, public librarians and educators have expertise that can be leveraged in developing and delivering training in new technologies.

To be effective, technical solutions must also be consistent with, or even enhance, underlying cultural values. Local culture can be very powerful but hard to recognize. It includes visible artifacts (e.g., language, stories, ceremonies, buildings), underlying values, and unspoken assumptions that may only be seen when violated. Many developers are not trained to elicit information about culture or to guide people in identifying their needs and assessing potential solutions. Often, it is only by immersing ourselves in a community that we can start to understand it. Fortunately, developers can also partner with experts who are trained in user needs assessment and community engagement. These experts can be found in diverse domains such as urban planning, social work, ethnography, human computer interaction, public librarianship, anthropology, sociology, and others.

We also need to consider legal constraints, physical affordances, economic dimensions, and more. These complexities will affect the needs that are identified and the solutions that can be developed. Only by considering all of these components and how they interact to maintain the current system can we responsibly intervene. A new technology designed and introduced in this way is more likely to be ethical and responsible and to maintain a safe and just space for humanity supported by a strong social foundation. However, there may still be unforeseen and unintended consequences.

4.5 From Technology Development to Sociotechnical System Orchestration

There are no guarantees of success in meeting human needs, partly because people are embedded within systems that are so complex that their behavior is itself hard to predict. Human-centered computing involves intervening in a multicomponent system that includes people, institutions, technologies, the physical environment, and more. The system components are all interconnected, so we cannot change just one element without affecting the other parts of the system. In addition, useful interventions may be bundles of technical, social, economic, legal, and other elements, so we may need to change multiple components at once. It is important to remember that the components themselves are constantly in flux, so the system changes over time even if we do nothing.

No system will be perfect, but steps should be taken early on to increase positive effects and reduce potential harms and undesirable behaviors. To the extent possible, we need to identify dependencies and complementary assets (Winter & Taylor, 1996). Informative and easy-to-use interfaces should be built into every new technology because they play such an important role in enhancing human agency, autonomy, and respect. We also need to identify what else is needed to make the system a success and ensure that those elements are available. For technical elements, complementary assets may include training, local experts, Internet access, and even reliable electricity. For example, smartphone apps require connectivity and affordable data plans. Teenagers and young adults often act as local experts. Streaming music services also require intellectual property agreements with music publishers and artists. Electric vehicles require an installed network of charging stations. Ensuring access to complementary assets improves the likelihood that the system will meet human needs.
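One lightweight way to act on this advice, sketched here purely as an illustration under our own assumptions, is to record the complementary assets an intervention depends on and check their availability before proceeding. The asset list echoes the examples above; the availability values are hypothetical.

```python
# Illustrative sketch: making dependencies and complementary assets explicit.
complementary_assets = {
    "connectivity": True,
    "affordable data plans": False,  # hypothetical gap in this community
    "local experts": True,
    "reliable electricity": True,
}

missing = [asset for asset, available in complementary_assets.items()
           if not available]
if missing:
    print("Intervention at risk; secure these assets first:", ", ".join(missing))
```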

Predicting the impacts of a new technology is also difficult because it is hard to foresee all the ways that it may be used. Creative thinking is enormously important in co-design, but many developers overemphasize potential positive outcomes and fail to explore possible negative effects. In addition, any system that includes people is subject to the vagaries of human behavior. People are very creative in the ways in which they navigate their worlds and may not share the technology developer’s goals. With roughly 8 billion people alive today, some will inevitably come up with ways of using a technology that fall outside our expectations.

Even without intervention, sociotechnical systems are constantly changing. Components change and co-adapt over time. Nothing is static. It is important to build resilience into the system to allow rapid recovery from unexpected changes.

This complexity, unpredictability, and dynamism suggest that we need to consider how developers and their co-design partners will monitor and manage the larger system. It is impossible to truly control the system, but interventions can allow a form of orchestration to enhance positive effects and minimize negative ones. Continual intervention will help the system continuously meet the goals of safety, justice, and a strong community (Winter & Butler, 2021). Monitoring and managing means developing feedback channels and mechanisms throughout the life of the system. Initially, we can use them to understand implementation progress and assess the initial impacts of the system intervention. Later, we can use them to gather information for ongoing evaluation and management of intended and unintended effects. Building in communication channels will enable the community to sense, evaluate, and make corrections.
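As one illustration, and only a sketch under our own assumptions rather than a prescribed design, a feedback channel can be as simple as a shared log of observations, tagged as intended or unintended effects, that co-design partners review on a regular cadence. All names and the example report below are hypothetical.

```python
# Illustrative sketch: a shared feedback channel for sensing, evaluating,
# and correcting a sociotechnical system over its life.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Feedback:
    author: str
    message: str
    effect: str  # "intended" or "unintended"
    when: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

channel: list[Feedback] = []

def submit(author: str, message: str, effect: str) -> None:
    """Community members and stewards file observations as they arise."""
    channel.append(Feedback(author, message, effect))

def review(effect: str) -> list[Feedback]:
    """Surface reports of one kind for the co-design partners to act on."""
    return [f for f in channel if f.effect == effect]

submit("librarian", "Patrons without data plans cannot reach the portal", "unintended")
for report in review("unintended"):
    print(report.when.date(), report.author, "-", report.message)
```

In practice, the channel itself should be inclusive: paper, phone, and in-person reports matter as much as digital ones, so that the feedback mechanism does not silence the very voices it is meant to amplify.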

Access to the existing “as a service” and “in the cloud” infrastructure has further expanded our conceptualization of a computer to encompass a true information system. This more complex view includes people not just as users of tools developed by organizations but as active co-creators of multi-component sociotechnical systems. This revolution can enable digital humanism if we shift our focus from developing new technologies to orchestrating complex sociotechnical systems.

5 Conclusions

Sometimes, it is hard to tell when things have really changed and when they just look different. In the case of digital humanism and ethical computing, the shift from mainframe computers to client-server architectures to ubiquitous computing is a fundamental change. These technologies reflect economic changes in the cost of production but are also associated with distinctly different affordances (Gibson, 1979). As chips got smaller, faster, and more affordable, computing power became commoditized and started to resemble a utility like electricity or water. This computing infrastructure can be used by organizations to meet their own needs.

More interestingly, for the first time, this societal computing infrastructure also opens up the possibility of meeting human needs. However, in many ways, our thinking about technology and design is stuck in the old human-adjacent and human-aware models. In a sort of path dependency or imprinting (Marquis & Tilcsik, 2013), we remain focused on computing to meet organizational needs, and the opportunity for meeting human needs remains under-explored. Capitalizing on these new affordances will require new models for digital humanism. Human-centered computing provides helpful tools and techniques for designing, building, and orchestrating sociotechnical systems that meet human needs and strengthen communities.

Success in meeting human needs cannot be guaranteed, but there is a lot that we can do. Authentic co-design with diverse and equal partners including marginalized communities plays a central role in developing ethical, responsible, and human-centered technologies that enhance agency, autonomy, and respect. This radical transformation to human-centered computing will require that we transform how we think and talk about technologies to avoid category errors, move to a model of participatory co-design, and build in mechanisms for feedback and adjustment.

Technology is a tool that should be used to achieve people’s goals. Truly achieving these goals requires us to shift our focus from stand-alone technologies to co-creation and orchestration of the larger sociotechnical system in initial analysis, intervention design, and in continuous monitoring, management, and improvement. This will enable us to strengthen social foundations and meet human needs within an ecological ceiling. This transition will not be easy, but we finally have the tools we need and a pathway to ethical and responsible technologies.

Discussion Questions for Students and Their Teachers

  1. How useful are the concepts human-adjacent, human-aware, and human-centered? Can a technology be in multiple categories at once?

  2. What are the ways in which we treat computers as though they were people? What does this mean for ideas like providing robots as companions for the elderly?

  3. What are some of the barriers that limit the adoption of user-centered computing and participatory design? How can they be overcome?

  4. Are there times when user-centered computing is a bad approach? Why or why not?

Learning Resources for Students

  1. Campbell-Kelly, M., Aspray, W., Yost, J., Tinn, H., and Con Diaz, G. (2023). Computer: a history of the information machine (4th ed.). Routledge.

    This fourth edition provides an overview of the history of the computing industry and the role of business and government in its early days. It is written in an engaging style and includes a good mix of technical details and historic information.

  2. Cooper, N., Horne, T., Hayes, G.R., Heldreth, C., Lahav, M., Holbrook, J., and Wilcox, L. (2022). A systematic review and thematic analysis of community-collaborative approaches to computing research. In CHI Conference on Human Factors in Computing Systems (CHI ’22), April 29–May 5, 2022, New Orleans, LA. https://doi.org/10.1145/3491102.3517716

    This conference paper provides an overview of recent participatory HCI research with communities. It identifies significant issues that often arise when doing this kind of work due to the power and position of the researchers. They provide suggestions for moving toward practices that center communities.

  3. Dominelli, L. (2017). Anti-oppressive social work theory and practice. Bloomsbury Publishing.

    This book provides a clear discussion of oppression and disempowerment. It provides compelling examples and provides guidance for working with individuals, groups, and organizations for greater empowerment.

  4. Naughton, J. (2000). A brief history of the future: the origins of the Internet. Orion Books, London.

    This book tells the story of the development of the Internet including both technical and cultural details. It provides an overview of the people who were involved, the problems they were trying to solve, and how they came together to create the underpinnings of modern digital life.

  5. Pine, K.H., Hinrichs, M.M., Wang, J., Lewis, D., and Johnston, E. (2020). For impactful community engagement: check your role. Communications of the ACM, 63(7), 26–28.

    This short and very accessible editorial outlines common problems in civic and community-engaged research. It outlines four new practices that computing professionals should add to their toolbox when doing community-centered responsible design.

  6. Rogers, Y., Sharp, H., and Preece, J. (2023). Interaction Design: Beyond human-computer interaction (6th ed.). John Wiley and Sons, Hoboken, NJ.

    The first chapter of this book provides a helpful overview of the various terms used in the field and how they relate to the role of people in the design process. Later chapters provide a great overview of the field with compelling examples and a good mix of theory and practice, including recent developments in “humans-in-the-loop.”