1 Introduction: “Societal sustainability”

Innovation in products, processes, marketing, and management has been central to the success of firms and nations in both manufacturing and service industries. The technology–market linkage has fueled a growth trajectory that companies and countries hope to ride to ever-rising levels of prosperity. There are, to be sure, challenges to be overcome, such as ecological and social sustainability. In this paper, we address the dangers that the constant drive to innovate (and disrupt) poses to another dimension of sustainability: that of our institutions, political systems, and civil society itself, which we term societal sustainability. The concern addressed here is the very survival of societies in which the rights of individuals, personal and collective freedoms, an independent judiciary and media, and democracy, despite its messiness, are highly valued (Levitsky and Ziblatt 2018; The Economist 2018). Robotics and automation are replacing human labor and decision-making in a range of industries; search engines, monetized through advertising, have access to, and track, our interests and preferences; social media, in connecting us to one another, often know more about us than we ourselves do, enabling them to profit in ways that may not coincide with our well-being; online retailers have not only acquired the ability to track and predict our buying choices but can also squeeze vendors with their outsize bargaining power; and, in general, digital technologies have changed both the way we think and our sense of self (Rosen 2007; Prado 2017).

This paper acknowledges the benefits offered, and addresses the harm wrought, by the high tech giants (‘big tech’) in Information and Communication Technologies (ICTs) and related industries. The search for rapidly growing revenues, shareholder returns, and stock prices drives firms to accelerate innovation without fully investigating the entire gamut of their impacts. Additionally, greater wealth accrues to the leaders of big tech, and inequalities within firms and societies widen, creating societal tensions and political ferment (Brynjolfsson and McAfee 2014; Bridle 2018). Exploring the ethical nature of the various challenges, and suggesting possible ways in which the threat posed to individuals and institutions might be ameliorated, could be of interest to businesses and educational institutions, particularly those where the virtues of big data are extolled while the ethical challenges that arise are often ignored or glossed over. In light of the increased scrutiny of big tech, the need to look beyond immediate (mainly financial) benefits and to study the long-term impacts on individuals and societies is also of great relevance now, both for businesses and for policy makers.

2 Ethical criteria

The product choices that big tech companies make obviously have great significance for individuals and for society at large, creating ethical issues that need to be confronted. Product-related decisions may have a direct and immediate influence (say, on users in terms of privacy and security) or an indirect and future one (as when user data are used to cull information affecting insurance premiums, creditworthiness, and so on). Big tech firms need to consider the impacts on users, particularly the costs to society itself, regardless of the profit potential. The need to adopt this broad perspective, encompassing all stakeholders, was acknowledged recently when over 180 CEOs of major firms signed a declaration to this effect (Gelles and Yaffe-Bellamy 2019). An ethical screen could help evaluate the extent to which individual and collective stakeholder interests are satisfied (Laudon 1995; Varlan and Tomozei 2018). Among the approaches which could help in making ethical decisions, as distilled by Capsim (2018) and the Markkula Center (2019), are those based on rights, justice, the common good, virtue, and utilitarianism. We now briefly develop the principles underlying each of these perspectives with a view to suitably framing the ethical challenges facing big tech.

According to the rights approach, people ought to be treated as ends in themselves, not as means or instruments to achieve desired results. The dignity of every individual and the freedom to make choices are to be carefully safeguarded. Ethics as justice calls for treating everyone fairly by applying a common, unbiased standard. The common good approach takes the community as the unit of analysis and calls for acting in the best interests of society, especially of those less able to fend for themselves. The virtue perspective asserts that certain universal values exist, such as honesty, compassion, not harming others, integrity, and so on; all actions are evaluated through the prism of these values. Utilitarianism, more specifically act utilitarianism (De Lazari-Radek and Singer 2017), focuses on the consequences of decisions, maximizing the good done while keeping the harm inflicted to a minimum. This standard recognizes that few actions have purely beneficial outcomes, but holds that the benefits should outweigh any ill effects that arise. In the corporate context, utilitarianism offers a convenient way to evaluate the ethicality of decisions, in part due to its greater amenability to measurement (Markkula Center 2019). The assessment of benefits and costs is often conducted with a view to creating greater shareholder value, using a monetary metric. Monetary outcomes are generally easier to measure, which can lead to devaluing other stakeholders’ (customers, suppliers, the local community) interests and to ignoring intangible costs (Kelman 1981; Lowry and Peterson 2011). In this paper, we evaluate the ethics of big tech using the utilitarian approach, specifically from the stance of users, suppliers, and society itself. In making the ethical assessments, we fold in the common good, justice, and rights approaches to complement the utilitarian perspective where applicable.
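
To make the weighing that the utilitarian screen calls for concrete, consider the minimal sketch below. The stakeholder groups, weights, and scores are hypothetical illustrations of how such a screen might be operationalized, not values drawn from any study cited here.

```python
# A minimal sketch of an act-utilitarian screen: score a product decision by
# summing weighted (benefit - harm) terms across stakeholder groups. The
# groups, weights, and scores below are hypothetical illustrations only.

def net_utility(impacts, weights):
    """impacts: stakeholder -> (benefit, harm) on a common scale.
    weights: lets the analyst resist the pull of the easily measured
    (e.g., monetary, shareholder) interests over intangible ones."""
    return sum(weights[s] * (benefit - harm)
               for s, (benefit, harm) in impacts.items())

decision = {                 # hypothetical scores on a 0-10 scale
    "users":        (6, 4),  # convenience gained vs. privacy lost
    "advertisers":  (8, 0),
    "suppliers":    (1, 3),  # margins squeezed by platform power
    "society":      (1, 5),  # misinformation, polarization
    "shareholders": (9, 0),
}
weights = {s: 1.0 for s in decision}     # equal weighting of stakeholders

print(net_utility(decision, weights))    # > 0 passes the utilitarian screen;
# violations of rights or justice would still fail the complementary screens.
```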

Table 1 lays out some of the consequences, that is, benefits and costs (monetary and otherwise), primarily from the viewpoint of users but, where applicable, also from the perspective of other stakeholders. We discuss some of the issues that arise from this analysis and attempt to devise strategies by which the social, political, ethical, and other challenges may be addressed. Some actions could directly affect users with little time lag (e.g., privacy violations), while the consequences of other decisions might be indirect and delayed (e.g., addictive viewing by children, which could result in less social interaction). The table presents the benefits of developments in the field of big tech, divided into two categories: Direct/Present (DP) versus Indirect/Future (IF). For each type of benefit, we identify costs or negative outcomes, which are classified similarly.

Table 1 Big tech benefits and costs: long term and short term

3 Consequences

3.1 Direct/present benefits

Earlier, we mentioned some of the benefits, such as increased, accelerated, multimedia access to information. Other DP advantages are instant access to entertainment anywhere through a host of music and video streaming sites, games involving one or more players, the taking of photographs which can be sent to friends and family at any time, and so on. Social media not only connect individuals (and groups) but can also create an extended network of like-minded people who engage with participants in their circle in a variety of ways. One can track down friends and acquaintances from the past, join common-interest groups, buy advertised products, share thoughts and opinions, follow and post replies to assertions made by others, alert one’s friends to events and recent developments, catch up with the latest news, and so on. In addition, our devices keep us connected to a much larger and, if we wish, ever-expanding virtual range of friends. We have a variety of activities to occupy us, mitigating feelings of boredom and of being alone, even if the connection is expressed only in the form of a “like” (Prado 2017). Sharing thoughts and feelings without face-to-face contact could also reduce stress, especially when others empathize and share their own experiences. In a sense, eliciting appreciative responses to details of our personal lives could also make us feel better about ourselves, enhancing our sense of self-esteem (Kingwell 2017). In today’s “instant” society, not only can we find answers and communicate at the speed of thought, so to speak, but we can also buy anything online that catches our fancy. In addition, the websites themselves suggest items which might appeal to us, based on our previous, and inferred, preferences (Radovan 2013).

In the case of services such as search, email, social media, navigation, and other apps, the dollar price is perceived as zero, creating an obvious consumer surplus. Using the “free” service becomes a ‘no-brainer’ for the user. Scholars have pointed out, however, that “free” in this context should be taken to mean liberty or freedom to use rather than being related to price (Agar 2019). The mammoth profits earned by the likes of Facebook, Google, Amazon, Microsoft, and other big tech firms attest to how successfully these firms serve the needs of their clients, the advertisers (a “client surplus”). To deliver more value to its clients, big tech needs to gather more and more data about its users, enabling it to track and even predict user behavior. The rising efficacy with which ads, news, and messages are delivered to users leads to an upward spiral of both consumer surplus and client surplus, while proving ever more lucrative to the firms fueling the ‘data revolution’ (Greene 2018).
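
This upward spiral can be stated in standard welfare terms. The notation below is ours and purely illustrative; it is not drawn from Greene (2018) or the other sources cited.

```latex
% Illustrative notation (ours), not drawn from the sources cited above.
% A user who values a service at v_u and pays a money price p = 0 perceives
% a consumer surplus of
\[
  CS_{\mathrm{user}} \;=\; v_u - p \;=\; v_u \qquad (p = 0),
\]
% while an advertiser paying p_a for advertising whose return r rises with
% targeting accuracy \theta enjoys a client surplus of
\[
  CS_{\mathrm{client}} \;=\; r(\theta) - p_a, \qquad r'(\theta) > 0.
\]
% More user data raises \theta, hence r(\theta) and what advertisers will
% pay; both surpluses and platform profit rise together. The user-side
% surplus is only "perceived": the data surrendered is a non-monetary price
% that the first expression omits.
```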

3.2 Direct/present costs

The problems arising from the availability of user data are well known. Just as social media can connect one to friends and family, they can also use the data to micro-target individuals for commercial and political purposes. Third parties may gain access to an individual’s preferences as well as aggregate these preferences to generalize to an entire segment of users (e.g., senior citizens, women under thirty-five, recently naturalized citizens) (Stahl et al. 2017; Sumpter 2018). The potential for foreign intervention in elections remains high (Rutenberg 2019), even if firms try to mitigate the threat (Satariano 2019). Search engines’ algorithms typically store results of previous searches, which can not only be a convenience but can also be viewed as an intrusion into one’s right to privacy. Users have apparently made a sort of Faustian bargain in which they share their data and identities in order to gain access to providers’ services. As Finnemore (2018) notes, users are the crop harvested by big tech. The reluctance on the part of these mammoth corporations to be transparent or to openly share what actions they undertake behind the scenes is what brings the issue of ethics into the picture (Jasanoff 2016). Of late, some of the big tech firms have issued statements assuring users that they are committed to maintaining the privacy of user data. However, critics argue that when it comes to a question of user privacy versus advertising revenues, firms are likely to dilute the privacy criterion (Mack 2014; Pichai 2019; Tufekci 2019; Wakabayashi and Chen 2019).

Some writers have argued that, as in a production/service economy, economies of scale and scope are critical in the digital world as well. The only difference is that, rather than applying to the volume and variety of output, the economies now apply to the extraction, analysis, application, and monitoring of data with a view to modifying, predicting, and even controlling behavior. If DA is the amount of data needed to provide users with enough value to retain their attention and loyalty, any additional data extracted (say, DB) helps increase producer surplus now and into the future. The “behavioral surplus” DB serves to align user behavior with the needs of big tech and its clients (Zuboff 2019a). Innovation in the digital economy contributes to rising (perceived) consumer surplus, greater client surplus (effectiveness of advertisements), and, above all, scale economies in data extraction and use. Leveraging data among a firm’s various products (present and potential) generates scope economies [e.g., using the same data to deliver targeted ads and to determine creditworthiness for a loan (Zuboff 2019b)].
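
Zuboff’s distinction can be put in simple notation; the formalization below is ours, offered only to fix ideas, with subscripts mirroring DA and DB in the text.

```latex
% A minimal formalization of "behavioral surplus" (notation ours; the
% subscripts mirror DA and DB in the text).
\[
  D \;=\; D_A + D_B, \qquad D_B \;=\; D - D_A \;\ge\; 0,
\]
% where D is the total data extracted, D_A the portion needed to deliver
% enough value to retain users, and D_B the surplus fed into prediction
% products. Scale economies operate on D; scope economies arise when the
% same D_B serves several products (targeted ads, creditworthiness, etc.).
```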

3.2.1 Impact on cognition and attention; children as targets

While much of the concern over the accelerating use and influence of the digital economy has been directed at issues such as data privacy and security (and rightly so, as noted earlier), there are other, potentially harmful, consequences which merit a closer look. Take, for instance, the extent to which many people have become dependent on digital sources for information, social interaction, and lifestyle choices (where to live, whom to date, how to care for an infant). Such an abiding trust in, and reliance on, digital technology verges on addiction and could significantly alter our cognitive processes, our social skills, and even our emotional wellbeing (Kingwell 2017; McFarlane 2017). Sagan (1974), Harari (2011, 2018), and other scholars have theorized that homo sapiens, while in the hunter-gatherer stage, developed differently from other species by cultivating the ability to observe, reason, and remember, and to share information about food sources, weather patterns, geographic features, and so on. This “cognitive revolution”, it has been argued, was the point at which humankind diverged developmentally from all other species. As Carr (2008) observes, we may be surrendering one of the defining characteristics that make us human by letting technology (our creations and tools) assume an increasing part of the process of cognition. There are early indications that attention spans are shortening, that information is increasingly being sought for vicarious ends (e.g., following celebrity lifestyles, pornography, etc.), and that we are no longer treating technology as an extension of ourselves but are becoming dependent on our creations (Prensky 2001; Kingwell 2017). Milner (2016) provides several instances of the latter in regard to the use of GPS, citing people who blindly followed the instructions of their GPS app even when they knew it was leading them astray, sometimes with fatal consequences. He argues that we are undoing the neural connections that enabled our ancestors to reason along spatial and temporal dimensions. In a sense, human society is fast developing into a “technopoly” (Postman 1993)—a society whose culture is shaped by technology rather than by values. One technological development leads to another in a seemingly inexorable progression of new services to which most of us are drawn, since they offer ever more convenience and gratification at little or no cost (Walsh 2018).

An even more serious threat to society is that children are being targeted, apparently as part of a plan to create a loyal, long-term user base for the websites concerned. While some children’s programming may indeed inform and educate, the purpose of the underlying algorithms is to hold the user for as long a stretch of time as possible (Lafrance 2017). Apple, after soliciting apps to limit time spent on a device, has now taken over the task itself, no doubt realizing how lucrative and data-rich it is. The company is also under fire for allegedly favoring its own apps over those of outside developers (de Looper 2019; Nicas 2019). YouTube operates a highly lucrative children’s site but stands accused of using techniques that foster addictive viewing and even of providing inadequate safeguards against switching over to adult programming (Bridle 2018). Fears have been expressed about the effect that long-term use of the internet, social media, digital assistants, and other accoutrements of digital technology might have on children’s and adolescents’ behavior; the observed effects include cyberbullying, depression, and sleep deprivation (Child Mind Institute 2017).

3.2.2 Social influence

The social impacts of ICTs are almost as deep and worrisome as the effects on cognition (Ross 2011). It is difficult to deny that social media (Facebook, Twitter, YouTube, and others) have, besides connecting us with others and enabling us to express our opinions, created a high degree of self-absorption and a craving for recognition (Rosen 2007). The resulting focus on oneself borders on narcissism and has affected the ability of many people to interact in a healthy way with others at school, work, and home, as well as in religious and civic organizations. In fact, the amount of time spent online not only means we have less time to spend with others but could also be changing the ways in which our brains are wired. In addition, linguistic abilities, the powers of reflection and introspection, and attentiveness to the task at hand may be adversely affected. Other, more obvious, problems include the ease with which hate speech and harmful content can be spread, and the extent to which bullying and divisiveness have entered common discourse (Ives 2019). Sites such as Facebook and YouTube are trying to monitor and control the posting of harmful content, but it may be an uphill task given the need to track an ever-increasing number of participants who follow recommendations, pay for premium services, and buy sponsored products (Alba et al. 2019). In addition, the fact that “platforms” cannot, by law, be held responsible for content uploaded to their sites works in big tech’s favor (Wakabayashi and Chen 2019). While pro-democracy movements such as the Arab Spring were partly fueled by social media, the latter have also enabled the spread of misinformation, rumors, and dangerous ideas. In Myanmar, Sri Lanka, New Zealand, and other countries, Facebook, Twitter, and Google were seen as spreading extremist and violent content (Wilson 2019; Mozur 2018). Big tech is also routinely used by governments to disseminate information aimed at stifling dissent (Kingsley 2019; Rezaian 2019). It is, indeed, chilling to read that India’s democracy is being subverted using modern technology, in part by shutting down access to the internet or cell phone communication more frequently than any other nation does (Human Rights Watch 2016). Clearly, while many of big tech’s innovations have been beneficial, they have also been subject to ‘weaponization’ (Swisher 2019) by states, corporations, and other actors.

Social services, such as the administration of the food stamp program, are already being outsourced to big tech with the express purpose of making them more efficient, which typically results in services being reduced or denied (Eubanks 2019). Policing and crime reduction through the use of facial recognition software are also well advanced and, despite their flaws, biases, and obvious threats to privacy, are being extensively implemented—and not just in authoritarian societies (Metz and Singer 2019; Newman 2019). Clearly, the rights of individuals are being jeopardized, often and ironically, under the guise of protecting free speech. It is also obvious that the common good is being sacrificed on the altar of efficiency, profits, and shareholder wealth (Chakhoyan 2018).

3.2.3 Technology, power, and government

As a result of their ability to provide all sorts of services, often at little or no obvious cost to customers, tech firms are viewed by many as being uniquely capable of accomplishing any task they undertake. Though few leaders of big tech would argue that governments are superfluous, their actions are directed toward exercising greater influence over governments. For instance, lobbying and efforts to influence legislation have expanded over the past 10 years, rivaling those of more “traditional” industries such as energy and finance (Dellinger 2019). The fact that the U.S. federal government has invested little in the development of AI further increases its dependence on big tech (Webb 2019).

With an increasing part of the population in many countries getting its news from search engines and social media, the role of print, television, and other journalism has declined precipitously. This “squeezing out” of journalism has also meant that the ethics of the field are being loosened. No longer must sources be vetted, corroboration sought, or opinion separated from fact, which can harm individuals’ rights and imperil the common good. Indeed, the Fifth Estate, as Greene (2018) terms big tech, is undermining civil society by displacing traditional media. The influence of, and potential for, misinformation and the sowing of chaos in politics are likely to keep rising (Rutenberg 2019). Facebook, for instance, appears to have thrown in the towel where the curating of user-uploaded news is concerned (Boyle 2019).

It is clear that big tech firms view themselves collectively as laying legitimate claim to being good for society by efficiently serving a variety of users’ needs in an expanding range of industries. At this point, it appears that big tech is not only too big to fail but also too big to regulate. The heads of the largest tech companies have been referred to as tech oligarchs (Greene 2018), whose ambition is not merely to disrupt one industry after another but to change the world to accord with their mental models. It is paradoxical that technologies such as the internet, the personal computer, and the smartphone, which ostensibly enable greater decentralization, have now resulted in a higher concentration of power, especially in the hands of mammoth firms like Google, Facebook, Amazon, Microsoft, and Apple, or of authoritarian governments. This centralization of power means that we may be players in a new kind of economy, one that some authors have termed surveillance capitalism (Zuboff 2019a; Webb 2019) or a surveillance state (Pinker 2019). The sense of omnipotence that pervades the digital giants is such that they are now engaged in grandiose pursuits such as settling on other planets (where, presumably, they would make up the rules, not live by someone else’s), extending life indefinitely and perhaps achieving immortality, eliminating poverty, and so on. In the tech oligarch’s world, technology can solve any problem (Greene 2018).

Technology is, by its very nature, a political and cultural phenomenon (Winner 1986; Jasanoff 2016). For instance, certain parkways on Long Island were built with overhead clearances too low for buses, effectively keeping low-income individuals out of the areas the roads served. The proliferation of multi-storied apartments and offices in cities creates a divide between urban residents and nature; the diffusion of the automobile resulted in migration to the suburbs; television and the computer have tended to curb social interaction; and so on. With the passage of time and the increased emphasis on predictive data analytics combined with machine learning, real power has rapidly accrued to big tech (Nicas et al. 2019), while apparent power still resides with the individual.

While on the subject of individual rights, one cannot ignore the issue of employment. In the tech industries themselves, firms try to minimize the head count of permanent employees to the extent feasible. About half of Google’s work force consists of contractors, vendors, and temps (“CVTs”); Uber treats its drivers as contract workers; and Facebook and YouTube have hired thousands of temps to review and screen uploads, taking us back almost to pre-union times, when employees at lower income levels had little to no rights (Sheng 2018; Wong 2019). In their aggressive, male-dominated working environments, high tech firms have tended to devalue women, particularly at the higher echelons, much in the way that financial services firms have (Rangarajan 2018; Business Insider 2019). The relative lack of diversity stems from educational systems and corporate practices which give rise to ‘tribes’ (Webb 2019) possessing a homogeneity of cognition and values.

3.3 Future benefits

We have thus far reviewed some of the more immediate positive and negative outcomes (left half of Table 1) from the deployment and widespread diffusion of digital technologies. Our attention now turns to the more enduring benefits and costs associated with high tech, some of which have already begun to manifest themselves (shown on the upper right half of Table 1). Brynjolfsson and McAfee (2014) observe that ICTs are so-called general purpose technologies (GPTs), which serve as a base or platform for the development or advancement of other technologies. Nearly every product or service we use has some form of ICT embedded in it. With the advent of the internet of things (IoT), such smart devices have become ubiquitous (Husain 2017; The Economist 2019). Cars, homes and home appliances, stores, airplanes, luggage, electric grids, and much more have been fitted with sensors to increase our ability to control them to our ends, whether by touch, voice, gesture, or, potentially, even the blink of an eye. The use of ICTs in industry has helped increase efficiencies in production and service firms alike (Kvochko 2013), making goods (customized, if needed) available at competitive prices and enhancing convenience at work and in leisure activities (Deb 2014). The automation of routine and low/middle-skill tasks is another dimension of AI likely to help mitigate the boredom and tedium associated with many manufacturing and service jobs.

Perhaps the most prominent development in the field is that of artificial intelligence (AI). Though the field of AI has developed in spurts, tending to ebb and flow over time (Lee 2018), it seems to have gathered a head of steam recently and is likely to play a prominent part in our lives (Lee 2018; Webb 2019). Based on increasingly deep neural networks which introduce layers of data analysis capabilities, AI could be of the supervised or reinforcement varieties (Walsh 2018; Anderson 2019; Ramakrishnan 2019). In the former, the starting point is an algorithm to sort data and guide the analysis, with refinements being made in the process by comparing predictions against actual outcomes. For instance, in predicting stock prices or the occurrence of a medical condition, if, based on a data set of tens of thousands of observations, predictions vary from reality, corrections are made autonomously and applied to the next batch of data, and so on, until a close enough match between predictions and reality is achieved. In the case of reinforcement learning, no initial algorithm guides the analytic process; instead, the machine learns from reams of past data and outcomes to predict outcomes from present data sets, improving its predictive capability in an evolutionary manner (Webb 2019). This kind of artificial narrow intelligence (ANI) is being used in stock selection, facial recognition, neighborhood surveillance to anticipate criminal activity, supporting medical diagnosis, identifying candidates for gene therapy, and so on (Adams 2017). Robots, which have been programmed to carry out specified tasks based on voice commands, are now being equipped with machine learning capabilities so that they can respond, based on reinforcement learning (their own as well as that of others in their cohort), to unanticipated requests. Self-driving cars are another ANI development which may be on the verge of introduction at an experimental scale. Ride-sharing is expected to increase, resulting in fewer cars on the road and sharply reduced carbon emissions. The number of electric car models under development by the major automobile companies is an indicator of the likelihood of autonomous vehicles hitting the market, since electric cars are more amenable to being self-driven than traditional fossil-fuel-powered ones (Gardner 2016; Edwards 2019).
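
To make the supervised variety concrete, here is a minimal sketch of the predict-compare-correct loop just described. The toy data, model, and learning rate are invented for illustration; production systems use deep neural networks over far larger data sets.

```python
# A minimal sketch of supervised learning as described above: predict,
# compare against the actual outcome, correct, and repeat over batches.
# The data-generating rule, model, and learning rate are invented here
# purely for illustration.
import random

random.seed(0)
w, b = 0.0, 0.0     # simple model: y_hat = w * x + b
lr = 0.01           # size of each autonomous correction

def batch(n=64):
    """Hypothetical observations obeying outcome = 3 * signal + 5, plus noise."""
    return [(x, 3 * x + 5 + random.gauss(0, 1))
            for x in (random.uniform(0, 10) for _ in range(n))]

for _ in range(2000):          # keep correcting, batch after batch ...
    for x, y in batch():
        y_hat = w * x + b      # prediction
        err = y_hat - y        # ... compare with the actual outcome ...
        w -= lr * err * x      # ... and adjust the model autonomously
        b -= lr * err

print(round(w, 2), round(b, 2))  # converges near the true values 3 and 5
```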

3.4 Potential negative outcomes

One of the likely immediate outcomes of automation, as with the development of efficiency-enhancing techniques in the past, is the possibility of rising unemployment. While this has been an ongoing feature of technological change historically, one can now envision, as part of the “age of accelerations” (Friedman 2016), that up to 40% of existing low- and medium-skill jobs in manufacturing and service occupations will, by 2030, be performed by machines with minimal human intervention. Not only does this bode ill for present-day types of jobs, as a McKinsey study notes (McKinsey Global Institute 2017), but it could mean that the jobs likely to be created in the coming years (which would normally be performed by humans) will also be taken over by sentient machines. Actions such as government-sponsored job retraining, the fostering of entrepreneurship, and an increase in the supply of highly skilled workers may have minimal impact (Arogyaswamy and Hunter 2019). It has been noted by some experts that new technologies often result in the elimination of certain types of occupations while creating a variety of new ones (White 2011; Gordon 2016). There is considerable evidence, however, that the new jobs created in the robotics/automation/AI revolution may be of the low-skilled, poorly paid kind, creating extensive underemployment. Current and emerging ICTs are poised to transform societies in ways we have not hitherto experienced (Hirst 2014; Stettner 2018). The process might unfold over a few decades, but the writing is on the wall. If the tech oligarchs do not act in a manner respectful of social values and norms (e.g., paying their share of taxes, investing in worker retraining, accepting that they are not supra-societal entities), social and political instability could be the outcome. Civil society and democracy may prove to be unsustainable (Collier 2015; Ma 2018; Lanchester 2019).

While the impact on employment could well be economically damaging, AI built on 5G networks could change other aspects of life for the worse in the United States, China, and other countries to which this technology is transferred. There are serious concerns about the efficacy and reliability of machine learning. One pitfall is that many problems are more complex than can be addressed by statistical analysis (Pearl 2019). Not only is the number of variables extracted from the data likely to be dauntingly large, but it might also be difficult to attach a conceptual meaning to each of the variables identified. Even more relevant, if the list of variables changes over time, the predictive power of machine learning could decline further. An equally valid criticism is that such systems remain opaque in regard to the process by which predictions are made. A total dependence on statistical machine learning would make it impossible to explain why decisions are made and, more seriously, would prevent human actors from exercising rational or moral discretion. Agar (2019) observes that following this path would create a new feudalism, with users of Google and Facebook constituting the peasant farmers in an emerging cyberland. Ethically, the increase in the number and scope of interactive technologies raises the issue of how we should deal with distributed moralities (Floridi 2013). For instance, in attacking military targets with possible civilian casualties or in denying social services to certain individuals, the human–machine interaction could result in machines making decisions involving moral choices, even if unwittingly. Similar moral choices might confront physicians whose diagnoses differ from those of machines and, in fact, could arise in any area where empathy, compassion, and morality are involved. That is, in a machine-learning world, human actors may not be the only ones making ethical decisions.
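
The concern that a changing list of variables erodes predictive power can be illustrated with a contrived sketch: a relationship fitted to past data keeps producing confident predictions after the underlying process shifts, and nothing in the model’s output signals the breakdown. All data and parameters below are invented.

```python
# A contrived sketch of the drift problem noted above: a relationship learned
# from past data stops holding when the process changes, yet the fitted model
# keeps issuing predictions with no warning. All data are invented.
import random

random.seed(1)

def fit_slope(data):
    """Least-squares slope through the origin: sum(x*y) / sum(x*x)."""
    return sum(x * y for x, y in data) / sum(x * x for x, _ in data)

# Past regime: outcome ~ 2 * x. The model learns w close to 2.
past = [(x, 2 * x + random.gauss(0, 0.5))
        for x in (random.uniform(0, 5) for _ in range(500))]
w = fit_slope(past)

# Present regime: the relationship has flipped to outcome ~ -1 * x,
# i.e., the variables driving outcomes have changed over time.
present = [(x, -1 * x + random.gauss(0, 0.5))
           for x in (random.uniform(0, 5) for _ in range(500))]
mse = sum((w * x - y) ** 2 for x, y in present) / len(present)

print(f"learned slope: {w:.2f}; squared error on present data: {mse:.1f}")
# The error balloons, and the model offers no account of why -- the opacity
# critics point to.
```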

Considering the heightened threats to employment, privacy, security, the decentralization of power, and the exercise of individual and group cognitive capabilities, along with the potential sidelining of human rationality and morality, it would appear that, unless action is undertaken in the immediate future, societies worldwide will face transformative, destabilizing change.

4 Options to consider

4.1 “Self-regulation”

As the number of uploads to social media and search websites proliferates, and new products are introduced at an accelerated pace, keeping up with the millions of new items appearing every day could become an insuperable task (Satariano 2019). The use of AI to monitor content may not lead to much improvement either, given how minor variations in input could throw an AI system off track (Condliffe 2019). Employees of firms such as Google, Microsoft, and Amazon have occasionally attempted to influence corporate strategy (Dubal 2019), but not always effectively. Given the big tech culture of rapid and disruptive product introduction regardless of likely consequences, the belief in their own infallibility, and the lack of diverse, possibly dissenting voices from within (which Heisler (2018) views as one of the biggest risks to the continued success of big tech), organic moderation may be a chimera. Self-regulation, therefore, may be effective initially and to a limited extent, but can quickly be overwhelmed by the sheer magnitude of screening required, the demands of shareholders for growth and profits, acquiescent corporate cultures, and sheer hubris.

To deal more effectively with the top challenges confronting big tech, such as balancing the risks and rewards of AI, data governance and security/privacy, and a decline in public trust (Forbes Technical Council 2018), some companies are taking long-overdue steps such as appointing a Chief Ethics Officer (Swisher 2018), adopting a code of ethics (West 2018), and teaching high tech ethics (Singer 2018). The fact that over 30% of the 1400 executives surveyed ranked ethics as one of their top concerns regarding AI (Forbes Insights 2019) speaks to the unease arising from the expanding reach of big tech.

When confronted with evidence of the harm wrought by their innovations, big tech firms often respond that the consequences were ‘unintended’. As Jasanoff (2016) notes, firms typically evaluate the gains primarily to shareholders and secondarily to other stakeholders. The possible negative ramifications of their decisions are generally not investigated, given short shrift, or, worse, viewed as beneficial to the firm even if the outcomes are likely to be harmful to specific constituents or to society as a whole. Some researchers posit that, as technology becomes an increasingly dominant part of culture, the line between intended and unintended consequences tends to blur (Webb 2019). Consequences, even if unintended, are not necessarily unforeseeable. Firms focused on desirable and profitable outcomes alone, failing to anticipate or ignoring undesirable, less profitable ones, are ill-equipped to regulate themselves (Vogelberg 2018).

4.2 Regulation

One of the policy proposals which has gained ground recently, both in the EU and in the US, is the regulation of big tech. For instance, the General Data Protection Regulation, or GDPR (2018), adopted by the European Union (EU), is an attempt to ensure the privacy of user data. Though its interpretation and enforcement could create complex challenges, it might be a precursor of more regulations to follow in the EU as well as in the United States. It needs to be noted, however, that numerous hurdles exist to regulating tech businesses, which are constantly mutating. Foremost among these is the issue of how to regulate products which do not yet exist and whose revenue-earning methods cannot be anticipated. An often-mooted suggestion is that the bigger tech firms should be broken up, as were oil and railroad firms at the turn of the twentieth century and AT&T in the 1980s (Lohr 2019a). While the concept might be appealing from an anti-trust perspective, the difference here is that the tech giants have diversified considerably, and the principle adopted to split them up could create even more problems (Morozov 2019). For instance, separating Google Search from YouTube, and both from Android, could render each of these entities far less effective in meeting users’ needs. Given Google’s adoption of “free” services (advertising being the prime source of revenue), the costs to users could well rise, particularly since the synergies from sharing resources and information across platforms could be diminished. The interconnections among many of these firms’ products make it difficult to decide where one product (e.g., Messenger) ends and another (e.g., WhatsApp) begins (Stahl et al. 2017). Legal battles are likely to ensue if a breakup were proposed (Isaac 2019). Also, since Chinese firms, with the backing of their government, are engaged in a concerted effort to stake out a leadership position in the technologies of the future, it might not be seen as prudent to hobble big tech in the US and EU relative to foreign rivals, while possibly also reducing the level of service to users in these countries. In fact, American big tech firms often cite the national interest in arguing that they should not be tightly regulated (Roose 2019).

However, perhaps due to rising dismay over the perceived indifference of big tech to the social, political, economic, and technological forces it has unleashed, the Justice Department, some state governments, and Congress have embarked on investigations, and lawsuits have also been filed (Smith 2019). The charges against major ICT firms center on their dominance in digital advertising and whether it violates anti-trust legislation. Though limited in scope (privacy, security, and social and political impacts are not specifically included), the very fact that regulation is under consideration in the U.S. [even if the efficacy of the process is in some doubt (McCabe 2019)] has cast a shadow over the immediate future of the largest ICT businesses (Levy 2019). While some CEOs of tech giants have welcomed regulation, partly due to the likelihood of barriers to entry being raised and also because users typically sign off on any new conditions added to terms of service, the uncertainty over the direction of pending inquiries is a source of mounting concern (Wharton 2019). By building bridges to, and partnering with, governments worldwide, while anticipating and guarding against potential problems, big tech could (as Microsoft appears to have done) conform to societal norms and enhance shareholder value (Lohr 2019b; Ratnesar 2019).

Since around the early 1970s, maximizing shareholder value has been the single most important force driving corporate strategies for publicly owned businesses, and digital technology firms have focused predominantly on this criterion of performance. Big tech stock prices and market values soared from the early 2000s, contributing significantly to a long-running bull market before the outbreak of the Covid-19 pandemic. It appears, however, that the seemingly irresistible rise in the value of big tech shares has slowed somewhat (Kramer 2019). Increased monitoring by governments and activists, critical media coverage, concerns over accelerating automation, threats to privacy and security, the long-term impact of screen addiction on children, and so forth might well be playing a part in investors’ uncertainty over the future of major tech firms (Wursthorn 2019). Adding to shareholder anxiety concerning the prospects of firms like Apple, Google, Amazon, and Facebook is the appointment by the EU of a “digital czar” who has vowed to expand investigations to include data appropriation and misuse, influence in elections, the destabilization of society, and a whole panoply of other practices (Stevis-Gridneff 2019). Shareholders’ concerns over the threats facing big tech might well convince these firms to review the ethical problems associated with their actions.

4.3 User involvement and activism

Ensuring that users’ needs are satisfied without their rights being violated is best done by users themselves. Granted, users are fragmented in their expectations and often unclear about their rights. However, if the bulk of the user population remains indifferent to the kind of slippery slope individuals and societies are now on, no amount of regulation or “self-regulation” can be effective. Whether it is achieved through activism, concerted action by NGOs, consumer boycotts, or adverse publicity, users need to ensure that the aggregated, hidden, and lasting harm done to individuals and to society is mitigated to the extent possible. As behavioral economics suggests, we might be willing to accept and discount future losses, however heavy they might be, in order to experience immediate gratification (Hardisty et al. 2012). Awareness of the perils of technologies which not only have multiple impacts but are also hydra-headed has to start from the toddler stage, which is when many parents introduce their offspring to electronic entertainment and communication. For those who have become addicted to high tech services, weaning them off such behavior is far more challenging. Digital addiction afflicts millions of people and may need to be treated as a psychological condition, with the type of attention given to alcoholism or drug addiction (Glatter 2018; Gregory 2018). Privacy violations, security breaches, and the posting of false and harmful information also need user (bottom-up) action to complement voluntary corporate and regulatory efforts. The role played by activists such as Lady Kidron in galvanizing public opinion (Singer 2019) to restrain the power of social media, particularly where children are concerned, is an instance of the type of action needed. To achieve any traction in the effort to restrain the overweening ambitions of, and the looming risks associated with, big tech, activism of this sort has to snowball: first, to create awareness, and then, to mobilize for action.
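
The discounting at work here is commonly modeled in hyperbolic form; the expression below is that standard textbook form, offered as an illustration rather than as Hardisty et al.’s own specification.

```latex
% Standard hyperbolic discounting (the textbook form, not Hardisty et al.'s
% own specification): the present weight V of a loss of magnitude A arriving
% after delay t, with impatience parameter k > 0, is
\[
  V \;=\; \frac{A}{1 + k\,t},
\]
% so for a sufficiently impatient user (large k) or a sufficiently distant
% harm (large t), even a heavy future loss of privacy or autonomy is weighed
% as almost nothing against an immediate "free" gratification.
```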

5 Conclusion

The ability to store, analyze, and act upon immense quantities of data, combined with advances in machine learning, presents us with unprecedented opportunities to strive for the betterment of humanity. Artificial intelligence based on ever-deeper neural networks has the capability to transform medical care, revolutionize transportation, enhance security using sensory recognition, provide customized education, and so on. Its impact on the world economy alone could amount to an astounding $17 trillion or more. While initially dependent on human inputs and algorithms, deep learning could lead to machines which teach themselves based on the data they are fed. As the scale and scope of artificial narrow intelligence proliferate apace, we may, in a few decades, reach the Singularity of Artificial General Intelligence and Superintelligence—a stage at which we may have little control over goals or decision-making and, even if we did, would hesitate to second-guess our creations. We may, however, be nearing a ‘pre-singularity’, the point at which decisions on how much control to cede, and what moral limits need to be in place, are taken out of our hands. As the IoT and AI become integral to our everyday lives, our powerlessness relative to big tech and authoritarian governments may become irreversible. It is imperative that we reflect fully on the extent to which we are vesting our technologies with power over our lives, both now (addiction, impacts on cognition, the welfare of children, etc.) and into the future. The immediate benefits should not blind us to the extent to which individual rights, social justice, and the common good are likely to be harmed. Complicating the assessment of the societal ramifications of technology in the long run is the fact that the AI/5G race between the big tech firms of the United States and China is one in which neither side is likely to pause lest the other gain an immediate advantage. The fact that China’s big tech firms (Baidu, Alibaba, and Tencent being the most prominent) are viewed as arms of the state adds a tinge of nationalism to the urgency with which AI is being developed. The impact on the world of the 5G/AI revolution could well be as significant as the start of the nuclear age. In addition to regulation, the use of an ethical calculus, and the safeguarding of users’ and other stakeholders’ interests, corporate and national leaders need to negotiate and set boundaries for how AI will be used (Scharre 2019), rather than engaging in a race to gain the most financially, militarily, and politically.