Introduction to Web2 and Web3

We begin this chapter with a hypothetical question: Would you let a stranger live in your home and eavesdrop on everything you and your family and friends say?

Most readers will instinctively answer, “No!” The thought of a stranger invading the privacy of our homes evokes scenes from George Orwell’s novel, Nineteen Eighty-Four. Digital personal assistant devices like Amazon’s Alexa, Google’s Assistant, and Apple’s Siri, however, do just that—they are always listening. While the companies that sell these devices promise that their products only process information when the devices detect their wake-up word, all of these devices have false accepts, meaning they record conversations even though the person did not invoke the wake-up word. Whistle-blowers and employees report that humans have access to these recorded conversations. Apple contractors, for example, said that they routinely hear drug deals, medical details, and people having sex from Siri devices (Hern, 2019). Humans working for Google read sample recordings, including false accepts (Aten, 2019). If we think false accepts are isolated incidents, consider the US Federal Trade Commission’s 2023 enforcement actions against Amazon’s Alexa and Ring devices. Amazon agreed to pay $30.8 million for violating users’ privacy, including through misleading data retention practices, overbroad employee access to user data, and inadequate cybersecurity practices (Nahra & Evers, 2023).

These stories of digital assistants serve as powerful examples for this chapter. They highlight a defining attribute of the version of the Internet known as Web2. Whereas Web1 was the era of the Internet that enabled web browsing and online shopping (e-commerce), Web2 introduced easy content generation for users via social media applications like Facebook, YouTube, and Twitter. Web2 is the dominant version of the Internet used for economic and social activities.

With Web2, individuals rely on centralized platform providers to access online services. Many centralized platform providers collect user data in exchange for free or low-cost services, apply artificial intelligence (AI) to predict behavior, and then sell these predictions to other organizations. Amazon’s e-commerce site, Bing, Facebook, Google Search, Instagram, Snapchat, TikTok, Twitter, WeChat, YouTube, and any other applications that display advertisements work this way. The economic model is called surveillance capitalism and it eviscerates user privacy (Zuboff, 2019).

Identity theft is another common occurrence with Web2. Our online relationships with centralized platform providers are managed with accounts and passwords, which are stored—along with our personal data—in the provider’s centralized databases. These databases then become prime targets for cyberthieves. The United States (US) Bureau of Justice Statistics estimated that 23 million US residents were victims of identity theft in 2021, costing victims more than $56 billion (US Bureau of Justice Statistics, 2021).

Given Web2’s downsides of privacy invasion and identity theft, why do users willingly give these centralized platform providers their personal information? Information Systems (IS) scholars have investigated this question.

IS scholars study how digital information systems are designed and used by individuals, organizations, and communities. In the context of privacy, IS scholars examine information privacy, defined as “the ability of individuals to control the terms under which their personal information is acquired and used” (Culnan & Bies, 2003, p. 326). IS research escalated when the threats to information privacy skyrocketed during the rise of Web2 in the 2000s. IS scholars have conceptualized and examined information privacy in terms of an individual’s information privacy concerns and have investigated why these concerns do not prevent individuals from disclosing personal information to centralized platform providers; the phenomenon is called the privacy paradox. As we discuss in this chapter, IS scholars have four common explanations for Web2’s privacy paradox: privacy calculus, privacy fatigue, trust, and lack of choice.

We, along with other IS scholars, take a different approach to user privacy. We believe that the root cause of information privacy leakages stems from the design of Web2. Both unauthorized security breaches, such as identity theft, and the intentional sharing of user data under the surveillance capitalism model largely occur because the software and user data are controlled by centralized platform providers. Web3 aims to fix these issues of Web2. Web3 is the era of the Internet that is based on decentralized infrastructures.

Let’s begin with the why of Web3. Many people believe that the decentralization of economic and social activities is the best way to increase individuals’ privacy, power, and cybersecurity (Allen, 2016; Nakamoto, 2008; Preukschat & Reed, 2021; World Economic Forum, 2020). Decentralized activities discourage abuses of power and ideally promote more inclusive participation, unity around decisions, and individual empowerment, freedom, and privacy. Since most of our modern economic and social activities happen online, decentralized activities need decentralized digital platforms, applications, and governance—thus the creation of Web3 (Lacity & Lupien, 2022).

Bitcoin was the first Web3 application, launched in 2009. Bitcoin was invented by Satoshi Nakamoto―a pseudonym for a person or group whose identity remains unknown to this day. Nakamoto imagined a world where people could safely, securely, and anonymously transfer value directly with each other (1) without using government-issued currencies; (2) without relying upon trusted third parties (TTPs) like banks or brokers; and (3) without the need to reconcile records across trading partners. Bitcoin has its roots in Libertarian and Cypherpunk values, which aim to create social and political change by circumventing governments and large financial institutions through privacy-enhancing technologies (Lacity & Lupien, 2022).

While most people are familiar with Bitcoin, which is a cryptocurrency, we highlight Ethereum because Ethereum is the most commonly used platform for deploying decentralized applications. Whereas Bitcoin is just a peer-to-peer payment system (it does not do much else), Ethereum was designed to allow anyone to use it as a platform for deploying decentralized applications. Launched in 2015, Ethereum is a network of dispersed computers, with no master computer in charge, meaning no single company or individual can alter the decentralized application or the transactions that are stored on its distributed ledger, also called a blockchain.Footnote 1 The distributed ledger is a time-stamped, permanent record of all valid transactions that have occurred within the Ethereum network. The ledger also stores copies of decentralized applications (software). The ledger is replicated on all the computers in the network, which is why it is called distributed. Anyone with access to the Internet can view it (see https://etherscan.io/). No personally identifiable information (PII) is stored on Ethereum’s distributed ledger.

Users can be confident that the Ethereum network will only allow valid transactions. As with all Web3 applications, two validations are always performed. First, users prove they own the asset with the private key stored in their digital wallet (explained later in the chapter). Second, the computers on the Ethereum platform check the shared distributed ledger to make sure the user has enough funds in their wallet to cover the transaction, thus preventing the double spend.
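To make these two checks concrete, here is a minimal sketch in Python using a toy in-memory ledger; the function name, addresses, and balances are hypothetical, the signature check is stubbed out (a runnable signing example appears later in the chapter), and this is not Ethereum’s actual client code.

```python
# Toy sketch of the two validations performed before a transfer is accepted.
ledger_balances = {"0xSenderAddress": 5.0, "0xReceiverAddress": 0.0}  # hypothetical addresses

def is_valid_transfer(sender, amount, signature_is_valid):
    # Check 1: the digital signature proves the sender's wallet holds the matching private key.
    if not signature_is_valid:
        return False
    # Check 2: the shared ledger must show enough funds at the sender's address (prevents double spend).
    return ledger_balances.get(sender, 0.0) >= amount

print(is_valid_transfer("0xSenderAddress", 2.0, True))   # True: accepted
print(is_valid_transfer("0xSenderAddress", 9.0, True))   # False: insufficient funds, rejected
```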

In keeping with the values of Web3, Ethereum is not owned or controlled by any company. Changes to the Ethereum software are based on meritocracy, meaning that ideas are evaluated based on their quality. Anyone can suggest improvements by submitting an Ethereum Improvement Proposal (EIP). The whole Ethereum community (computer operators, developers, and users) can vote on a proposal based on its merit. Any person with access to the Internet can review the proposals (see https://eips.ethereum.org/). By 2023, over 1000 EIPs had been submitted, with more than 80 finalized. The non-profit organization Ethereum.org coordinates the process; it is led by a core team in Bern, Switzerland, with contributions from thousands of people across the world (https://ethereum.org/en/about/#).

In addition to Web3’s increased privacy, decentralized applications also improve cybersecurity. Decentralized applications that run on platforms like Ethereum are resilient to cybersecurity attacks because the attack surface is diffused across many locations. Over 500,000 computers operate the Ethereum platform in 80 countries, each with its own identical copy of the ledger (Liu, 2023). The only way to infiltrate the platform is to commandeer more than 50% of the computers. So far, the Ethereum platform has never been overtaken!Footnote 2

Some concerned law enforcement agencies have argued that Web3’s privacy enhancing features invite more criminal activity than Web2 applications. So far in Web3’s development, the percentage of crime in the Web3 market has not been significantly different than the percentage of crime in Web2. Both markets see between 1 and 5% of illegal activity (Chainanalysis, 2023; Hopper, 2023; United Nations, 2023). The expectation is that Web3’s privacy enhancements will reduce opportunities for identity theft that exist today with Web2.

While it is still early days for Web3, education is an important driver of adoption. This chapter educates readers on how Web3 enhances information privacy (and cybersecurity) compared to Web2. By the end of this chapter, readers will be familiar with the Web3 concepts of digital wallets and distributed ledgers. We illustrate concepts by comparing Web2 versus Web3 applications for browsing, financing, storage, and visiting virtual worlds known as metaverses. While our Web3 application examples focus on anonymity, many services require confidentiality, meaning that data is viewable by authorized parties. Web3 applications are emerging that allow for confidential (not anonymous) transactions that enhance privacy while ensuring user credentials are valid.

Information Systems (IS) Scholars’ Approach to Privacy Research

First, we explain how information systems (IS) scholars approach privacy research. IS scholars primarily conceive of—and study—privacy in terms of digital information privacy. Our field studies how computers are used to collect, process, store, and retrieve personally identifiable information (PII) that can describe, characterize, identify, or otherwise verify an identifiable legal person or a group of people (AICPA/CICA, 2020). A person’s name, home address, identification (ID) number (like a national ID or an employee ID), criminal record, healthcare record, gender, age, and religious affiliation are examples of PII. PII also includes the individual’s data associated with their online activities, such as their computer’s unique Internet Protocol (IP) address, email address, logon account name, and website cookies that remember an individual’s online activities.

Privacy Paradox

Hundreds of IS studies on information privacy have been published. Many IS scholars have surveyed individuals about their information privacy attitudes, privacy concerns, and information sharing intentions. Overall, individuals are deeply concerned about online privacy (Bélanger & Crossler, 2011; Mitchell & El-Gayar, 2022; Pavlou, 2011; Rath & Kumar, 2021; Smith et al., 2011). This concern, however, does not prevent individuals from disclosing PII online. This inconsistency between attitude and behavior is called the privacy paradox (Li et al., 2017; Pavlou, 2011; Zhu et al., 2021).

The privacy paradox has been found to exist in general Internet use and in specific instances of online applications (e.g., Dinev & Hart, 2006). IS scholars have studied the privacy paradox in the contexts of social media (e.g., Mosteller & Poddar, 2017), online shopping (e.g., Li et al., 2017), online reviews (e.g., Mosteller & Mathwick, 2014), healthcare applications (e.g., Zhu et al., 2021), and mobile applications (e.g., Pentina et al., 2016).

Although the IS literature is too rich to summarize adequately here, in general, IS scholars theorize that the privacy paradox can be explained by either a rational decision-making process called privacy calculus or by an emotional decision-making process called privacy fatigue. IS scholars also explain the privacy paradox with additional variables such as trust and lack of choice (see Fig. 6.1).

Fig. 6.1

An overview of Information Systems research on information privacy: the privacy paradox (privacy concerns about sharing and disclosing PII with online parties) and its four explanations of privacy calculus, privacy fatigue, trust, and lack of choice (Image credit The authors)

Privacy Calculus

Ackerman (2004) was one of the first scholars to theorize that users perform a privacy calculus to determine whether the benefits received from revealing personal information are worthwhile. This theory assumes a rational view of the individual’s decision-making process. In the context of social media, researchers have found that users weigh the benefits of enjoyment, affirmation, and connection against the risks of privacy concerns (Mosteller & Poddar, 2017; VanMeter et al., 2015; Zalmanson et al., 2022). With respect to posting online reviews, users weigh the benefits of pleasure associated with posting a review, gaining knowledge, feeling connected, and promoting one’s opinions against the risks of privacy concerns (Mosteller & Mathwick, 2014). In terms of trusting the Internet for e-commerce, researchers have found that an individual’s personal Internet interest outweighed their privacy risk perceptions in the decision to disclose personal information. The authors conclude, “These findings provide empirical support for an extended privacy calculus model” (Dinev & Hart, 2006, p. 61).

Support for the privacy calculus theory is found across cultures. For example, one survey of 106 American and 120 Chinese millennials found support for privacy calculus theory in the context of users’ intentions to download and deploy mobile applications that require access to personal information. Individuals in the study valued the informational and social benefits provided by the application more than they valued their privacy (Pentina et al., 2016). Another study of 422 US and 889 Italian users found support for privacy calculus theory in both samples, although the Italians had a lower propensity to trust and higher privacy concerns than the Americans (Dinev et al., 2006).

Privacy Fatigue

In contrast to the privacy calculus theory that focuses on rational explanations of the privacy paradox, privacy fatigue theory focuses on human emotions.

Many studies show that individuals experience privacy fatigue, defined as a user’s feelings of exhaustion, cynicism, helplessness, and powerlessness to protect their data privacy, often to the point where individuals feel defeated (Tian et al., 2022). According to the theory, privacy fatigue leads individuals to disclose personal information.

In one study, data from 324 Internet users found that high levels of privacy fatigue were positively related to intentions to disclose personal information. The authors concluded, “repeated consumer data breaches have given people a sense of futility, ultimately making them weary of having to think about online privacy” (Choi et al., 2018, p. 42).

One literature review of 18 academic studies on privacy fatigue summarized the antecedents and consequences of privacy fatigue (van der Schyff et al., 2023). Privacy concerns, lack of knowledge, information overload, loss of control, and fear of privacy invasion were the major antecedents (precursors) of privacy fatigue. Self-disclosure, privacy burnout, privacy resignation, and mistrust were the major consequences/outcomes of privacy fatigue.

Trust

IS scholars have also theorized that trust may offset privacy concerns. IS scholars for decades have found trust to be a core variable for explaining individual uses of information systems. The IS discipline does not have a standard definition of trust, although IS scholars generally agree that trust entails the presence of a human subject who forms a trust perception about the object of the subject’s trust, such as another person, organization, or technology. Under circumstances of risk, trust is a subject’s psychological belief that an object of trust will discharge their obligations as expected in a specific context (Lacity et al., 2023a). Initially, in the mid-1990s, the concept of trust helped scholars understand individuals’ willingness to use the Internet for shopping (e.g., Jarvenpaa & Todd, 1996). In the years that followed, trust has been examined in many other contexts including blogging, e-government, e-healthcare, mobile applications, and social networks (Lacity et al., 2023a). Trust is more generally understood as a key determinant of people’s willingness to use and rely on IS (Lacity et al., 2024; Schuetz et al., 2023).

Privacy concerns, the willingness to disclose PII, and trust have complex relationships. Some IS studies found direct relationships while other studies found moderating or mediating relationships.

A direct relationship involves two variables that are closely associated. For example, one study of 369 respondents found that high levels of Internet trust were associated directly with a willingness to disclose PII (Dinev & Hart, 2006).

A mediated relationship involves three variables, with the mediating variable explaining how the other two variables are related. For example, one study found that users who are concerned about privacy will disclose personal information if they trust the website (Mosteller & Poddar, 2017). In this example, trust mediates the relationship between privacy concerns and the willingness to disclose personal information.

A moderated relationship also involves three variables, with the moderating variable affecting the strength of the relationship between the other two variables. For example, one laboratory experiment with 667 individuals found that privacy concerns moderated the relationship between privacy assurance mechanisms (like a Website’s policy statement) and the user’s trust in the website (Bansal et al., 2015).

IS scholars also study institutional safeguards. Institutional safeguards are practices and policies centralized platform providers use to increase users’ trust in and willingness to disclose PII (Bansal et al., 2015; Boritz & No, 2011; Guo et al., 2021). Institutional safeguards that increase trust include assurance services, escrow services, privacy seals, third-party endorsements, and guarantees. Institutional safeguards that increase information security include encryption, certificate authorities (to prove you are at a legitimate website), and tools that authenticate users, such as multifactor authentication (Mitchell & El-Gayar, 2022). Some centralized platform providers use CAPTCHAs, which are tasks to prove the user is human and not a software robot, such as asking a user to identify a word that appears within a blurry image. Some IS scholars argue that institutional safeguards are not enough; organizations should also create a culture of privacy that begins with top management (Culnan & Clark-Williams, 2009).

Lack of Choice

Lack of user choice is another explanation for the privacy paradox. As Onora O’Neill, philosopher and former president of the British Academy, observed about why people rely on institutions they mistrust, “We cannot opt out of government, or the legal system, or the currency even if we have misgivings” (O’Neill, 2006, transcript of BBC radio broadcast). Similarly, the privacy paradox exists in part because users have no choice. Users are forced to consent to a centralized platform’s privacy policy or they can’t use the application. Once users agree to a privacy policy—which is often pages long and filled with legalese—they cannot easily backtrack. Individuals who try to protect themselves by deleting their accounts cannot do so because they don’t have create, read, update, and delete (CRUD) rights—the centralized platform providers do. Individuals, for example, cannot directly delete their Facebook accounts because Facebook has the CRUD rights; individuals must trust that Facebook will execute their requests.

Readers interested in learning more about IS scholarship may consult several literature reviews on information privacy (e.g., Bélanger & Crossler, 2011; Boritz & No, 2011; Rath & Kumar, 2021; Smith et al., 2011; van der Schyff et al., 2023).

The four explanations of the privacy paradox—privacy calculus, privacy fatigue, trust, and lack of choice—are all by-products of Web2’s centralized design. Web3 bypasses the need for centralized platform providers, and thus reduces and even eliminates the need to disclose PII for many types of applications.

Web2 and Web3 Explained

Readers are likely familiar with how Web2 applications work for online searching, shopping, banking, data storage, social media, and other services. Users access a Web2 application with an account and password and must disclose any additional PII required by the centralized platform provider. An online banking application, for example, may require a national ID (such as a driver’s license or passport), a credit report, and proof of employment in addition to an account and password in order to verify consumer identity at a high threshold. Readers probably understand that the bank governs the software that provides the service. The bank’s software processes user requests and stores transactions on a database which is, again, governed by the bank. Most Web2 applications work this way (see left column of Fig. 6.2).

Fig. 6.2

Fundamentals of Web2 and Web3: a two-column comparison of the user access and PII data required to connect and transact, the location and governance of software and data, and the economic model of Web2 and Web3 (Image credit The authors)

Web3 applications work differently; they aim to protect anonymity, the ability to conceal a person’s identity. Individuals can choose to be totally anonymous, pseudonymous, or identifiable (Smith et al., 2011). Rather than accounts and passwords, users access Web3 applications with a digital wallet—no PII is required for many Web3 applications. Rather than a centralized platform provider storing transactions on a centralized database, transactions are copied and stored on tens or even hundreds of thousands of peer computers, meaning no single computer is in control (see right column of Fig. 6.2).

Let’s take a closer look by comparing Web2’s and Web3’s user access and PII data required, location and governance of software and data, and the primary economic model of each.

User Access and PII Data Required

For most of the history of the Internet, users have accessed online services by creating an account and password with the centralized platform provider. Users need an account and password because the Internet was initially designed without an identity layer.

The Internet traces its roots to ARPANET, a computer network designed by the US Defense Advanced Research Projects Agency (DARPA) to share information among researchers who already knew and trusted each other. As more computers were added and as other networks emerged (ARPANET was not the only one), a standard way to connect computers was needed. Two DARPA scientists―Vint Cerf and Robert Kahn―provided just that when they developed the Transmission Control Protocol/Internet Protocol (TCP/IP) in the 1970s (Lacity & Lupien, 2022).

TCP/IP became a standard in 1982 and it remains the Internet’s primary protocol today. Every device connected to the Internet has a unique IP address. The University of Arkansas network, for example, is assigned the IP address block 130.184.0.0/16. While TCP/IP provides a way to identify the machines connected to the Internet, the standards do not verify the individuals who are sending messages from those machines. Governments and organizations needed to know who was using those devices, leading us to the first era of identity on the Internet, known as the centralized identity model. Centralized identity models are account based, requiring users to create logon IDs and passwords. Accounts and passwords date back to the 1960s when multiple people were sharing the same computer (McMillan, 2012). Web2’s user access is a legacy of this history.

Web2’s accounts and passwords. Web2’s centralized identity model gives centralized platform providers control over a user’s data (Preukschat & Reed, 2021). Even when users “delete” an account, all they have done is revoke their access privileges, as it is up to the centralized platform provider to decide when an account is deleted from their databases. Also, online identities are not portable across webpages (Allen, 2016). The proliferation of accounts and passwords is another limitation of the centralized model. By 2015, the average United Kingdom (UK) Internet user had 118 online accounts; by 2017, the average US Internet user had 150 online accounts (Caruthers, 2018).

More recently, some organizations invite users to access multiple sites through a single account managed by a centralized platform provider such as Facebook, Google, Amazon, or LinkedIn, an approach called the federated identity model. While the federated model reduces the number of accounts users need to manage, it increases the amount of PII collected and used by centralized platform providers―resulting in fewer organizations holding much more PII.

Web3’s digital wallets. Privacy experts have struggled for years to come up with a better way to establish identities and relationships while preserving privacy, leading to the Web3 solution for Internet identity: the decentralized identity model. Web3 aims to empower individuals to control their own identities, credentials, and assets. Web3 replaces usernames and passwords with peer-to-peer connections via digital wallets. To transfer money or other assets, two people must each have a digital wallet (see right column of Fig. 6.2).

While digital wallets are easy to use, there’s a lot of sophisticated technology under the hood of a digital wallet. A digital signature is the most important feature to understand because if a user doesn’t protect the private key that is used to generate a digital signature, the user risks losing/revealing all their digital assets. Digital signatures ensure the individual submitting the transaction (sender) is authentic, the transaction was not tampered with in transit, and the receiver cannot later deny getting paid (Lacity & Lupien, 2022).

Here’s how a digital signature works: A digital wallet generates a unique pair of numbers that are mathematically related, called a private–public key pair. A private key is a sort of super-safe password created and managed by the digital wallet. Here is an example of a private key:

  • DDA78BA47C7D3A1A49AA02E6C1CF7A30691603827E7DACE3C4EE63CA0D26DAE2

The key above is a very large number, expressed in hexadecimal, which is why we see letters A through F. A super-secure algorithm uses the private key to create its mathematical mate, called a public key (also called an address or public key address). A typical public key address looks like:

  • 0x77300C71071eCa35Cb673a0b7571B2907dEB77C7

Although the private key will always generate the same public key address, it’s nearly impossible to guess a private key if only given the public key address. Today’s digital computers would take millions of years to randomly guess a private key that matches a public key address (Sharma, 2017). That’s the power of cryptography, defined as “a method of protecting information and communications through the use of codes so that only those for whom the information is intended can read and process it” (Techtarget.com).
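As a minimal illustration of a key pair, the sketch below uses the open-source Python ecdsa package (an assumption on our part; this is not Ethereum’s wallet code) to generate a private key and derive its public key on the secp256k1 curve used by Bitcoin and Ethereum. Real wallets additionally hash the public key to produce the shorter address format shown above.

```python
# Sketch: generating a private-public key pair (requires: pip install ecdsa).
from ecdsa import SigningKey, SECP256k1

private_key = SigningKey.generate(curve=SECP256k1)   # a random 256-bit number; kept secret in the wallet
public_key = private_key.get_verifying_key()         # mathematically derived from the private key

print(private_key.to_string().hex().upper())   # 64 hexadecimal characters, like the example above
print(public_key.to_string().hex())            # safe to share; infeasible to reverse into the private key
```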

When the user wants to transfer value out of the wallet, the digital wallet automatically uses the private key to digitally sign the transaction. The digital signature works in such a way that all the computers on the platform can be confident that ONLY the wallet with the private key associated with this public key address could have generated the transaction.

The private key is stored ONLY in the user’s digital wallet. When a transaction takes place on the decentralized platform, ONLY the public key address is stored on the distributed ledger.
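The sketch below, again using the ecdsa package as an illustration rather than Ethereum’s actual transaction format, shows how a wallet signs with the private key and how any computer can verify the signature using only the public key.

```python
# Sketch: signing and verifying a transaction (requires: pip install ecdsa).
from ecdsa import SigningKey, SECP256k1, BadSignatureError

wallet_key = SigningKey.generate(curve=SECP256k1)        # lives only in the sender's digital wallet
public_key = wallet_key.get_verifying_key()              # shared with the network

transaction = b"transfer 1.5 tokens to 0x77300C71071eCa35Cb673a0b7571B2907dEB77C7"
signature = wallet_key.sign(transaction)                 # created inside the wallet

# Any validating computer checks authenticity with the public key alone:
try:
    public_key.verify(signature, transaction)
    print("valid: only the matching private key could have produced this signature")
except BadSignatureError:
    print("rejected")

# If the transaction is tampered with in transit, verification fails:
try:
    public_key.verify(signature, b"transfer 999 tokens to another address")
except BadSignatureError:
    print("tampered transaction rejected")
```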

Users must remember two important things about digital wallets. First, digital wallets enhance privacy. Parties can connect to each other without revealing PII. For many Web3 applications, two parties need only share public key addresses. As we saw from the example above, the public key address is just a large number; it doesn’t tell us anything about the individual! Moreover, a user’s digital wallet can generate hundreds of key pairs. So, unlike a bank account number that is used over and over again, a Web3 digital wallet can generate a new public key address for each transaction to help enhance privacy.

Second, wallet users must protect their digital wallets! Most Web3 heists happen at the vulnerable access points of digital wallets where private keys are stored. Once a hacker steals a private key, they control the assets and can easily transfer funds to their own digital wallet. Because users store their digital wallets on their own devices, users should:

  • keep very few digital assets in “hot” wallets that are connected to the Internet;

  • keep most digital assets offline in a “cold” wallet; keep the cold wallet backed up on multiple devices;

  • print wallet recovery keysFootnote 3 on paper and store them in a vault. Or better yet, divide recovery keys across multiple pieces of paper and store parts in different secure locations;

  • keep wallet software updated.

Soon, social recovery key sharding will be another common approach. With sharding, a user instructs the wallet to divvy up the recovery keys among people the user trusts, and pieces of the key are stored in those people’s wallets. More backup and recovery methods are being developed because protecting the private keys is paramount to the entire Web3 model.
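As a rough illustration of the idea, the sketch below splits a recovery key into pieces using a simple n-of-n XOR scheme (our simplification: every piece is needed to recombine the key). Production wallets typically use threshold schemes such as Shamir’s Secret Sharing so that only some of the trusted contacts are needed for recovery.

```python
# Sketch: splitting a recovery key so no single person holds the whole secret.
import secrets

def split_key(secret: bytes, n: int) -> list:
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]   # n-1 random pieces
    last = secret
    for share in shares:
        last = bytes(a ^ b for a, b in zip(last, share))                # final piece = secret XOR all others
    return shares + [last]

def recombine(shares: list) -> bytes:
    secret = shares[0]
    for share in shares[1:]:
        secret = bytes(a ^ b for a, b in zip(secret, share))
    return secret

recovery_key = secrets.token_bytes(32)          # stand-in for a wallet's recovery key
pieces = split_key(recovery_key, 3)             # give one piece to each of three trusted people
assert recombine(pieces) == recovery_key        # all pieces together restore the key
```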

Location and Governance of Software and Data

For any online application, we need software to process transactions and a record of every transaction.

Web2’s centralized software and databases. With Web2 applications, every party manages its own systems of record (software and databases, including ledgers that track debits and credits). A monthly bank account statement is an example of a report from a bank’s ledger. A bank statement lists all the receivables coming into an account (credits) and the payments made out of an account (debits). The summation of all transactions over time results in the account balance. Bank customers must review and reconcile any differences between the bank’s records and the customer’s own records. The bank has the power in this relationship because banks govern the official records accepted by regulators. The same is true of other centralized platform providers that store user transactions on their centrally controlled databases.

Web3’s distributed ledger. Web3 uses a different bookkeeping method, called triple-entry accounting, in which every transaction has three entries: the debit in the sender’s digital wallet, the credit in the receiver’s digital wallet, and the public receipt stored on a shared distributed ledger. Anyone with access to a Web browser can view the ledger, but all they will see are the sender’s public key address, the receiver’s public key address, and the amount of the transfer—no PII is stored on the distributed ledger.

The computers in the network constantly reconfirm the ledger to make sure no party tampers with the records after-the-fact. If anyone cheats, the other computers in the network automatically ignore it. Parties no longer need to reconcile records because every party agrees “this is what transpired.”
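A simplified sketch of the public receipt follows; the field names are illustrative rather than Ethereum’s actual transaction format, and the receiver address is hypothetical. Note that the record contains only public key addresses and an amount—no PII.

```python
# Sketch: the public "receipt" portion of triple-entry accounting.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)          # frozen: a stored receipt cannot be edited after the fact
class LedgerEntry:
    sender_address: str          # the debit side, mirrored in the sender's wallet
    receiver_address: str        # the credit side, mirrored in the receiver's wallet
    amount: float
    timestamp: str

entry = LedgerEntry(
    sender_address="0x77300C71071eCa35Cb673a0b7571B2907dEB77C7",
    receiver_address="0xReceiverAddress",        # hypothetical address
    amount=1.25,
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(entry)   # every computer in the network stores an identical copy of this record
```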

Economic Model

Web2’s primary economic model is surveillance capitalism, which was explained in the introduction. Here, we focus on Web3’s economic model.

Web3’s token economics. Web3 is based on a new economic model called token economics in which individuals earn digital tokens for engaging in desired behaviors. On the right column of Fig. 6.2, we see a network of computers, with each computer operating an identical copy of the software and storing an identical copy of the ledger. Why would someone use their computer for this purpose? Because they are paid to do so in the form of digital tokens! In the Bitcoin network, for example, computer operators are called “miners” and each computer competes to validate and add the next set of transactions to the shared ledger. The winning computer earns a fee in the form of bitcoins. So that’s one part of token economics—using tokens to reward strangers from all over the world to operate software and to store a copy of the distributed ledger.

Tokens are also used for governance. Many Web3 applications have completely decentralized governance through a decentralized autonomous organization (DAO). A DAO is a software program that runs an entire organization automatically based on codified rules. The idea of a DAO is to create a completely independent entity that is exclusively governed by the rules that everyone can see. Holders of governance tokens vote on decisions.
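A toy sketch of token-weighted voting appears below; the holders, balances, and one-token-one-vote rule are illustrative assumptions, and real DAOs such as Aave encode comparable rules in on-chain smart contracts rather than off-chain scripts.

```python
# Sketch: tallying a DAO proposal where votes are weighted by governance token holdings.
token_holdings = {"0xHolderA": 120, "0xHolderB": 40, "0xHolderC": 75}   # hypothetical balances
votes = {"0xHolderA": "yes", "0xHolderB": "no", "0xHolderC": "yes"}

tally = {"yes": 0, "no": 0}
for address, choice in votes.items():
    tally[choice] += token_holdings[address]

print(tally)   # {'yes': 195, 'no': 40} -> the proposal passes
```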

Token economics also applies to many asset classes. Digital tokens can represent fungible assets, like bitcoin or digital versions of fiat currencies (called stable coins). Digital tokens can represent unique assets, called non-fungible tokens (NFTs), like artwork or an event ticket. Digital tokens may be transferable to others, such as trading NFTs. Digital tokens may be non-transferable, like tokens that represent credentials (e.g., a university diploma) or voting rights. Non-transferable tokens are sometimes called “soul bound” tokens because they are tied to a single person.

Web2 and Web3 Application Examples

In this section, we learn how any user can earn digital tokens by watching online advertisements, making loans, selling land in a metaverse, and renting out excess computer storage.

We examine four examples of Web2 and Web3 applications. For web browsing, Chrome is a Web2 version and Brave is a Web3 version. For borrowing and lending, traditional banks use a Web2 model and Aave is a Web3 version. For metaverse, Meta is a Web2 version and Decentraland is a Web3 version. For file storage, Dropbox is a Web2 version and Filecoin is a Web3 version. The first three decentralized applications are deployed on Ethereum: Brave, Aave, and Decentraland. We also cover Filecoin, which uses a different decentralized platform, to show that Ethereum is not the only decentralized platform in use. Since most people are familiar with Internet search engines for web browsing, we begin this section by comparing Google’s Chrome (a Web2 application) with the Brave Browser (a Web3 application).

Web Browsing: Chrome vs. Brave

Web2’s Chrome. Google’s Chrome is a Web2 application for web browsing. Released in 2008, users access Chrome for free. Google monitors user activity and applies AI to target advertisements presented as search results. In 2022, advertisers paid Google over $162 billion for placing ads in search results (Oberlo, 2023).

Web3’s Brave. In contrast to Chrome, the Brave web browser shifts a high percentage of advertising revenues from a centralized platform provider to individuals. Brave blocks all advertising and web tracking to protect privacy, but users can activate a feature that directly compensates them for watching ads online. Users can earn basic attention tokens, called BAT. After downloading and configuring the Brave browser, the user can earn BAT for clicking on a webpage, worth about USD 0.02 in June of 2023. While this is small, it quickly adds up to create seeds of disruption.

When a user watches an advert, 70% of the BAT that the advertiser credited to the application is transferred to the user’s digital wallet and 30% goes to Brave Software, the company that maintains the platform. The only things stored on Ethereum’s public ledger are the advertiser’s public key address (debit), the individual’s public key address (credit), Brave Software’s public key address (credit), and the amounts coming from and going into these addresses.
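Worked out with a hypothetical figure, the split looks like this:

```python
# Sketch: the 70/30 revenue split for one viewed ad (the BAT amount is hypothetical).
ad_credit_bat = 0.10                          # BAT the advertiser credited for one ad view
user_share = ad_credit_bat * 0.70             # 0.07 BAT goes to the viewer's digital wallet
brave_share = ad_credit_bat * 0.30            # 0.03 BAT goes to Brave Software
print(round(user_share, 2), round(brave_share, 2))
```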

By July 2023, Brave had over 58 million active monthly users and advertisers from 187 countries (Brave.com/transparency), which was about 2% of Chrome’s user base. In 2021, the New York Times rated the Brave web browser as the best privacy browser (Chen, 2021).

Borrowing and Lending: Traditional Bank vs. Aave

Readers are familiar with how to access banking services online. We’ve already discussed that banks control the software and databases that record our money transactions. Let’s look closer at lending money to and borrowing money from a traditional bank.

Web2’s borrowing and lending with a traditional bank. Lenders deposit money in a bank in return for earning interest, but the interest rates they earn on deposits are paltry. In May of 2023, the US national bank average for interest payments on savings accounts was only 0.36% (Bond, 2023). If you deposit $1000 in a savings account, after a year, you will only earn $3.60 in interest! So why do it? Some depositors like to keep cash on hand in case of emergencies, and a bank is a relatively safe bet. In the US, banks must be insured by the Federal Deposit Insurance Corporation (FDIC) to get a bank charter. The FDIC insures an account holder up to $250,000 per insured bank. So, from a lender’s perspective, traditional banking services offer a relatively safe place to deposit money, but the financial returns are small.

Banks use our deposits to loan money to borrowers. Borrowers pay relatively high interest rates to the bank. In July of 2023, the average borrowing rate was 6.95% (Bankrate, 2023). If a borrower borrows $1000 for one year at 6.95%, the borrower will pay the bank $69.50 in interest. The difference between what the bank pays to depositors and receives from borrowers is called the spread. In our little scenario of depositing and borrowing $1000 for one year, the spread is huge!
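Putting the two figures from this scenario together (simple interest over one year):

```python
# Worked example: the bank's spread on $1,000 over one year at the rates above.
deposit_interest = 1000 * 0.0036    # $3.60 paid to the depositor
borrow_interest = 1000 * 0.0695     # $69.50 collected from the borrower
spread = borrow_interest - deposit_interest
print(round(spread, 2))             # 65.9 -> roughly $65.90 retained by the bank in this simplified case
```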

Banks loan out more money to borrowers than they retain in deposits. It’s called fractional reserve banking and it is allowed because it helps to expand the economy. But there are risks! If a large portion of depositors suddenly demand their money back in cash, the bank could go bankrupt. It’s called a run on the bank, and the history of banking is littered with them. Additionally, borrowers may default on their loans, which happened in droves during the 2008 Global Financial Crisis (Lacity & Lupien, 2022).

Web3 applications for finance, called Decentralized Finance (DeFi), work differently. They require 100% (or more) in reserves, meaning the DeFi application cannot loan out more money than it keeps in reserves. DeFi applications are also non-custodial, meaning that only the users can move funds, i.e., there is no central company or government that can lock users out or deny them access to their assets.

Web3’s Aave. Aave is an example of pure DeFi―it is a completely decentralized set of applications that deals only in cryptocurrencies. All decisions are programmed (no human mortgage broker) and the software is published so people can be confident the system is processing transactions as expected. Aave allows lenders to earn interest on their cryptocurrency deposits and enables borrowers to take out cryptocurrency loans. No PII is needed; a user just sends or receives digital tokens from their digital wallet to an Aave liquidity pool address to loan or borrow money.

The CEO and founder of Aave was a Finnish law student who became fascinated with how Ethereum could be used to disrupt traditional finance. Aave, originally called ETHLend, launched in 2017 on the Ethereum platform. On July 7, 2023, Aave had $8.6 billion worth of crypto deposited (see https://aave.com/).

Lending rates varied from 0.01% to 2.77%; borrowing rates varied from 0.61% to 3.97%. The company, also called Aave, makes money from a small percentage of a loan, about 0.09% (de Isidro, 2023). The spread is much smaller with DeFi than with traditional banks.

Like many Web3 applications, Aave is governed by a community through a DAO. Holders of AAVE governance tokens may vote upon Aave Improvement Protocols (AIPs). AIPs are published on GitHub (https://github.com/aave/aip). AAVE tokens can also be staked to the safety module to provide a type of deposit insurance. As another example of token economics, individuals who stake AAVE tokens earn rewards and fees (Lacity & Lupien, 2023).

Aave is just one of many DeFi applications. Readers are also encouraged to investigate Uniswap, Chainlink, SushiSwap, PancakeSwap, and Maker. On July 7, 2023, the total DeFi market was worth $41 billion (see https://www.tradingview.com/).

Metaverse: Meta vs. Decentraland

For our third side-by-side comparison of Web2 and Web3 applications, we examine the metaverse. A metaverse is a computer-generated environment one visits with an avatar, a digital representation of oneself (see Fig. 6.3b for Mary Lacity’s Decentraland avatar). A metaverse is an immersive experience, particularly when users access the virtual world with virtual reality (VR) headsets.

Fig. 6.3

Web3 applications—Decentraland: a Location of the British Blockchain Association (BBA) plot in the Decentraland metaverse, at (24, −28); b Mary Lacity’s avatar visits the BBA building in Decentraland (Image credit The authors)

Future generations may earn most of their income and spend much of their money in the metaverse. New jobs will emerge, such as virtual real estate agents, virtual fashion designers, virtual security guards, virtual teachers, and others we cannot yet envision (Lacity et al., 2023b).

As the metaverse potentially evolves into one interoperable meta-universe, we must seriously question who we trust to operate it. Do we trust one or a few centralized platform providers to create, control, and govern the access, digital assets, and transactions via privately owned infrastructure and databases (Web2)? Or do we trust decentralized crowds to create and govern the metaverse (Web3)? Web3 metaverses aim to protect privacy better than Web2 metaverses.

Web2’s Meta. Worldwide interest in metaverse escalated in October 2021 when Facebook’s CEO, Mark Zuckerberg, announced that Facebook was changing its name to Meta. In the video announcement, Zuckerberg said, “I believe the metaverse is the next chapter for the Internet.” Zuckerberg had been investing in metaverse prior to this announcement. Facebook bought Oculus, the company that built the Oculus Rift VR headsets, in 2014 for $2 billion. At the time, Facebook promised Palmer Luckey, the creator of the Oculus headsets, that it would not introduce advertisements (ads) on Oculus headsets, but Facebook did introduce ads in 2021 (O’Flaherty, 2021).

Meta’s latest Quest VR headsets collect a massive amount of PII on hand movements, eye movements, facial expressions, audio data, payments, and places visited, and Meta keeps user-generated content such as photos and videos. Readers are encouraged to read Meta’s “privacy” policy.Footnote 4

Web3’s Decentraland. Decentraland is a virtual world launched on the Ethereum network in 2017. Unlike Meta’s metaverse applications, which are accessed with virtual reality headsets, Decentraland is accessed with a web browser, so the user experience is less rich. It is, however, privacy enhancing.

To enter Decentraland, a user must first create an avatar; a user decides the amount of PII to reveal through their choices of designing and naming their avatar. Once an avatar is ready, the user is free to explore the different plots of land.

A location in Decentraland is depicted as an (x, y) coordinate on the map of the entire space. The center square is at address (0, 0). Figure 6.3a shows the address for the plot of land owned by the British Blockchain Association (BBA), on plot (24, −28). The BBA built a two-story building on its plot of land. Mary Lacity’s avatar is “inside” BBA’s building in Fig. 6.3b.

If users want to transact in Decentraland, they need a digital wallet and some of Decentraland’s token called MANA. Anyone can buy virtual plots of land with MANA. However, virtual plots of land near the town center can cost over $2 million worth of MANA (Howcroft, 2021). Users can buy and sell virtual goods and services with MANA; transactions are stored on Ethereum. Decentraland has completely decentralized governance through a DAO.

What kind of metaverse will we create? Web2 has the advantage of a clear path to revenues and it’s hard for Web3 communities to compete with Meta’s $10 billion investment in metaverse. Because of the deep funding, Meta’s VR applications provide vastly more immersive three-dimensional experiences compared to Decentraland’s two-dimensional experiences. As Forbes contributor Alison McCauley writes, “Web3 communities are still looking for business models that reduce the cost of decentralization, which inherently shifts the expense of the network to the people who use it” (McCauley, 2022).

If privacy is our aim, however, Web3 metaverses are superior. Web3’s metaverses share the vision of individuals owning and monetizing their identities (via avatars) and digital assets; of freely coming and going across virtual worlds; of securely executing transactions peer-to-peer with low transaction fees; of having a voice in the governance of the applications; and of protecting the privacy of all (Lacity et al., 2023b).

File Storage: Dropbox vs. Filecoin

Today, most readers use a cloud-based file hosting service, such as Dropbox, Box, Google Drive, or Microsoft OneDrive. These Web2 cloud services allow users to upload files that can be accessed from any device over the Internet after providing a username and password (and perhaps another authentication method). As with all Web2 applications, the centralized platform providers control the software and data. Let’s compare Dropbox (a Web2 application) with Filecoin (a Web3 application).

Web2’s Dropbox. Dropbox Inc. is the company that built and operates Dropbox. Founded in 2007 in San Francisco, California, it grew to one million users by April of 2009. Today, Dropbox has over 700 million users, many of whom use Dropbox for free. Dropbox Inc. earns about $2 billion in annual revenues from users who pay fees for additional storage. Depending on the amount of storage needed, Dropbox costs about $10 a month for up to 2 terabytes of data storage (https://www.dropbox.com/plans).

Who can see your files on Dropbox? According to the Dropbox website, “Like most major online services, Dropbox personnel will, on rare occasions, need to access users’ file content (1) when legally required to do so; (2) when necessary to ensure that our systems and features are working as designed (e.g., debugging performance issues, making sure that our search functionality is returning relevant results, developing image search functionality, refining content suggestions, etc.); or (3) to enforce our Terms of Service and Acceptable Use Policy” (https://help.dropbox.com/security/file-access#). Here again, users are relying on trust placed in a centralized platform provider to refrain from viewing/using their private content, even though technically Dropbox Inc. can view all content.

Web3’s Filecoin. Most of us have excess storage capacity on our computers. What if we could get paid for renting out excess computer storage? That’s the question answered by Protocol Labs, the founder of Filecoin. After six years of development and testing, Filecoin was launched in October of 2020. According to Filecoin’s website, “With Filecoin, anyone can participate as a storage provider, monetize their open hard drive space, and help store humanity’s most important information.” The main benefits of Filecoin are low costs, data resilience, and censorship resistance (Peaster, 2023).

As with all Web3 applications, users need a digital wallet to connect to the Filecoin network. Instead of Ethereum, Filecoin uses the Interplanetary File System (IPFS) as its decentralized platform. IPFS was launched in 2015 by Protocol Labs. As of June 2023, there were nearly 500,000 computers in the IPFS network (Peaster, 2023).

Storage renters and storage providers enter into deals; they find each other on the decentralized Filecoin marketplace. Storage renters pay a small fee in the form of filecoins to storage providers. It costs about the equivalent of $0.38 per month for 2 terabytes of data storage―which is much cheaper than Dropbox (Qian, 2023). Storage renters encrypt data before sending it so that the storage provider cannot read the contents. How can a storage renter trust a storage provider? Each deal is posted to the distributed ledger, while other computers in the network constantly verify that storage providers are storing files correctly.
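To illustrate the renter-side encryption step (a generic sketch, not Filecoin’s actual protocol, assuming the third-party Python cryptography package is installed):

```python
# Sketch: encrypting a file locally before handing it to a storage provider.
from cryptography.fernet import Fernet

key = Fernet.generate_key()                       # stays with the storage renter, never shared
cipher = Fernet(key)

plaintext = b"confidential file contents"
ciphertext = cipher.encrypt(plaintext)            # only this ciphertext is sent to the provider

assert cipher.decrypt(ciphertext) == plaintext    # the renter can always recover the original file
```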

Conclusion

To recap, this chapter has shown that IS researchers have found a privacy paradox: individuals are deeply concerned about information privacy, yet they routinely disclose PII to centralized platform providers. We covered four explanations of the privacy paradox: (1) privacy calculus, where individuals weigh privacy concerns and risks against the benefits of disclosing PII; (2) privacy fatigue, where individuals are emotionally exhausted from trying to protect PII; (3) trust in the centralized platform provider, which encourages users to disclose PII; and (4) a lack of user choice―individuals must disclose PII as required by the centralized platform provider, or they cannot access the services. All of these explanations pertain to Web2 applications, where centralized platform providers govern the software and data.

Next, we introduced readers to Web3’s privacy enhancing approach to online applications. With Web3 applications, users can transact anonymously, meaning individuals can choose to be totally anonymous, pseudonymous, or identifiable. We compared Web2 and Web3 versions of web browsing (Chrome vs. Brave), borrowing and lending (traditional bank vs. Aave), metaverse (Meta vs. Decentraland), and file storage (Dropbox vs. Filecoin). We showed that the Web3 versions are superior in terms of protecting privacy.

While our four Web3 application examples focused on anonymity, many services require confidentiality, meaning that data needs to be viewable by authorized parties. Modern life necessitates that we prove our identities and credentials to others for jobs, airline tickets, border crossings, driver’s licenses, apartment rentals, banking, and more (Cameron, 2005). Web3 applications can accommodate confidential transactions on public decentralized platforms, but doing so requires learning more concepts such as decentralized identifiers, verifiable credentials, and zero-knowledge proofs, which are beyond the scope of this chapter. If these Web3 technologies become widely adopted, we will be able to replace our physical wallets with digital wallets that contain digital versions of our credit cards, memberships, licenses, and other credentials. We will possess and control who sees our digital credentials, which will be another milestone in protecting information privacy (Lacity et al., 2023c).