
AI (artificial intelligence) is flourishing. A centralized AI (CAI) is an AI based on centralized management of personal data (PD): its operator controls many individuals' PD, and the CAI exploits that PD to intervene in their behaviors. The attention economy (AE), on the other hand, is the social state in which economic activities are driven by the need to attract people's attention. CAIs and AE jointly give rise to digital Leninism (Heilmann, 2016) and surveillance capitalism (Zuboff, 2019), diffusing misinformation and biases and distorting people's behavior. This damages not only democracy (freedom of thought, conscience, speech, and choice) but also other social goods (value creation by PD), as shown in Figure 13.1.

Fig. 13.1
A diagram. A I based on centralized management of P D and economy driven by attention of the people lead to digital Leninism, surveillance capitalism, and behavior distortion, which further cause damage to the freedom of thought, conscience, speech, and choice, and value creation by P D.

The danger of centralized AI and the attention economy. Source: Design by author

Digital Leninism is autocratic administration by means of digital technology, in which CAI is the major technology fitting autocracy and is utilized to implement the ideology of Lenin rather than that of Marx, Stalin, or Mao. China's social credit system is a typical example. Unlike commercial credit services—such as the Ant Group's Sesame Credit—this national credit system is inescapable for the Chinese people. They are banned from long-distance travel if their credit scores are bad, they may be exposed on electronic billboards if they commit traffic violations, and so forth. The Chinese government has also employed CAIs, for face recognition among other tasks, to oppress ethnic minorities and democratization movements by picking out targeted people in Beijing, Shanghai, Xinjiang, Hong Kong, and so on.

Surveillance capitalism makes even more massive use of CAIs to monitor and manipulate the behaviors of unaware individuals for the sake of commercial benefits. For instance, an algorithm developed by the American retail company Target to predict female customers' pregnancy and delivery dates from their purchase records successfully identified a pregnant high school girl, and Target sent her coupons for baby clothes and cribs, all while she was unaware that she was being monitored (Duhigg, 2012). In another example, the British consulting firm Cambridge Analytica illegally collected 87 million people's data through Facebook's Friend API and allegedly used a CAI to manipulate swing voters' voting behaviors in the U.K. Brexit referendum and the U.S. presidential campaign, both in 2016, to support Brexit and Donald Trump, respectively (Confessore, 2018).

Both digital Leninism and surveillance capitalism are accompanied by behavior distortions. Fake news, echo chambers, and filter bubbles have distorted beliefs and behaviors since antiquity, but information technologies—AI technologies in particular—have diversified and refined these distortions. For instance, deep-fake technology may make it impossible for viewers to ascertain the authenticity of video footage.

CAIs and AE thus not only threaten freedom of thought, conscience, speech, and choice of action, but also impede value creation by PD. The threat to freedom entails a threat to democracy, as the former is the foundation of the latter. Value creation by PD is impeded because it is restricted by centralized PD management and biased by attention distortions.

Worse still, it is impossible for humanity to jointly confine CAIs and AE, because they create winners. Namely, centralized PD management and behavior distortions may eventually confer upon some companies and countries huge profits and power. In contrast, international collaboration to avoid nuclear wars and global warming is logically possible, because they do not create winners.

The only way to reduce CAIs is to replace them with another technology that creates larger value. Both public and private service providers will voluntarily move from CAIs to the alternative technology if doing so increases their benefit. Some autocratic governments may insist on CAIs, but such a new technology would prevent CAIs' further global spread.

Personal AI (PAI) can serve as such an alternative technology to displace CAI. Each individual will own his or her PAI, which is exclusively dedicated to him or her, manages all his or her PD, and makes full use of that PD to intervene in his or her behaviors more deeply and carefully than other technologies such as CAI can, assisting his or her living and working activities and behavior changes: by far the best personal service ever. PAIs create much larger value for individual users, and thus entail much larger profits for businesses, than CAIs, because PAIs fully utilize PD. Economies of scale hold in this context, assuming a mediator that aggregates the knowledge necessary for personal services and provides it to many PAIs.

Due to this full utilization of the users' PD, however, PAIs could be much more dangerous than CAIs. Strict governance of PAIs and the mediator is therefore indispensable in order to establish their social acceptability and thereby displace CAIs.

On the other hand, AE is inevitable, because humanity's bounded rationality renders attention a necessarily scarce resource. However, each individual should be able to better manage the authenticity and diversity of the information he or she accesses, by means of graph documents together with his or her PAI's assistance. As discussed later, graph documents are documents in the form of typed directed graphs with explicit semantic structures that facilitate composition, comprehension, and learning.

The remainder of this chapter shows that decentralized management of PD (DMPD) serves as the common foundation for PAIs and graph documents, which jointly support freedom and democracy while supporting well-balanced value creation by PD.

Decentralized Management of Personal Data

Value Maximization

In most cases, PD’s added value is maximized by decentralizing its management (DMPD) to the data-subject individuals, as shown in Figure 13.2.

Fig. 13.2
A block diagram. It presents that decentralization leads to value maximization, P D aggregation to data subject leads to utility, and no centralized management by others leads to security with no massive abuse.

Decentralization maximizing value. Source: Design by author

First, PD's utility is maximized by aggregation to the data subjects. The quality of PD aggregated at the individual data subject, as in Figure 13.3, is higher than that of PD scattered across many data controllers. Note that the data controllers do not have to share the same ID for each data subject for the sake of this aggregation. If each data controller simply provides each data subject with the piece of his or her PD it holds, all of his or her PD will be aggregated in his or her hands. Note also that this raises no privacy concerns, because the PD is disclosed to no one other than the data subject himself or herself. Once his or her PD is aggregated in his or her hands, he or she can fully utilize it both for himself or herself (primary use) and for many others (secondary use).

Fig. 13.3
An infographic. It presents a target diagram with data controllers such as hospitals, supermarkets, and fashion pointing to the individual data subject.

Aggregation of PD to the data subject. Source: Design by author
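To make the aggregation step concrete, the following minimal Python sketch (with a hypothetical record format and controller names, not part of PLR itself) merges PD exports that several data controllers have handed directly to the data subject into one store held only by that subject.

```python
from collections import defaultdict

def aggregate_pd(exports):
    """Merge PD exports from several data controllers into one per-subject
    store, grouped by category and sorted by timestamp."""
    store = defaultdict(list)
    for controller, records in exports.items():
        for rec in records:
            store[rec["category"]].append({**rec, "source": controller})
    for category in store:
        store[category].sort(key=lambda r: r["timestamp"])
    return dict(store)

# Hypothetical exports, each delivered directly to the data subject.
exports = {
    "hospital": [{"category": "health", "timestamp": "2023-04-01", "item": "blood test"}],
    "supermarket": [{"category": "purchase", "timestamp": "2023-04-02", "item": "honey"}],
    "fashion_store": [{"category": "purchase", "timestamp": "2023-04-03", "item": "jacket"}],
}
my_pd = aggregate_pd(exports)  # all of the subject's PD, in the subject's hands only
```

No shared user ID across controllers is needed here; each controller only has to know its own customer.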

Secondly, security and privacy are ensured by avoiding centralized management of PD. Decentralizing the management of individuals’ data prevents massive abuse of many people’s PD. In summary, DMPD maximizes the added value of PD by aggregation to raise its utility and decentralization to ensure security and privacy.

Note that centralized PD management is necessary for some public purposes which are not obviously beneficial to the data-subject individuals. Some examples are taxation, public health (such as contact tracing for pandemics), public security, and criminal investigation. Overwhelmingly more often, however, DMPD creates much larger value than centralized management.

Personal Life Repository

The author has developed a decentralized personal data store (PDS) called the Personal Life Repository (PLR) to realize DMPD (Hasida, 2013, 2019, 2020). PLR is a software library to embed in personal and corporate apps, as shown in Figure 13.4.

Fig. 13.4
An illustration presents the P L R cloud is connected to the app P L Rs. It includes the ontology for normalizing and coordinating data, decentralization and serving billions of users free of charge, and most human to human collaborations.

Personal Life Repository (PLR). Source: Design by author

PLR allows its users (individuals and organizations) to share their data (possibly containing personal information, business secrets, etc.) with each other through the PLR cloud. The PLR cloud is a collection of online storages such as Google Drive and OneDrive. DMPD is implemented through end-to-end encryption, by which each data subject (individual or organization) has full control over which parts of the data to disclose to whom.
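PLR's actual protocol is not reproduced here, but the general pattern it relies on can be sketched as hybrid end-to-end encryption: the data subject encrypts each record with a fresh symmetric key and wraps that key separately for each authorized recipient, so the cloud storage only ever holds ciphertext. The record content and recipient names below are illustrative assumptions.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

def share_record(record: bytes, recipient_public_keys: dict):
    """Hybrid-encryption sketch: the record is encrypted once with a fresh
    symmetric key, and that key is wrapped separately for each recipient."""
    data_key = Fernet.generate_key()
    ciphertext = Fernet(data_key).encrypt(record)
    wrapped_keys = {
        name: pub.encrypt(
            data_key,
            padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                         algorithm=hashes.SHA256(), label=None),
        )
        for name, pub in recipient_public_keys.items()
    }
    # Only ciphertext and wrapped_keys would be uploaded to the PLR cloud;
    # the plaintext record and the data key stay on the subject's device.
    return ciphertext, wrapped_keys

# Hypothetical recipient: a clinic the data subject chooses to disclose to.
clinic_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ct, keys = share_record(b'{"category": "health", "item": "blood test"}',
                        {"clinic": clinic_key.public_key()})
```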

PLR apps (apps embedding PLR) can provide stable services to billions of users at no more than the app maintenance costs. The app providers need not pay for the PLR cloud, because PLR users manage their own regions of it. The users’ costs are also low if they use nearly free public cloud storages such as Google Drive—which, in most cases, they do.

By supporting data sharing among users, PLR supports almost any kind of human-to-human collaboration, including those supported by enterprise systems and SNSs. Public clouds are used as the PLR cloud by default; they usually permit rather few API calls per unit time, but this is enough to support collaboration among people, because each person responds to others much less often than once a second on average.

It is often quite easy to develop a PLR app by preparing ontologies and stylesheets. PLR uses ontologies to normalize and coordinate data. The user interfaces for entering and browsing data validated by the ontologies are automatically generated from stylesheets rather than hard-coded.
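As a toy illustration of this idea (the ontology and field formats below are invented and much simpler than PLR's real ontologies and stylesheets), a form description can be derived mechanically from a class definition instead of being hard-coded.

```python
# Hypothetical, simplified ontology class; not PLR's actual format.
activity_class = {
    "name": "ExtracurricularActivity",
    "properties": [
        {"name": "title", "type": "text", "label": "Activity title"},
        {"name": "date", "type": "date", "label": "Date"},
        {"name": "role", "type": "text", "label": "Your role"},
    ],
}

def generate_form(ontology_class):
    """Derive input-field descriptions from an ontology class; a stylesheet
    could then render these descriptions as an actual user interface."""
    return [{"field": p["name"], "widget": p["type"], "label": p["label"]}
            for p in ontology_class["properties"]]

print(generate_form(activity_class))
```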

PLR has been employed in a real service as part of school education. Figure 13.5 shows how PLR is used to manage and utilize learners' extracurricular data. More precisely, students at Saitama prefectural high schools enter and accumulate data about their extracurricular activities with a PLR app and disclose the data to the school affairs support system operated by Saitama Prefecture, and their teachers use the data to compose school recommendations for universities and employers.

Fig. 13.5
A flow diagram. The student enters and collects the extracurricular activity data using a P L R app, and discloses the data to the school affairs support system, then, the teacher uses the data to compose their school recommendations.

Management and utilization of extracurricular activity data. Source: Design by author

The author's research group is currently conducting or preparing several demonstration experiments that use PLR. One such experiment concerns infant medical checkups in Arao City, Kumamoto Prefecture, Japan. The city office will let parents use a personal app embedding PLR to compose documents (such as interview sheets) about their children and share those documents with the city office. As the parents own the document data, they can then use it for purposes outside the scope of infant medical checkups. For instance, they may use such data to compose other documents to submit to the city office, or to access services provided by private businesses, including clinics and hospitals.

Personal AI

DMPD does not mean that each individual must do anything special. Instead, a personal AI (PAI) is exclusively dedicated to each individual user, manages and utilizes all of his or her PD, and thereby intervenes in his or her actions more deeply and carefully than other technologies, including CAIs, can: it provides the user with the best personal services, such as selecting the best-suited products, personalizing individual services, and assisting behavior changes for better performance in study and business, as shown in Figure 13.6.

Fig. 13.6
An illustration presents the personal A I. It includes the following services. Childcare, learning, purchase, work, asset management, healthcare, nursing care, insurance, estate administration, and inheritance.

Personal AI (PAI). Source: Design by author

As discussed earlier, however, some strict governance of PAIs must be secured. Otherwise, one’s PAI may fully exploit one’s PD and inflict severe damage, either for the benefit of its provider or due to some bugs. If PAIs are to replace CAIs, they should be properly governed so as to benefit all stakeholders, including individual users, providers, and societies.

Purchase Support

The most profitable application of PAI is purchase support. As shown in Figure 13.7, for instance, suppose you visit a tailor, get measured, and store the measurement data in PLR. The catalog maker (which we will later call a 'knowledge mediator') collects information (measurements, colors, materials, etc.) about ready-to-wear (RTW) clothes from apparel makers and compiles an RTW catalog. Your PAI downloads the catalog and recommends some clothes to you by matching your PD against the RTWs in the catalog, without disclosing the PD to others. If any recommended RTWs appeal to you, you purchase them. The payment goes to the catalog maker, who transfers it to the apparel maker minus a commission. Parts of this commission will be given to the tailor, the PAI provider, and perhaps some others, because they contributed to the catalog maker's commission income.

Fig. 13.7
An infographic. P A I of the user gets the measurement data from tailor, downloads catalog, and sends payment to knowledge mediator that shares commission with tailor after the purchase. The apparel maker provides product information to knowledge mediator and receives payment minus commission.

Purchase support. Source: Design by author
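A minimal sketch of the local matching step follows; the catalog fields, sizes, and tolerance are illustrative assumptions, and a real PAI would match far richer attributes. The point is that the catalog is public data, while the measurements stay on the user's device.

```python
def recommend(catalog, measurements, tolerance_cm=2.0):
    """Match the user's measurements against catalog entries locally;
    the measurements (PD) are never sent to the catalog maker."""
    def fits(item):
        return all(abs(item["sizes"][k] - measurements[k]) <= tolerance_cm
                   for k in measurements)
    return [item for item in catalog if fits(item)]

catalog = [  # downloaded from the knowledge mediator; contains no PD
    {"id": "rtw-001", "maker": "ApparelCo", "sizes": {"chest": 96, "waist": 82}},
    {"id": "rtw-002", "maker": "ApparelCo", "sizes": {"chest": 104, "waist": 90}},
]
measurements = {"chest": 97, "waist": 83}  # stored in PLR, kept local
print(recommend(catalog, measurements))    # only rtw-001 fits
```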

The commission from this purchase support is huge, because purchase support may apply to all the services that directly involve you, either as a service recipient or as a service provider. You are a service recipient not only in your private life but also in your work, and you are a service provider in your work. The total cash flow involved is more than 110% of GDP on average, because household consumption usually accounts for more than 60% of GDP and the labor share is typically a little more than 50%. In addition, economists estimate the value of non-paid services, such as housekeeping and childcare, at around 30% of GDP, making the entire value of the services directly involving individuals more than 140% of GDP. Hence, the total commission is probably about 15% of GDP.
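The arithmetic behind this estimate can be spelled out as follows. The roughly 10 percent commission rate in the last step is an illustrative assumption; the text itself leaves the rate implicit.

```latex
\begin{aligned}
\underbrace{0.6}_{\text{household consumption}} + \underbrace{0.5}_{\text{labor share}}
  &\approx 1.1 \quad\text{(share of GDP)}\\
1.1 + \underbrace{0.3}_{\text{non-paid services}} &\approx 1.4\\
1.4 \times \underbrace{0.1}_{\text{assumed commission rate}} &= 0.14 \approx 15\%\ \text{of GDP}
\end{aligned}
```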

Life Guidance

Suppose you bought honey from Alibaba and diapers from Amazon, as shown in Figure 13.8. Using your purchase data, your PAI would be able to advise you not to give honey to your baby, because honey may cause infant botulism, a potentially deadly illness affecting babies younger than one year. This is a merit of aggregating PD to the data subject (more precisely, to his or her PAI). Amazon provides an "Amazon Anshin Mail" service in Japan ("anshin" means peace of mind), which would send you this advice via e-mail if you happened to buy both honey and diapers from Amazon, but this fails to work if you bought them from different retailers, which is probably more often the case.

Fig. 13.8
An infographic. P A I with the knowledge mediator provides living guidance to the user based on the purchase data from shopping sites such as Alibaba and Amazon.

Living guidance. Source: Design by author
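A toy version of the cross-retailer rule this aggregation enables is sketched below; the rule, data format, and threshold are illustrative only.

```python
def honey_warning(purchases, baby_age_months):
    """Warn if honey was bought anywhere while there is a baby under
    12 months; works across retailers because all purchase data sit
    together in the user's PLR."""
    bought_honey = any(p["item"] == "honey" for p in purchases)
    has_infant = baby_age_months is not None and baby_age_months < 12
    if bought_honey and has_infant:
        return "Do not give honey to your baby: risk of infant botulism."
    return None

purchases = [
    {"retailer": "Alibaba", "item": "honey"},
    {"retailer": "Amazon", "item": "diapers"},
]
print(honey_warning(purchases, baby_age_months=5))
```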

General Behavior Support

Your PAI may be able to urge you to do something useful even when you are reluctant. For instance, the PAI could persuade you to go to a physical checkup by making a reservation at a clinic, as shown in Figure 13.9. It may also support other behavior changes, such as improving health literacy, daily habits, and so forth.

Fig. 13.9
An infographic presents the interaction between P A I and the user. P A I encourages the user to attend a physical check-up.

Encouragement for attending a physical check-up. Source: Design by author

PAI’s Added Value

How large is PAI's added value in comparison with that of CAI? Figure 13.10 shows how service providers may employ PAIs instead of CAIs as their digital customer contact points. Suppose service providers P1 … Pn have used their CAIs as their digital customer contact points, and the knowledge in these CAIs is K1 … Kn, respectively. If the service providers use each customer's PAI instead of the CAIs as their digital customer contact point, then the functionality of this PAI will subsume K1 … Kn, and the PAI will be able to access and aggregate all the types of PD (D1 … Dn) which P1 … Pn can access, respectively.

Fig. 13.10
An infographic. It presents that if the service providers, P 1 to P n, use P A I as their digital customer contact points, it incorporates F 1 to F n, and aggregates and utilizes all types of P D, D 1 to D n, which is accessible by P 1 to P n, respectively.

PAI as one-stop digital customer contact point. Source: Design by author

The PAI would thus generate much larger value than the CAIs, because it potentially provides as many as (n + 1)² types of services, compared with only n types of services by P1 … Pn, as shown in Figure 13.11. For instance, the PAI could recommend products using Amazon's recommendation engine and Alibaba's purchase data.

Fig. 13.11
A tabular illustration has columns for knowledge of P A I, titled K 1 to K n, and other knowledge. It includes healthcare using dietary data, healthcare using work data, and study guidance using sleep data.

PAIs create much larger value than CAIs. Source: Design by author
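One way to read this count, assuming (as Figure 13.11 suggests) that the PAI's additional "other knowledge" and the user's self-generated data are included as an extra column and row, written here as K0 and D0:

```latex
\underbrace{(n+1)}_{\text{knowledge } K_0,\dots,K_n} \times \underbrace{(n+1)}_{\text{data } D_0,\dots,D_n} = (n+1)^2
\ \text{pairings for the PAI, versus the } n \text{ diagonal pairings } (K_i, D_i) \text{ for the CAIs.}
```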

Knowledge Mediator

Some system, which we called a catalog maker above and will call a knowledge mediator hereafter, is necessary to aggregate various sorts of knowledge and provide the aggregated knowledge to the PAIs of many individual users, as shown in Figure 13.12. This is far less redundant and far more efficient than many people's PAIs aggregating knowledge independently of each other. Note that the knowledge mediator enjoys economies of scale, in the sense that the cost of serving each PAI user is approximately the cost of the knowledge aggregation divided by the number of users. So, of course, does the PAI provider, because the cost of serving each user is approximately the cost of PAI development divided by the number of users. The knowledge mediator and the PAI provider (who may or may not be identical) together constitute a platform intermediating between PAI users and providers of goods and services.

Fig. 13.12
An infographic. The knowledge mediator aggregates different types of knowledge, K 1 to K n, from providers, P 1 to P n, respectively, and provides the aggregated knowledge to the P A I of individual users.

Knowledge mediator aggregates knowledge for PAI. Source: Design by author
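The economy of scale mentioned above can be written as a simple formula; the symbols N, C_agg, and C_dev are introduced here for illustration.

```latex
\text{cost per user (mediator)} \approx \frac{C_{\text{agg}}}{N},
\qquad
\text{cost per user (PAI provider)} \approx \frac{C_{\text{dev}}}{N},
```

where N is the number of PAI users, C_agg the cost of knowledge aggregation, and C_dev the cost of PAI development; both per-user costs shrink toward zero as N grows.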

Neither the knowledge mediator nor the PAI provider needs centralized PD management, because they need not access any PAI user's PD in order to serve him or her. As part of knowledge aggregation and PAI development, they may have to collect and analyze some (not all) PAI users' PD to acquire general knowledge for personalization (knowledge about which types of goods and services fit which types of users, among others). Yet this does not qualify as centralized PD management, because this general knowledge identifies no particular user.

Although the knowledge mediator and the PAI provider do not directly intervene in any individual user's behavior, they must somehow be governed so as to maximize the merits of PAI while controlling its risks to the user and to society. A decentralized governance scheme for PAI to this end is discussed later.

Displacement of CAIs

As discussed before, global collaboration to reduce CAIs is impossible, because CAIs—unlike nuclear wars—create winners. As shown in Figure 13.13, however, it is probably possible to let service providers (both public and private) voluntarily shift from CAIs to PAIs, because PAIs offer more advantages. If PAIs spread to some extent, then so does DMPD, because the former is based on the latter. DMPD in turn enables decentralized governance of PAI, as not only government agencies but also research institutes, universities, private companies, NPOs, etc. could easily collect personal data and check PAIs' behaviors for the sake of value cocreation balanced among people, businesses, and societies. This would improve PAI's social acceptability. As this cycle, illustrated in Figure 13.13, turns, more service providers employ PAIs instead of CAIs.

Fig. 13.13
A diagram. Global collaboration to restrict C A Is is impossible, and P A I creates much greater value than C A I leading to more servicers using P A I, spread of decentralized P D management, decentralized governance of P A I, and value cocreation among people, businesses, and societies.

PAIs displacing CAIs. Source: Design by author

Human-AI Interaction

PAI may be implemented quite soon, possibly based on LLMs (large language models) such as GPT. A mediator's knowledge aggregation could take the form of training some LLM, and each individual user's PAI could download, or remotely access, that model and use it in services to the user.
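As a rough sketch of this division of labor (all names are hypothetical, and the generate function merely stands in for whatever LLM interface is actually used): the mediator trains or fine-tunes a model on aggregated, non-personal knowledge, and the PAI combines that model with locally stored PD at query time.

```python
def generate(model, prompt: str) -> str:
    """Placeholder for an LLM call (local inference or a remote API);
    the concrete interface depends on the model actually used."""
    return f"[{model}] response to: {prompt[:40]}..."

def pai_answer(model, user_pd: dict, question: str) -> str:
    """The PAI composes its prompt from PD stored locally (e.g., in PLR)
    plus the mediator-trained model. With a locally downloaded model the
    PD never leaves the user's device; with remote access, the inference
    host itself must be governed."""
    context = "\n".join(f"{k}: {v}" for k, v in user_pd.items())
    prompt = f"User context (private):\n{context}\n\nQuestion: {question}"
    return generate(model, prompt)

user_pd = {"sleep_hours": 5.5, "last_checkup": "2021-09-01"}  # hypothetical
print(pai_answer("mediator-llm-v1", user_pd, "Should I book a physical checkup?"))
```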

The interaction between the human user and such an AI (not only PAI) typically communicates natural-language plain-text data, but this interaction will be more efficient if more semantically explicit data are used instead, where 'semantically explicit' means that the mapping between the data and their meaning is straightforward. For instance, Microsoft Bing can present search results in the form of tables and charts, which are easier for users to comprehend than plain text. Likewise, LLMs generate program code better than natural-language text, because programming languages are formal languages, which encode semantics more explicitly than natural languages do.

Human-AI interaction should be optimized by communicating data that are the most semantically explicit for both people and AI. The author considers graph documents (Hasida, 2016, 2017) to be such data. Graph documents are documents in the form of diagrams or graphs with explicit semantic structures. Figure 13.14 shows a graph document explaining why graph documents should replace traditional text documents.

Fig. 13.14
A diagram represents a graph document. It explains the semantic structures and useful features of the graph documents compared to the text documents.

A graph document. Source: Design by author

Graph documents are labelled directed graphs validated by ontologies. Nodes in these graphs are instances of classes defined in the ontologies and contain basic content such as text, images, and video, normally corresponding to simple sentences or noun phrases. Links are triplets that are instances of properties in the ontologies and encode semantic relationships between their end nodes. These relations are typically discourse relations, as in Figure 13.14.
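A minimal sketch of this data model follows; the node classes and link properties are invented for illustration and are far smaller than any real graph-document ontology.

```python
from dataclasses import dataclass, field

# Tiny illustrative ontology: node classes and link properties (discourse relations).
NODE_CLASSES = {"Claim", "Evidence", "Example"}
LINK_PROPERTIES = {"supports", "elaborates", "contrasts"}

@dataclass
class Node:
    node_id: str
    node_class: str   # must be a class defined in the ontology
    content: str      # basic content: a simple sentence or noun phrase

@dataclass
class GraphDocument:
    nodes: dict = field(default_factory=dict)
    links: list = field(default_factory=list)  # triplets (source, property, target)

    def add_node(self, node: Node):
        assert node.node_class in NODE_CLASSES, "node violates the ontology"
        self.nodes[node.node_id] = node

    def add_link(self, source: str, prop: str, target: str):
        assert prop in LINK_PROPERTIES, "link violates the ontology"
        assert source in self.nodes and target in self.nodes
        self.links.append((source, prop, target))

doc = GraphDocument()
doc.add_node(Node("n1", "Claim", "Graph documents are easier to comprehend."))
doc.add_node(Node("n2", "Evidence", "Their semantic structure is explicit."))
doc.add_link("n2", "supports", "n1")
```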

We consider that people and AI (possibly PAI) should interact by collaboratively composing graph documents, as in Figure 13.15, because graph documents are probably the most semantically explicit data for both people and AI. In fact, graph documents are easier than text documents for people to compose: Zhang (2020), a master's thesis at the author's lab, demonstrated that collaborative composition of graph documents is more productive than collaborative composition of text documents. Graph documents, like program code, are presumably also more tractable for AI than text documents.

Fig. 13.15
An infographic. It presents that a graph document that is semantically explicit helps the human with better critical-thinking skills and A I with better inference and learning.

Human-AI interaction via graph documents. Source: Design by author

The graph documents in Figure 13.15 are stored in PLR. This is both to safeguard the documents and to utilize them to develop and govern (improve) AIs, as discussed later.

The composition of graphs (graph documents, argument maps, concept maps, mind maps, etc.) improves critical-thinking (CT) skills (Twardy, 2004; Álvarez Ortiz, 2007; Barta et al., 2022). As argument mapping improves CT better than concept mapping and mind mapping do, graph documents are in this respect probably more effective than the latter two, since, unlike them, argument maps and graph documents are both typed by ontologies (and are hence semantically explicit). As Figure 13.15 suggests, however, argument maps cannot be used for general human-AI interaction, because the ontology behind argument maps is too small to cover general document content.

Graph documents are thus probably the best sort of data to mediate human-AI interaction. As a matter of course, however, various other sorts of data (tables, charts, etc.) may be incorporated or integrated in graph documents in order to improve semantic explicitness.

The author expects graph documents not only to enhance society-wide productivity, but also to protect and strengthen democracy in at least two other respects: First, their semantic explicitness and the CT improvement of the general public would curb misinformation and reduce biases. Second, graph documents could mitigate wealth disparity, as the CT gain tends to be larger for people with low CT skills.

Decentralized Governance

Figure 13.16 depicts, among other aspects, the decentralized governance of PAI and other personal services. Not only government agencies but also other organizations can monitor and audit the behaviors of personal services by analyzing PD collected from the individual service users via mediators, in order to maximize those services' added value while balancing the value distribution among individuals, businesses, and global/local societies. It is vital that multiple auditors check services in parallel and that they monitor one another by checking each other's analysis results, thus establishing and maintaining social trust in them. The result is a PD-oriented decentralized system for governing personal services.

Fig. 13.16
An infographic presents the flow in 3 layers: service, service governance, and meta-governance. Data generators or service providers lead to P A Is of individuals via aggregation of P D, followed by mediators via anonymization, and service designers or auditors via data catalog + analysis results.

Open citizen science. Source: Design by author
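A toy sketch of the parallel-auditing idea follows; the statistic, sample format, and tolerance are arbitrary illustrations. Several auditors independently compute the same aggregate over anonymized samples obtained via mediators and flag disagreement between their results.

```python
from statistics import mean

def audit(anonymized_samples):
    """Each auditor computes an aggregate statistic over its own anonymized
    sample of user-service interaction data."""
    return mean(s["recommendation_accepted"] for s in anonymized_samples)

def cross_check(results, tolerance=0.05):
    """Auditors monitor one another: a large spread between independently
    computed results signals something to investigate."""
    return max(results) - min(results) <= tolerance

# Hypothetical samples held by three independent auditors.
samples_a = [{"recommendation_accepted": 1}, {"recommendation_accepted": 0}]
samples_b = [{"recommendation_accepted": 1}, {"recommendation_accepted": 1}]
samples_c = [{"recommendation_accepted": 0}, {"recommendation_accepted": 1}]
results = [audit(s) for s in (samples_a, samples_b, samples_c)]
print(cross_check(results))  # False here (spread 0.5 > 0.05), so investigate
```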

The service auditors (and also the designers) in Figure 13.16 require not only the PD generated by individual users but also the PD generated by services, in order to analyze the interaction between them. A regulation is therefore necessary to guarantee a form of data portability that encompasses the PD generated by the services, which is stronger than the data portability in the GDPR.

At any rate, DMPD enables a decentralized system for statistical analysis of many people’s PD. This system, open citizen science, is useful not only for development and governance (improvement) of services including PAIs, but also for many other purposes encompassing policy making, public health, machine learning, medical science, political science, psychology, sociology, and so forth. In this connection, note that some mediators in Figure 13.16 are both data mediators (providing service designers/auditors with data-analysis results) and knowledge mediators (providing PAIs with knowledge, which is some sort of data-analysis result). 

Conclusion

PLR supports the decentralized management of PD (DMPD) of up to billions of individuals at extremely low cost, together with high security and privacy. Accordingly, it will help both PAIs and graph documents spread worldwide. DMPD also allows individuals to provide their aggregated PD for the sake of decentralized governance of PAIs and other personal services. PAIs will displace CAIs, because this governance will allow them to benefit all stakeholders far more. Graph documents, for their part, facilitate verification and enhance the diversity of the information users can access, securing freedom of thought, conscience, speech, and choice on scientific grounds. In summary, DMPD supports freedom, democracy, and well-balanced value co-creation, as depicted in Figure 13.17.

Fig. 13.17
A diagram. Decentralized P D management leads to open citizen science which then leads to P A I and graph docs. The graph docs are interconnected with P A I and human intelligence which then together leads to containment of C A I and A E. P A I is connected with decentralized P D management.

Decentralized management of personal data (DMPD) supports freedom, democracy, and value co-creation. Source: Design by author

A few issues remain to be addressed in order to implement this agenda. First, service providers should understand that PAIs are more profitable than CAIs. If they do, then PAIs and DMPD will jointly spread, establishing decentralized governance of PAIs, improving their social acceptability, and mostly displacing CAIs. Second, graph documents should also spread together with DMPD, as AI providers can more easily understand their commercial merit than the merit of DMPD. Lastly, some security technologies—such as digital signatures—are necessary to jointly secure the authenticity of information.