Introduction

We are currently living in a time of digital transformation, often referred to as the “digital turn” or the rise of artificial intelligence in the context of Society 4.0. Originally, Society 4.0 was associated with changes in the industrial and production sectors, with the potential to reshape the entire social sphere, much like previous technological revolutions. However, it is becoming evident that this technological evolution is affecting all levels of society, not just industry. Modern technology has expanded beyond research, development, and manufacturing, permeating public and private life to the extent that it appears to be creating a society centered on the interconnection of technology, people, and Big Data.

The current nature of society is inextricably linked with information and communication technologies. We could therefore speak of living in a time that is largely organized by means of the analysis and processing of Big Data. Digital technologies are anchored in a broad social, political and ideological context—a context which to a substantial extent defines the age we live in (see e.g. Allmer, 2017).

This integration of technology, AI, people, and data presents new ethical and political challenges and dilemmas. On the one hand, technologies and AI are radically transforming our environment; on the other hand, often without our realizing it, they are also reshaping us and determining our lives. This “digital turn” is currently challenging established dichotomies of modern society, such as subject/object, public/private, consumption/production, mind/body, work/leisure, culture/nature, and more (Chandler and Fuchs, 2019, p. 2). We can now speak of a digital civil society that requires new elaboration and reflection.

Initial excitement about scientific discoveries and innovations is often tempered by concerns about unintended consequences. Obstacles may include regulatory constraints and economic considerations in moving new technologies and AI from the laboratory to practical use. It is widely accepted that a gap exists between technological potential and implementation, owing to economic, legal, and organizational factors. However, the introduction of technological innovations is typically driven by their presumed benefits for individuals, social groups, or society as a whole. Any potential negative consequences are usually deemed acceptable as long as they do not directly violate established legal or social norms and can be offset by positive effects in the relevant domain. As technology and AI advance rapidly and play a greater role in society, their impacts on individuals’ lives and social subsystems must be carefully considered (Matochova et al., 2019, p. 229; Kowalikova et al., 2020, pp. 631–636).

One of the primary consequences of this transformation is a departure from the traditional material production and services of late capitalism and a shift of focus towards data production. This shift has been extensively analyzed in the context of “digital capitalism” (Fuchs and Mosco, 2016). The change in the economic landscape places significant emphasis on data generated by users, moving the economic sphere from the physical to the virtual realm and affecting individuals’ orientation within the technological sublime.

The virtual world has become the stage for the “datafication” of the universe; we can speak of a datafication of knowledge in general. It subsequently becomes the platform for the commodification of these data (Mayer-Schönberger and Cukier, 2014). Data undergo analysis, often utilizing algorithms, artificial intelligence, neural networks, and deep learning, with the objective of introducing new services and business models.

The commodification of data is the process by which data become a commercial commodity, a process increasingly prevalent in the digital age. It begins with the collection of vast amounts of data from various sources, such as social media, online searches, mobile apps, and sensors in smart devices. These data are then analyzed and processed using sophisticated algorithms to extract useful information, such as user preferences, behaviors, and trends. Subsequently, the data are sold or exchanged among businesses for purposes such as targeted advertising, product development, customer service improvement, or market trend forecasting. The commodification of data also raises privacy and ethical concerns, as individuals’ personal information becomes a tradeable commodity without their explicit consent. Therefore, while data commodification brings business benefits and innovations, it also requires careful regulation and ethical management to protect individuals’ rights.
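
To make this pipeline concrete, the following minimal Python sketch shows how raw behavioral events might be condensed into the kind of ranked interest profile that is later traded or used for targeting. It is our own illustration: the event data and the `build_profile` helper are invented for this example and do not describe any real platform's code.

```python
from collections import Counter

# Invented behavioral events standing in for the "various sources"
# named above (searches, app usage, social activity).
events = [
    {"user": "u42", "source": "search", "topic": "running shoes"},
    {"user": "u42", "source": "app",    "topic": "fitness"},
    {"user": "u42", "source": "search", "topic": "running shoes"},
    {"user": "u42", "source": "social", "topic": "travel"},
]

def build_profile(events):
    """Condense raw events into a ranked interest profile, i.e. the
    'useful information' that is subsequently traded or used for ads."""
    interests = Counter(e["topic"] for e in events)
    return {"user": events[0]["user"],
            "top_interests": interests.most_common(3)}

print(build_profile(events))
# {'user': 'u42', 'top_interests': [('running shoes', 2), ('fitness', 1), ('travel', 1)]}
```

Once behavior is condensed into such a profile, it becomes a transferable asset detached from the person who generated it, which is precisely the point at which the questions of consent and regulation raised above arise.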

From the standpoint of political economy and Critical Theory, this represents a novel phenomenon. In this new digital economic landscape, the central focus is on data and their generation (Bridle, 2018; Ross, 2019). This marks a distinct strategy for capital accumulation through the public sharing of data. However, a dilemma arises: when data are generated by users themselves, it becomes difficult to determine who the actual producer is and who holds ownership rights over the data. These data are often claimed as private property by large corporations, transforming them into information commodities rooted in knowledge, ideas, communication, and their broader cultural context (Keen, 2019). This raises a fundamental question about the nature of work and the labor process, calling for a redefinition of the concept of work in general.

Hence, it is imperative to employ a critical approach to unveil the concealed mechanisms behind the processes of digital data commodification. This will enable the formulation of normative principles that establish a legal framework to govern these emerging phenomena (Rakowski and Kowalikova, 2020, p. 32). Reactions to the dynamics of ongoing change range from efforts to stabilize the environment through new control mechanisms and increased monitoring, to the adoption of change and the restructuring of familiar interpretive frameworks. Some individuals may also experience feelings of helplessness and alienation in the face of these changes (Veitas and Weinbaum, 2017, pp. 1–2).

The constant emphasis on risks in the public domain, along with efforts to significantly mitigate them, disrupts our sense of ontological security. In advanced societies today, individuals are more likely to face health risks such as overeating rather than famine, suicide rather than physical attacks, and old age rather than infectious diseases (Harari, 2018, p. 397).

The ongoing discussion regarding the future societal transformation brought about by digital advancements takes into account not just the significant and positive impacts of these technological developments but also the potential adverse outcomes. Novel materials created through these technologies may pose risks to political and social systems. According to certain authors, they might even give rise to global existential threats to human civilization.

There is a phenomenon known as technochauvinism (Broussard, 2018): the belief that technology always represents the optimal solution to any problem and is inherently superior to traditional or non-technological methods. This perspective can result in the neglect of non-technological alternatives or the dismissal of valid criticism of technological progress. One of the most significant ways in which technology can contribute to social inequality is through uneven access to technology itself. Even when technology is accessible, individuals lacking the essential skills and training to use it effectively may find themselves at a disadvantage. This can lead to the emergence of a digital divide, in which disparities in access to technology further widen existing social inequalities.

Furthermore, technology can perpetuate biases and discrimination prevalent in society, and it may also pose threats to individual privacy and civil liberties, especially for marginalized groups who might be subjected to increased scrutiny and surveillance. These issues have the potential to reinforce and solidify preexisting social inequalities.

Hence, it is crucial to examine how the unintended consequences, often referred to as externalities, of technological advancements affect society’s well-being. It is imperative to identify the secondary effects of these changes, both on a social and political level, and to consider how contemporary social institutions can adapt and evolve to address these challenges (Bowles, 2021, p. 32). Scientific and technological solutions may give rise to conflicts among diverse societal interests and objectives, all of which play a role in shaping the development and implementation of innovations. These conflicts can manifest as social disputes stemming from various interpretations of the perceived threats to society. An analytical perspective from the realms of social and political philosophy and sociology can offer a valuable contribution in this context.

Big Data has seamlessly woven itself into the fabric of our lives, primarily through its capacity for real-time personalization across a myriad of services. It wields substantial influence over our choices, spanning from entertainment preferences such as the movies we watch and the music we listen to, to decisions concerning travel destinations, accommodations, social interactions, and even financial choices. Nevertheless, this pervasive technological integration has raised legitimate concerns about privacy, discrimination, and the presence of biases in these processes, as discussed by Bridle (pp. 142–143).

Some theorists argue that these developments embody a sort of technological determinism, emphasizing the idea that technology operates with a degree of autonomy. However, a more optimistic viewpoint suggests that responsible technology usage, ethical considerations, and education can empower individuals to effectively navigate this complex technological landscape (Greenfield, 2017).

In this context, it is important to recognize the role of algorithms and new technologies in shaping our daily reality. Often, we use these technologies without understanding how they work or the algorithms behind them. As a result, our social reality becomes simplified, leading to a world of computational dominance. This raises questions about responsibility, ethics, awareness, and education in managing the impact of technology on society (Bridle, 2019).

In conclusion, the rapid advancement of technology, especially AI and big data, presents both opportunities and challenges for society. How we navigate these changes will depend on our ability to strike a balance between harnessing the potential benefits and addressing ethical, regulatory, and educational considerations. The impact of technology on our lives is profound, and it is essential to approach it with a nuanced understanding of its implications.

The goal of the text is to provide a comprehensive overview of the social impacts of the use of artificial intelligence (AI) and to lay the foundation for further discussions and research in this critical area. The text aims to address the compatibility of technological advances in AI with democratic values and social justice. It emphasizes the need for an interdisciplinary approach to studying these social impacts and advocates for collaboration among technical experts, ethicists, lawyers, and social scientists. In addition, the text underscores the importance of establishing appropriate regulations and ethical guidelines for AI use to create a society that benefits from technological progress while ensuring justice and protecting individual rights.

Methods

In formulating the article on the social impacts of artificial intelligence, the research methodology incorporated several key scientific methods, including a comprehensive literature review and ethical and policy analysis. Firstly, the article extensively employed a literature review to establish a foundational understanding of the existing research landscape on the subject. By synthesizing findings from a wide range of academic sources, the authors ensured that their work was informed by the latest developments and perspectives in the field. In addition, the article integrated a thorough policy and legal analysis to assess the regulatory frameworks surrounding AI use. This involved scrutinizing existing policies and regulations, identifying potential gaps, and proposing recommendations for enhancing legal frameworks. The authors critically examined the ethical and legal implications of AI, contributing to the formulation of guidelines and regulations that align with democratic values and social justice. Together, these methods ensured a robust and multidimensional exploration of the social impacts of artificial intelligence, fostering a comprehensive understanding of the subject matter.

The critical methodology of this article is based on the Critical Theory of Technology and the philosophy of information, emphasizing digital transformation and the application of artificial intelligence. It focuses on interdisciplinary analysis, including sociology, anthropology, political science, and economics, to explore the social influences and structures affected by technological innovation. The approach combines philosophical and sociological theories to reveal the hidden mechanisms of the datafication and commodification of digital data, while considering the ethical and political aspects of technological development. The analytical framework includes critical reflection on current digital and technological phenomena, examination of user behavior, and assessment of the social norms and values associated with technology.

Social risks of artificial intelligence

Examining the relationship between society and technology is a complex, interdisciplinary task that demands different perspectives and methodologies, including elements of sociology, anthropology, political science, economics, and other disciplines. Such transdisciplinary research includes the perspective of social influences, structures, and interactions, the analysis of the social consequences of technological innovation, the study of user behavior, and the examination of societal norms and values associated with technology. The study also explores the interaction of technology, culture, tradition, and social identity, including the economic consequences of technological innovation (the impact of technology on GDP growth, labor productivity, job creation, and competitiveness; analysis of investment in research and development, technology transfer, and technology trade).

On the one hand, innovation and automation of production and services increase efficiency and productivity, which positively impacts GDP growth and job creation. On the other hand, the same process leads to changes in the employment structure, resulting in unintended negative consequences (Gruetzemacher and Whittlestone, 2022). The political perspective involves examining the interaction of technology and the political system, decision-making processes, cyber security, internet regulation, the influence of technology giants on politics, and privacy and civil rights issues. With new technologies, individuals’ personal data are collected, processed, and used for profit, thereby threatening individual privacy and personal freedom. The possible distortion of public opinion and influence on elections increase the risk of political manipulation and the weakening of democratic processes (Zuboff and Schwandt, 2019).

The social risks associated with the use of artificial intelligence relate primarily to ethics, privacy, and social inequalities. AI algorithms can mirror and reinforce existing social prejudices and discrimination: if training data contain biases, AI algorithms can internalize and reproduce them, with manifestations in areas including employment, crime, and finance.

The use of AI involves the collection and analysis of vast amounts of data about individuals, potentially compromising their privacy and security. A lack of transparency in how AI algorithms operate can lead to mistrust and a sense of lost control. At this level, the unintended consequences of AI usage with significant security implications include the manipulation of public opinion, cyber attacks, and the development of autonomous weapons. Managing these social risks is crucial for the sustainable and ethical use of artificial intelligence, necessitating the creation of ethical guidelines and a transparent and responsible approach to AI.

In the domain of the social impacts of technology, it is essential to recognize that discrimination perpetuated by AI algorithms can exacerbate existing inequalities (e.g., Noble, 2018). It must be expected that certain social groups will be negatively affected, particularly in areas such as employment, housing, or crime. Digital inequality plays a significant role in this process. It encompasses disparities in access to, use of, or the ability to use modern information and communication technologies, affecting individuals, communities, or entire regions and countries. It is reflected in social and economic inequality, encompassing the physical unavailability of technology, lack of access to relevant and quality content or services, and limited digital competence in technology and internet use, safety rules, and the ability to search for and verify information.
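
A deliberately simplified sketch of the bias-internalization mechanism described above may help. The data, group labels, and decision threshold below are invented for illustration: a model fitted to biased historical hiring decisions reproduces the disparity when scoring new, equally qualified candidates.

```python
# Invented example: the "model" estimates hire rates per group from
# biased historical decisions and then reuses those rates as scores.
history = [
    ("group_a", 1), ("group_a", 1), ("group_a", 1), ("group_a", 0),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def fit(history):
    """Learn P(hire | group) from the historical record."""
    counts = {}
    for group, hired in history:
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + hired)
    return {g: p / t for g, (t, p) in counts.items()}

model = fit(history)  # {'group_a': 0.75, 'group_b': 0.25}

def predict(group, threshold=0.5):
    # Equally qualified candidates receive different outcomes purely
    # because of the group statistics embedded in the training data.
    return model[group] >= threshold

print(predict("group_a"), predict("group_b"))  # True False
```

Real systems are vastly more complex, but the underlying dynamic is the same: no explicit rule discriminates, yet the statistical trace of past discrimination is carried forward into new decisions.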

Furthermore, technological systems, including algorithms and artificial intelligence, can influence decision-making in social programs and assistance for the poor and marginalized populations. Some of these systems can exacerbate poverty by misidentifying the needs and social conditions of individuals. The digitization of public services and social programs can lead to social exclusion and the marginalization of those lacking access to modern technologies or lacking sufficient digital skills. This reinforces the importance of ethical oversight and transparency in the development and implementation of technological systems affecting social policy (Eubanks, 2018).

Current ethical, political, and social issues

If we focus on the ethical and political issues that arise in the context of new technologies, artificial intelligence, data collection, and algorithm application for users navigating the digital world, we can define the following problems.

With the increasing amount of personal data being collected and analyzed, there is concern about its misuse. Questions of privacy protection, regulation, and transparency are key issues that nation states and international organizations must address; cooperation with the multinational corporations that collect personal data should be an integral part of this (Chandler and Fuchs, 2019). Artificial intelligence algorithms may be burdened with bias, leading to unfair decisions and discrimination, for example in employment decisions, credit scoring, and criminal justice. The values and ideologies of a technology’s designers are embedded in its algorithms, and avoiding these problems presupposes a reflection of social reality itself (Coeckelbergh, 2020). With the increasing use of AI in critical systems such as autonomous vehicles or medical devices, the importance of security also grows, and there is concern about the potential misuse of AI for cyber attacks. Expert teams should play a role in preventing these threats, but the challenge lies in their constant development and growth.

Developers and organizations creating AI systems must address issues of ethics and responsibility. This includes deciding how systems will behave and how they will be used. It is assumed that a certain ethical concept will be embedded in the algorithms. However, the challenge is that technology must compete in the market, so it tends to align with external market needs; a possible solution is the democratization of technologies (Coeckelbergh, 2020). Policymakers and legal professionals are trying to adopt regulations and standards for artificial intelligence to ensure its safe and ethical use, but this is a challenging task, as AI technology is rapidly evolving. The implementation of new technologies cannot do without a philosophical framework, and it faces the problem described by Moore’s Law: roughly every eighteen months the performance of computing circuits doubles, implying that technologies develop at an exponential rate. Technologies thus evolve faster than the legal frameworks guaranteeing our safety, and the development of normative frameworks in the form of specific laws is logically slower than the development of the technologies themselves; as a result, the potential of new technologies cannot be fully realized. The development of artificial intelligence and automation can also affect employment and the job market: some professions may be threatened, while others may emerge (Makridakis, 2017; Zarifhonarvar, 2023). Military use of AI brings complex ethical and security questions, with concerns about autonomous weapons and the possible misuse of AI in military conflicts (Ord, 2020). Finally, the question of accessibility, equality of access, and ownership of data is also important: it is necessary to ensure that the benefits of AI are available to the widest possible range of people (Allmer, 2017; Ashok et al., 2022).
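
The asymmetry invoked above can be stated compactly. Assuming, as the formulation quoted above has it, that performance doubles every eighteen months (1.5 years), and writing P_0 for performance at time zero:

```latex
P(t) = P_0 \cdot 2^{t/1.5}, \qquad
\frac{P(10)}{P_0} = 2^{10/1.5} \approx 102
```

Over a single decade, performance thus grows roughly a hundredfold, whereas major legal instruments are typically drafted and revised on cycles of several years. This order-of-magnitude mismatch is the gap between technological and normative development to which the argument points.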

Results

In general terms, we will draw on a methodological approach that was established relatively recently: the philosophy of technology (one of the first publications exploring this perspective was Simon, 1969; for an overview see e.g. Berg-Olsen et al., 2009). This approach assesses the significance of technology and AI, its ethical dimensions, and its other impacts on society; in many ways it draws on the earlier approach taken by the philosophy of science. Criticism of the various social divisions associated with modern technologies is the domain of a subdiscipline known as the ethics of technology, which became established as a fully fledged area of philosophy during the 20th century. One of the best-known forms of this subdiscipline (one which, besides the ethical dimension, also drew on the perspectives of social and political philosophy) can be found in the first-generation Critical Theory of Society; this approach will be at the core of our article, but in the current context of new communication and information technologies (Feenberg, 2009).

The analysis of the digital transformation and implementation of AI

One of the leading representatives of the Critical Theory of Technology, Herbert Marcuse, noted that one problem of technology is that the expanding industrial base, and the conditions imposed by the social order of technocracy, suppress human individuality in favor of standardized efficiency. A similar diagnosis accompanied the emergence of a new modern rationality, which developed alongside technologies during the era of industrialization and which forms the basis of mass production while also shaping other social relationships (Adorno and Horkheimer, 2007). The second generation of Critical Theory draws on these analyses, e.g. Habermas’s rejection of technological neutrality (Feenberg, 2014). This approach rests on the assumption that economic and social growth is determined by scientific and technological progress, which in the final instance is a political problem, because political problems are reduced to technical problems and their solution is delegated to experts rather than politicians.

A new type of technology and AI has arrived that is difficult to reflect on cognitively. One can no longer fully grasp what the new technologies are capable of; we perceive them as black boxes. The whole, or totality, of these technologies creates a new kind of sublime that we are unable to interpret (as in Kant’s and Lyotard’s theories of art). This has the effect of transforming the social subject itself. However, this phenomenon cannot be interpreted through technological determinism; we need to extend the Critical Theory of Technology with a new framework that offers a cognitive map of this infinite diversity. Our framework will therefore draw on Fredric Jameson’s theory (Jameson, 1992), combined with the Critical Theory of Technology, the philosophy of technology, and sociology. The main goal will be to trace the contradictions between the individual, technology, and society.

The representatives of the contemporary Critical Theory of technology have pointed out that the current digital world is witnessing a similar process of alienation to that which was previously identified using the methodology of classical critical theory. For example, the theory developed by Allmer (2017) uses tools enabling shared data to be subjected to critical investigation from the perspective of economic and power relations. Although it appears that data are handled by users, in reality these data are owned by large corporations. Allmer claims that user data are exploited for the accumulation of capital—which, in this digital environment, creates social divisions (Allmer, 2017, p. 21). In his view, the fact that this principle of capital accumulation has migrated from the material environment of physical commodities into the digital world reflects the gradual evolution of commodification.

However, the commodification of public goods (such as data) brings with it numerous complications: for example, the practice of digital reproduction accentuates the privatization of data (see e.g. Rakowski and Kowalikova, 2020, pp. 29–30).

Our use of the Critical Theory of Technology means that we will analyze contemporary societal phenomena using a dialectical method with respect to society as a whole; yet for the purposes of our analysis, in order to understand contemporary relations within society, it is necessary to proceed via an analysis of technologies. It is therefore essential to select a mediating framework positioned between technology and society. The Critical Theory of Technology approaches such analyses using an interpretative framework according to which the asymmetry of power relations is incorporated into the actual design of technologies. Technology is understood as a reflection of societal relations, and for this reason it cannot be viewed in neutral terms (this is the fundamental paradox of empirical analysis). From this perspective, technology cannot be designed outside the societal context. The goals of technology thus correspond with the goals of its own production process (Rakowski and Kowalikova, 2020, p. 30).

In this way, the Critical Theory of Technology draws attention to the socially conditioned construction of technology and the impacts of technology on society. Critical Theory explores the dialectic of substance and phenomenon, as well as focusing on the societal reality which manifests the specific historical activities of humans (Allmer, 2017, p. 25). However, a problem arises if we ask what precisely should be considered a phenomenon, and how structures should be interpreted in the context of digital capitalism (see also Rakowski and Kowalikova, 2020, p. 30). How should we interpret the position of the individual in the context of technologies, the power relations of technologies, the mediation between humans and technologies, or the ideology of technologies? These are among the fundamental interpretative questions explored in this study.

Although Critical Theory is essentially value-laden, in our opinion it should conduct its analysis in a neutral manner: technology should neither be adored nor demonized; we need to be able to identify both good and bad aspects of technology, and only in this way will we have the tools to transform it, i.e., to democratize its latent functions. This approach represents our methodological innovation: we want to use the Critical Theory of Technology for the purpose of analysis and description, not to exploit its value-laden nature in normative criticism (Rakowski and Kowalikova, 2020, p. 31).

Following Andrew Feenberg (2009), we can distinguish two main approaches in the theory of technology. The first is instrumental: technical tools are viewed as neutral resources that merely serve societal goals, helping to achieve efficiency. This approach is purely functional; technology is detached from the context of political ideology. The second approach is substantive: it denies that technology is neutral and accentuates its negative impact on humanity and nature. According to Allmer (2017), a third approach, critical and dialectical, is needed. This approach constitutes an interpretative framework according to which technology cannot be separated from its use: technology is already defined before it comes into existence, and it emerges into a specific value context, thus contributing to the maintenance of existing social relationships (Allmer, 2017, p. 38). In this study we draw on Allmer’s approach, but our methodology, in line with Feenberg, takes a non-deterministic approach to technology. We do not view technology merely as a set of devices or a sum of rational goals; in our approach, the nature of technology is also shaped by factors such as public opinion, i.e., the normative requirement of democratic instrumentalization (see Feenberg, 2009, p. 146). We thus view technology in connection with specific social discourses: experts, norms, institutions (Rakowski and Kowalikova, 2020, pp. 31–32).

In our view, this approach needs to be further elaborated. It frequently happens that technology becomes imbued (whether consciously or unconsciously) with specific values, and a hermeneutics of technology should be capable of interpreting these values. Technologies contribute to the formation of the principles according to which we live; at the same time, technologies can, to a certain extent, represent either our own values or the values of others. Although there exists a tendency to view technology and politics as separate domains, in our opinion technology is not a neutral resource: on the one hand it has its own value (and it can reflect various private intentions), while on the other hand its course of development can be determined by society. Applying this methodological approach, this study thus views technology as an outcome of numerous factors: the meaning of technology is only defined once it is used in the societal context.

Our methodology will therefore draw primarily on three methodological approaches:

  • analysis of the political dimension of technologies and interpretation of social relations applying the Critical Theory of Technology (see Andrew Feenberg, Christian Fuchs, or Thomas Allmer);

  • application of the philosophy of information to questions of knowledge/gnoseology of the social universe as a Big Data project (as elaborated in the work of the Italian philosopher Luciano Floridi), using analytical tools developed by us to explore contemporary information and communication technologies;

  • we will also focus on the general concept of the datafication of knowledge, which expresses the current trend in which knowledge is converted into digital form and subsequently analyzed. This primarily benefits those who have access to the algorithms and artificial intelligence capable of analyzing this vast amount of data. A didactic tool in this context can be so-called computational thinking.

These three distinct traditions need to be integrated, because the first approach lacks an analysis of contemporary phenomena such as Big Data. Luciano Floridi’s theory of information (2014), on the other hand, does not incorporate a political analysis of (new) technologies that would offer more than a mere description of new forms of the postmodern, information-filled world. Without such an analysis, we cannot take a critical view of the contemporary digital world or explore the negative aspects of the transformation of data into capital, the instrumentalization of data, innovations in business models, and similar issues. One innovative aspect of our methodology is that our analysis encompasses the political context of technologies while also taking into consideration how technologies are transforming social subjects.

Challenges in the analyses

This innovative methodological approach will be one of the main contributions to the contemporary debate:

  • to identify the most important elements in the earlier and contemporary critical theory of technology which are appropriate for the analysis to be conducted, to elaborate and apply our own interpretative method integrating various theories and methodological concepts from the field of digital technologies with the topic of digitalization;

  • to analyze the divisions arising from the use of selected new technologies and AI;

  • to reflect on the construction of contemporary reality as influenced by selected modern technologies and AI;

  • to delineate the roles played by these selected technologies and AI in society;

  • to investigate the social context of the risks associated with selected modern technologies, their mutual relationships, and the complexity arising from the use of AI.

Power, politics, and data

These problems are complex and require collaboration between technology companies, governments, academic institutions, and society as a whole. It is likely that we will continue to grapple with these issues for a long time, and it will be necessary to seek lasting solutions. In our reflection, we can see how social disparities, similar to the classical contradictions of the material world, are now appearing in the digital realm. It is necessary to examine shared data critically from the perspective of economic and power relations. Although users seem to handle data, the data are actually owned by someone else, who truly decides how they are dealt with. This is certainly disconcerting, and its impact on users needs to be explored.

Through user data, capital accumulates easily, turning this digital environment into an arena of struggles where class and social disparities emerge. The transfer of the principle of capital accumulation from the material environment of commodities to the digital world is part of the evolution of commodification. However, commodifying public goods (such as data) brings a host of complications, including the politicization of privacy protection. It is therefore necessary to create new forms of capital, ideally ones that involve the user, who constantly produces data in this digital production. Following the vocabulary of Critical Theory, this phenomenon can be labeled digital ideology and digital exploitation.

Discussion

Numerous present-day conflicts between individuals and the online environment arise from the reconfiguration of the way data is interpreted. It is clear that a similar contradiction to that seen in the physical world emerges here, wherein an unfair market position arises based on knowledge, as well as access to interpreted data—algorithms and artificial intelligence provide advantages.

Several challenges stem from the way technology influences and shapes our perspective of the world, often subtly and not immediately obvious to those experiencing it. In a setting where algorithms are not transparent, knowledge is transformed into data, and imbalances exist in technology’s creation and design, it becomes imperative to explore how individuals perceive and understand their surroundings.

This highlights the challenges and complexities that arise in our relationship with technology, especially when it comes to how technology influences our perception and understanding of the world (Bridle, 2019). We can identify several problems.

Technology plays a significant role in shaping our experiences and interactions with the world around us, influencing how we perceive, understand, and engage with our surroundings.

  • Technology acts as a filter, influencing how we access and process information. This filtering can be subtle and may not always be apparent to individuals. Technology can prioritize certain information while obscuring or diminishing other information, shaping our perspectives and knowledge base.

  • The inner workings of algorithms and data-driven systems are often opaque. This lack of transparency makes it difficult to understand how technology makes decisions or shapes our experiences, and the opacity of algorithms can lead to biases and unintended consequences.

  • Traditional knowledge is increasingly being transformed into digital data, enabling it to be processed, analyzed, and manipulated by technology. This datafication of knowledge has both positive and negative implications: it can facilitate access to information and enable new forms of analysis, but it can also lead to the loss of contextual nuances and the prioritization of quantifiable data over qualitative insights.

  • Technology is often designed and produced by specific groups or organizations, leading to power imbalances and biases in how technology operates and what it prioritizes. The designers and producers of technology have significant influence over how it shapes our experiences and understandings.

  • Given the complexities introduced by technology, it is crucial to examine how individuals’ understanding of the world is affected. The epistemic position of the subject, in terms of their knowledge and understanding, is shaped by the technological landscape they inhabit. Understanding the impact of technology on individual epistemologies is essential for navigating an increasingly tech-mediated world.

In summary, these points underscore the need for further examination of how technology influences our knowledge and perception of the world, especially in the context of opaque algorithms, data-driven knowledge, and disparities in technology design and production. They highlight the importance of being aware of these influences and their potential consequences.

If we decide to bypass the Critical Theory of Technology and attempt to find a tool through which the subject of knowledge could defend itself, we should take an educational approach and contrast the “datafication of knowledge” with the term “computational thinking.”

Computational Thinking is currently a vital skill for navigating the ever-changing landscape of technology, applications, and the vast realm of Big Data. Educational institutions in modern countries, from primary to secondary schools, recognize the significance of Computational Thinking. It provides students with adaptability and the ability to view the natural world as a series of logical operations that can be programmed through software. It encompasses attitudes and skills that empower individuals to identify and tackle complex problems, fostering a mindset of flexibility. Graduates equipped with Computational Thinking possess versatile thinking, making them more competitive in a labor market that is gradually being shaped by automation and robotization, known as Industry 4.0. Such graduates no longer perceive technology as a mysterious black box they merely use; instead, they actively engage with it by interpreting, utilizing, and modifying it (Rakowski et al., 2023).

Computational Thinking acknowledges the computational aspects of the natural and technological environment that surrounds us. It enables adjustments in a rapidly evolving world, bringing about significant innovations to both individuals and society as a whole. It offers a set of problem-solving approaches aimed at making computers solve specific tasks. Within the realm of technological innovation, this is considered a fundamental skill necessary for meeting the increasing demands of the fourth industrial revolution. These abilities encompass a range of cognitive faculties that transform intricate real-world problems into solvable forms that can be handled by a machine without additional human intervention.

To design algorithms or programs capable of performing computations and to comprehend the underlying processes of natural information, a distinct form of thinking is essential. Computational Thinking encompasses various modes of thinking and problem-solving skills that can be honed through practice and teamwork. It represents a rich set of interdisciplinary abilities applicable to a wide array of subjects in both the natural and social sciences. It does not reflect the way computers think, even though we can program them to mimic this approach; instead, it comprises various human problem-solving abilities resulting from the study of computation’s nature. Computational Thinking draws on skills such as creativity, interpretation, and abstraction, coupled with the capacity to think mathematically, logically, and algorithmically, scrutinizing details while inventing novel methods to enhance processes. Computational Thinking harmonizes these diverse modes of thinking, serving as a dependable tool for designing algorithms (Rakowski et al., 2023).
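
As a small illustration of this decomposition-and-abstraction cycle (the scenario, data, and function names are our own invented example, not taken from the cited literature), the Python sketch below breaks an everyday question, “which weekday is busiest?”, into steps simple enough for a machine to execute without further human intervention.

```python
from collections import Counter

# Invented visit log; each entry records the weekday of one visit.
visits = ["Mon", "Tue", "Mon", "Fri", "Mon", "Fri"]

def aggregate(visits):
    """Abstraction step: reduce the raw log to counts per weekday."""
    return Counter(visits)

def busiest(counts):
    """Algorithm step: select the weekday with the maximum count."""
    return max(counts, key=counts.get)

print(busiest(aggregate(visits)))  # Mon
```

The point is not the triviality of the task but the habit of mind: the question is decomposed into named subproblems, irrelevant detail is abstracted away, and the result is an algorithm that generalizes to any log of any size.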

Conclusion

The study has identified that the use of artificial intelligence carries a variety of political and social impacts, influencing both human and online environments, while also transferring societal contradictions from the material world to the digital realm. These impacts include, for example, changes in political power mediated through political culture, which can strengthen or weaken the positions of governments, businesses, civil society, and individuals. Furthermore, there is a shift in social structure, as digital technology alters how people communicate within society. In addition, social values are changing as digital technologies influence how the world around us is perceived and evaluated. It is also evident that a digital class is emerging, one that produces data but lacks access to these data.

The study has also revealed that these changes can lead to political and social conflicts. These conflicts include tensions between democratic values and data collection, where digital technologies can jeopardize the privacy and freedom of individuals. Other conflicts arise between market economy and data sharing, where gathering information about people can lead to discrimination, ethical dilemmas, and cognitive biases. There are also conflicts between individual rights and public well-being, as monitoring and influencing individuals’ behavior may disrupt their freedom.

In response to these conflicts, the study recommends the implementation of political and social measures aimed at strengthening democratic values and protecting human rights. This includes better regulation of digital technologies, support for civil society in advocating for democratic values online, and public education about the political and social consequences of digital transformation. As shown, there is a need to democratize technology: on one side stands ethics, embedded in algorithms and artificial intelligence, and on the other side stand civil society initiatives that must exert pressure on norms.

The study also proposes three areas for further research. The first concerns the impact of digital transformation on various social groups, such as minorities, women, and people from economically disadvantaged areas. The second area involves exploring the political and social mechanisms leading to conflicts in human and online environments. The last area focuses on finding new solutions to political and social conflicts in both of these environments.