The quest for wisdom is ancient, undertaken by virtually every stream of human intellectual development, and a cornerstone of human civilisation. However, perhaps due to its inherent connotations of ‘correctness’, ‘success’ and ‘morality’, in other words all that is aspirational, its academic study is fraught with chaos and difficulty. We do not have a common definition of wisdom, nor do we have an established methodological protocol to analyse or measure it. The most important reason for this deficiency is the issue of subjectivity and bias: no two people share a common opinion on what would be a wise decision. In the management domain, a solution to this issue is sought quite literally in numbers. That is, there is the concept of organisational wisdom, which, despite its elements of morality, leadership and knowledge transfer, carries with it the importance of the decision record, or in layman’s terms, experience. You could call it knowledge, experience, or even organisational culture; as long as there is collective agreement, the organisational wisdom construct streamlines multiple wisdoms based on the records of previous decisions, which can be normalised into working patterns and decision making. The importance of organisational wisdom lies in reconciling moral and business aspirations with the practical realities of the workplace. This does not, however, imply a commonly agreed definition of organisational wisdom; nor do we have a blueprint of a wise organisation. But the argument of safety in numbers holds, to some extent, for the organisational wisdom concept as well.

Generative AI systems have, however, swept these elements of subjectivity, bias and numbers aside, simply on the basis of the sheer volume of the data lakes and warehouses on which their algorithms work to make choices. For a generative AI tool like ChatGPT, with access to global search engines and data at some of their most voluminous capacities, the decision records and decision making should, by the looks of it, be ‘wise’. The artificial wisdom concept is built on this notion of evolving AI systems into wise systems. Artificial wisdom describes the development of advanced AI systems that can make decisions based not just on pre-programmed rules or learned patterns, but also on an understanding of context, ethics and moral principles. This comprises two parts, the datasets and the generative, predictive modelling built on these datasets, making such systems gradually independent of human intervention.

Measuring artificial wisdom is complex and subjective. It involves assessing aspects such as decision accuracy, ethical reasoning, common sense reasoning and adaptability. Decision accuracy is the ability of the AI system to make decisions that are consistent with human expectations and lead to desired outcomes. Ethical reasoning is the ability of the AI system to take into account ethical considerations and make decisions that align with societal norms and moral principles. Common sense reasoning is the ability of the AI system to apply general knowledge and understanding of the world to new situations. Adaptability is the ability of the AI system to modify its behaviour and decision-making processes based on new information and changing circumstances. It is imperative to remember that no single metric can fully capture the concept of artificial wisdom, and different metrics may be more or less relevant depending on the specific application and use case of the AI system.
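As a purely illustrative sketch of how such an assessment might be operationalised, the toy example below aggregates the four dimensions above into a weighted composite score. The dimension names, the 0–1 scoring scale and the weights are assumptions made for illustration, not an established instrument, and the caveat above stands: no single number captures artificial wisdom.

```python
from dataclasses import dataclass

# Hypothetical, illustrative dimensions of 'artificial wisdom'; the names,
# the 0-1 scoring scale and the weights are assumptions for this sketch,
# not an established measurement instrument.
@dataclass
class WisdomAssessment:
    decision_accuracy: float   # agreement with human expectations and desired outcomes
    ethical_reasoning: float   # alignment with societal norms and moral principles
    common_sense: float        # sensible handling of novel, everyday situations
    adaptability: float        # revision of behaviour given new information

def composite_wisdom_score(a: WisdomAssessment,
                           weights: dict[str, float]) -> float:
    """Weighted average of the four dimensions.

    The weights depend on the application and use case; no single
    composite number can fully capture 'artificial wisdom'.
    """
    scores = {
        "decision_accuracy": a.decision_accuracy,
        "ethical_reasoning": a.ethical_reasoning,
        "common_sense": a.common_sense,
        "adaptability": a.adaptability,
    }
    total_weight = sum(weights.values())
    return sum(weights[k] * scores[k] for k in scores) / total_weight

# Example: a use case that prioritises ethical reasoning over raw accuracy.
assessment = WisdomAssessment(decision_accuracy=0.82, ethical_reasoning=0.64,
                              common_sense=0.71, adaptability=0.77)
weights = {"decision_accuracy": 0.2, "ethical_reasoning": 0.4,
           "common_sense": 0.2, "adaptability": 0.2}
print(round(composite_wisdom_score(assessment, weights), 3))
```

The explicit weights make the application-dependence visible: a medical triage use case might weight ethical reasoning heavily, whilst a recommendation engine might prioritise decision accuracy.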

The wisdom question is in many ways at the heart of the AI debate: can AI systems become human-like? Human cognition, in its deep complexity, is a tricky combination of explicit information, tacit knowledge and creative imagination, taking into account both reality and foresight. Given the steady increase in AI penetration into human commerce and society, with decision making ranging from selecting a coffee flavour to directing a nuclear missile, the wisdom question needs careful reckoning.

An important insight into artificial wisdom can be drawn from questions of standardisation, replication and evolution. Can AI systems standardise a wise choice in every selection and every judgement? How can AI replicate wise choice patterns as the datasets keep building up? And most importantly, how can an AI system evolve its ‘wise’ choices given the paradoxical question of whether wise judgement is universal across space and time?

Present generative AI systems are often afflicted by issues arising from the traditional evolutionary view of human judgement as a progression from data to information to knowledge and, finally, to wisdom, with its simultaneous connotations of objective reason and subjective morality. This is particularly important for a generative AI system's algorithmic reasoning around practical rationality, factual correctness and moral reasoning. Examples in this debate include the factual missteps of Google’s Bard, ChatGPT’s ‘real-sounding’ but fake references to academic publications, and the currently trending deepfake technology. In these cases, there is no clear line between data, information and knowledge as perceived by these AI systems in their responses, from either a rational or a moral standpoint.

The artificial wisdom question cannot, however, simply be discounted on account of these mechanical issues. With the rapid development of AI technologies, these systems are already in need of a moral compass that can stabilise the information gleaned from their datasets. The replication of human cognition is, as has been argued before, not limited to providing the ‘best’ or explicitly wise choice, but extends to providing what is perceived as the most adequate or optimal solution.

In this line of argument, artificial wisdom can be seen as a means of redistributing power within organisations and societies, as it has the potential to automate and standardise decision-making processes, reducing the influence of individual decision makers. This can lead to more consistent and objective decisions, as well as the elimination of human biases and emotions in decision making.

On the other hand, one can also argue that the distribution of power in AI-based decision-making systems is not automatic but is shaped by the design and implementation of these systems. The development and deployment of AI systems are often influenced by the interests of those who control the technology, such as corporations, governments and other organisations. As a result, the deployment of artificial wisdom can also reinforce existing power structures if the AI systems are designed and implemented in ways that reflect the interests and perspectives of those who hold power.

Who is responsible for the decisions made by AI systems? How can we ensure that they align with human values and societal norms? These are the questions that must be addressed as the field of artificial wisdom continues to develop.

It is in the debate on morality that the wisdom question spins off in different directions. One major reason for these divergent views could be the culture-based differences in moral perception that lead to diverse decisions, judgements and choices amongst humans. In generative AI systems, the customisation of user experiences also absorbs these heuristics, which may not be algorithmically standardisable, thus raising the basic evolutionary question: what is a wise decision? Is this ‘wise decision’ wise everywhere and at all times? The human–AI decision-making dyad therefore needs careful reconsideration from a wisdom perspective, to bring the hitherto unquantifiable moral and contextual aspects into the decision-making processes of AI systems and into their engagement with, and customisation for, their human users.

The current rise of generative AI systems raises the obvious question of artificial wisdom. It is an exciting challenge for researchers and practitioners to find common and beneficial ground in this debate. Whilst there are many potential benefits, the ethical and philosophical considerations must be weighed as well. As AI systems continue to play an increasingly important role in our lives, it is critical that we work towards ensuring that they make wise and ethical decisions that align with human values and societal norms.