Introduction

The decision-making process (DMP) plays a critical role in organizations. A good DMP is required to ensure their proper operation, profitability, and efficiency. An appropriate DMP demands “right” and timely information to support a successful choice with the least possible uncertainty (Citroen, 2009, 2011; Power, 2002). Of course, the types of decisions vary according to their level within the organization, the complexity of the situation, and the criteria used to ensure the effectiveness of the desired end (Hegarty & Hoffman, 1987; Power, 2002). This has given rise to tools, technologies, and methodologies called decision support systems (DSS). DSS provide a way to handle the challenges of the DMP in current dynamic market environments, where data and information are abundant and crucial to the success of organizations (Nooraie, 2012).

Nowadays, these technologies are readily available and their use is increasing as organizations seek a better understanding of their activities and pursue a deeper knowledge of their competitive environment (Citroen, 2009; Power, 2002). This, in turn, has led to an evolution in the way management makes decisions, relying more on objective evidence obtained from data analysis and the proper use of information (Pfeffer & Sutton, 2006; Tort-Martorell et al., 2011).

Despite all that, the question of whether organizations know what the data and information pertinent to support different types of decisions are, and where to find them, continues to be a relevant one. This is particularly important in extraordinary situations involving non-routine decisions, because these represent challenges for organizations that might find themselves devoid of the technological means and/or the experience required to support decisions made under unusual conditions (Citroen, 2009, 2011; Frishammar, 2003).

The aim of this article is to review the chronological evolution of the DMP and the role that information and decision support technologies have played in that process from 1950 until recent times. Consequently, the use of objective evidence for information-driven decision-making will be analysed with an emphasis on how this translates into successful actions intended to achieve the organization’s objectives. 

This article is structured as follows: “Evolution of the Decision-Making Process” covers a literature review of the evolution of the DMP in organizations, “Chronology of Information Technology that Supports the Decision-Making” discusses the chronology of the technology developed for supporting decisions, and “Discussion on the Evolution of the Role of Information in the DMP” presents a discussion on the role of information and information technologies in the DMP. Finally, some conclusions are given with the closing ideas of the paper.

Evolution of the Decision-Making Process

To understand the elements involved in the DMP, some definitions are required first. In 1947, the term “decision process” was introduced from the perspective of organizations by Herbert A. Simon, a pioneer of scientific administration based on decision-making. Simon argued that the organization is a reflection of its decision-making (Simon, 1947).

A decision can be defined as the moment in which, through a continuous process of evaluating possible alternatives related to a given target, the appropriate choice takes place, driven by the expectation associated with a course of action (Harrison, 1999). Other definitions emphasize a particular commitment that leads to action, often associated with the allocation of resources for a purpose. Therefore, the DMP is immersed in actions and variables under an integrated approach that starts by identifying a stimulus for action and whose output is associated with a commitment to act accordingly (Mintzberg et al., 1976). On this basis, decision-making is a dynamic, complex, and potentially ambiguous process that occurs under uncertainty and risk (Power, 2002).

Over several decades, most authors have agreed that decisions are the result of a dynamic process through which a goal is achieved. Thus, the DMP is an integral and critical part of the organization’s management, aimed at choosing, among a set of possibilities, the alternative that may resolve a situation in a way that is satisfactory for all stakeholders (Bross, 1953; Citroen, 2009, 2011; Drucker, 1967; Power, 2002; Simon, 1947, 1955, 1957, 1960).

Naturally, the way of tackling the DMP has evolved over time, adapting to the needs, challenges, and technologies of every age (Buchanan & O’Connell, 2006). In the following sections, we present a chronological review starting in 1950 and progressing through the decades.

From 1950 to 1959: a Rational Approach to Bounded Rationality

In this decade, the DMP came to be understood as a system through which information flows, and the use of statistical tools for the design of decision models began. Two main criteria were introduced, “maximize expected profits” and “minimize the maximum risk,” as well as the concept of the “sequential decision” for planning each stage of complex decisions (Bross, 1953).

In 1953, Irwin D. J. Bross proposed a decision model based on data and statistical principles, distinguishing the real from the symbolic world and stressing the importance of measurements as a validation element (Bross, 1953). Thus, data quality began to be considered an important issue. In addition, the first reported use of the terms “individual decision,” “administrative decision,” and “group decision” is found in Bross (1953). Shortly afterwards, the term “management by objectives” was coined, referring to “finding opportunities rather than focusing on problems based on the pursuit of the organization’s mission.” Later, this approach would be called “business strategy” (Drucker, 1954).

On the other hand, the rational behaviour of the decision maker was discussed and the term “bounded rationality” was coined. It was proposed to model human behaviour as that of a social agent who acts influenced by emotional impulses rather than by pure rationality. In consequence, the DMP was rationalized from the perspective of finding a mechanism of choice leading to the adoption of “satisfactory” decisions for existing needs, rather than optimal solutions according to the classic posture of rational behaviour (Simon, 1955, 1957).

It is noteworthy that this decade saw important research activity that greatly expanded the field of application of game theory, which laid the foundation for the study of decisions in interacting environments and for the understanding of human cooperation. The dilemma of “social choice and individual values,” the dimensions of uncertainty for decision-making, and the dynamics of group decision theory were studied (Luce & Raiffa, 1957). The same happened with the influence on decisions of factors such as leadership, authority, guidance, risk policy, and the interests of stakeholders. It was an attempt to understand the mechanisms used by individuals, groups, organizations, and society to make decisions (March, 1957).

At the end of this decade, Luce’s choice axiom was formulated within the domain of probability theory. It states that, in a set of many items, the relative probability of selecting one item over another is not affected by the presence of other elements. This property was called “independence of irrelevant alternatives”; it allowed the DMP to be modelled from an approximately rational approach and provided the basis for modelling the tendency of consumers to prefer a product or brand, laying the basis of “individual choice behaviour” (Luce, 1959).
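For readers who prefer a formula, the axiom can be stated compactly (this is a standard modern rendering, not Luce’s original wording): each alternative x in a choice set A carries a positive scale value v(x), and

```latex
P(x \mid A) = \frac{v(x)}{\sum_{y \in A} v(y)},
\qquad \text{so that} \qquad
\frac{P(x \mid A)}{P(z \mid A)} = \frac{v(x)}{v(z)} .
```

The ratio of the choice probabilities of x and z is thus unchanged by adding or removing other alternatives, which is precisely the independence of irrelevant alternatives.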

From 1960 to 1969: Systematization and Hierarchization of DMP

This decade bound the DMP together with problem-solving. Problems were classified as “structured” or “unstructured” for decision-making purposes (Power, 2002; Simon, 1960). In addition, theories and concepts related to the psychological processes underlying judgment and choice were presented. The notion of similarity between alternatives in choice behaviour was introduced (Restle, 1961).

In the middle of this decade, Charles Kepner and Benjamin Tregoe proposed four rational processes for problem-solving and decision-making: (1) assessment of the situation, (2) problem analysis, (3) analysis of decisions, and (4) analysis of potential problems (opportunity) (Kepner & Tregoe, 1965). Their now classical method gives a set of systematic procedures to identify the root cause of a problem and find a solution. These procedures are based on critically analysing data, information, and experience.

A few years later, Peter Drucker led the development of a systematic DMP based on clearly defined elements addressed through a sequence of steps: (a) classification and definition of the problem; (b) specification of the response to the problem; (c) establishing what is right as opposed to what is merely acceptable in the context of meeting the conditions given by the environment; (d) building, on the basis of the decision, the action to be carried out; and (e) testing the validity and effectiveness of the decision. This increased the effectiveness of executives in decision-making (Drucker, 1967).

In this decade, the scope of the decision was extended to all areas of the organization, including the idea that decisions are made by individuals and groups at all levels of the organization. Decisions were classified into four categories, represented in a pyramidal hierarchical scheme associated with the organizational levels: (a) strategic planning, whose decisions are addressed by senior management; (b) management control, whose decisions are aimed at controlling the proper development of the efforts undertaken; (c) operational control, whose decisions seek to control the effectiveness of the organizational actions; and (d) operational performance, whose decisions are related to those made in daily work of the functional units focusing on the implementation of strategic decisions, functional tactics, and operational activities (Power, 2002).

At the end of this decade, the techniques of flow diagrams and decision trees were developed. A discussion of the cost of imperfect sampled information versus the worth of perfect information was presented, providing an approach for deciding under the uncertain and complex conditions of the real world. These advances have been used extensively ever since (Raiffa, 1968).
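A compact way to express the “worth of perfect information” idea in modern decision-analysis notation (a standard formulation, not Raiffa’s original example): with actions a, states s occurring with probabilities p(s), and payoffs u(a, s), the expected value of perfect information (EVPI) is the gain from being able to choose after the state is revealed rather than before:

```latex
\mathrm{EVPI}
= \sum_{s} p(s)\, \max_{a} u(a, s)
\;-\; \max_{a} \sum_{s} p(s)\, u(a, s) \;\ge\; 0 .
```

The cost of an imperfect sample or survey can then be judged against this quantity, which is an upper bound on what any additional information could be worth.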

A final important development of this decade is the SWOT analysis model (Strengths, Weaknesses, Opportunities, and Threats) proposed by Learned et al. The method, based on achieving a strategic fit between the internal capacities and the external possibilities of an organization, is useful for making decisions and prioritizing actions in complex situations (Learned et al., 1969).

From 1970 to 1979: Early Stages of Computer Aided Complex Methods

At the beginning of this decade, the term “groupthink” was proposed to explain a process that can lead groups to make wrong and irrational decisions: through an apparent consensus, decisions are made under peer pressure, which impairs rational judgment, efficient thinking, and the evaluation of the situation to be solved (Janis, 1972).

Tversky introduced a general theory of choice that became the basis for the development of decision models built on a process of covert elimination (Tversky, 1972). The idea is to evaluate the different alternatives taking into account a number of aspects, using an iterative selection process. The procedure works as follows: one aspect of the options is evaluated at a time, beginning with the most important one. When an option does not fulfil the established criterion, it is eliminated. This process is repeated until only one alternative remains. This elimination-by-aspects model of choice solved the main problems concerning the assumption of independence of irrelevant alternatives.
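A minimal sketch of the elimination-by-aspects idea is given below (the options, aspects, and thresholds are hypothetical, and the sketch applies the aspects deterministically in order of importance, whereas Tversky’s original model selects aspects probabilistically):

```python
# Hypothetical options, each described by a few measurable aspects.
options = {
    "A": {"price": 700, "battery_h": 12, "warranty_y": 2},
    "B": {"price": 900, "battery_h": 15, "warranty_y": 1},
    "C": {"price": 650, "battery_h": 8,  "warranty_y": 3},
}

# Aspects ordered by importance; each is a predicate an option must satisfy to survive.
aspects_by_importance = [
    ("affordable",    lambda o: o["price"] <= 800),
    ("long battery",  lambda o: o["battery_h"] >= 10),
    ("good warranty", lambda o: o["warranty_y"] >= 2),
]

remaining = dict(options)
for name, satisfies in aspects_by_importance:
    if len(remaining) == 1:
        break  # only one alternative left: the choice is made
    surviving = {k: v for k, v in remaining.items() if satisfies(v)}
    if surviving:          # skip an aspect that would eliminate every option
        remaining = surviving

print("Chosen:", sorted(remaining))  # -> ['A'] with these hypothetical values
```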

In parallel, “the garbage can model” was presented as an alternative to the normative models of rational choices. It proposes making decisions despite the conditions of “organized anarchy” by assessing the problems and their solutions as choice opportunities (Cohen et al., 1972).

Many developments of the early years of this decade represented a paradigm shift. The organization was no longer seen as a set of isolated elements but as a complex system of interrelated elements. This new paradigm considers that humans bring their skills and knowledge to the growth of the entire company, and decision-making is considered an essential management skill (Drucker, 1973). Other approaches, based on intuition and creative strategy rather than on the rational and analytical component, emerged. The idea is that the role of the manager, immersed in the organization’s chaotic environment, is to be fast, creative, and adaptive (Mintzberg, 1973).

Vroom and Yetton developed a model to explain how leadership style influences the degree of participation of subordinates in decision-making. This model was presented as a decision tree to be analysed by the leader according to the magnitude of various types of problems that should be delegated as tasks that lead to their resolution (Vroom & Yetton, 1973).

In the middle of the decade, Mintzberg et al. drew attention to the fact that non-routine decisions, namely the ones more common at the highest level of the organizational hierarchy, are frequently taken by unstructured DMPs (Mintzberg et al., 1976). Furthermore, they detected a lack of attention to these types of decisions. Considering that DMPs were dynamic, highly complex, and dependent on a conceptual framework, they identified a gap between the decision process and the organizational structure. The reduction of this gap is essential to improving the functioning of the organization.

In line with the advent of the computing era, the classification of decisions as “programmed” was used for repetitive decisions, while “non-programmed” referred to those unstructured decisions that require complex processing of information. Along the same line, four interdependent phases were presented for DMPs: intelligence, design, choice, and review (Pomerol & Adam, 2004).

The idea of bounded rationality was maintained. It suggests that the mechanisms of human rational choice involve using limited information-processing capabilities to search for alternatives (Makkonen, 2020). A satisfactory solution is then found by calculating, in the presence of uncertainty, the consequences of each choice. Bounded rationality held that fully rational human decision-making was conditioned by the complexities of the environment and by the limited capabilities of the computational resources available at the time. The theory opened up new horizons in the mathematical modelling of decision-making (Simon, 1978a, b).

By the end of the decade, preference trees, or “Pretree,” emerged as an evolution of the elimination-by-aspects model, maintaining the basic principles of covert elimination but representing them hierarchically in a tree structure (Tversky & Sattath, 1979).

Mintzberg’s organizational behaviour model consolidated the hierarchical principles of the DMP. This model describes the parts of organizations: the “operating core,” in which the activities for the realization of the product or service take place; the “middle line,” the intermediate chain of command; the “strategic apex,” formed by senior executives; the “technostructure,” positioned alongside the middle line but outside the line of operational authority; and the “support staff,” likewise located alongside the middle line and independent of the operational base (Mintzberg, 1979).

Mintzberg’s model is based on the idea that the company must have an internal consistency that allows it to face the competitive conditions of the external environment. The model also identifies the flows of information at the different levels: operating work, vertical information for control and decision-making, and staff information (Mintzberg, 1979).

By the end of the decade, “prospect theory” was developed as an alternative to expected utility theory for decision-making under risk. Prospect theory models how people make decisions in the situations of uncertainty present in the real world. It proposes a model of choice in which, instead of assigning a value to the final outcome, value is assigned to gains and losses, and probabilities are replaced by decision weights (Kahneman & Tversky, 1979).
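In the notation of the 1979 paper, a prospect offering outcomes x_i (gains or losses relative to a reference point) with probabilities p_i is evaluated not by its expected utility but by

```latex
V = \sum_i \pi(p_i)\, v(x_i),
```

where v(·) is a value function defined over gains and losses (concave for gains, convex and steeper for losses) and π(·) is the decision-weighting function that replaces the stated probabilities.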

From 1980 to 1989: The Beginning of the Information Age

Early in the decade, interest was centred on the study of the DMP in unstable environments and on how to manage the risk associated with decisions (Fredrickson & Mitchell, 1984). There was further interest in the cognitive factors influencing DMPs and the way in which the inherent tasks are performed under uncertainty (Schwenk, 1988). A greater emphasis on the use of information and technology for decision-making became evident; its importance for gaining competitive advantage appeared as a key aspect for the near future (Porter & Millar, 1985).

With regard to the progress of hierarchical approaches for multi-criteria decisions, this decade witnessed the managerial application of the analytic hierarchy process (AHP), a mathematical technique developed at the end of the 1970s. AHP proposes a prioritized structure that facilitates ranking alternatives according to their degree of fulfilment of several predefined criteria in order to reach a consensual group decision quantitatively. AHP has received criticism, and multiple fixes were proposed in the years to come, such as the REMBRANDT method, turning it into a rather well-established technique thanks to the simplicity and intuitiveness of its application (Basak & Saaty, 1993; Arnott & Pervan, 2005; Burstein & Holsapple, 2008; Saaty, 2008).
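A minimal sketch of the core AHP calculation, assuming a single level with three criteria compared pairwise on Saaty’s 1–9 scale (the comparison values below are hypothetical):

```python
import numpy as np

# Hypothetical pairwise comparisons of three criteria on Saaty's 1-9 scale:
# entry [i, j] says how much more important criterion i is than criterion j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Priorities are taken from the principal eigenvector of the comparison matrix.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1), compared with a random index.
n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)
ri = 0.58  # Saaty's random index for n = 3
cr = ci / ri

print("criterion weights:", np.round(weights, 3))
print("consistency ratio:", round(cr, 3))  # values below ~0.10 are usually acceptable
```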

Likewise, with the widespread adoption and scope of decision support systems (DSS), a classification framework, including communications-driven, data-driven, document-driven, knowledge-driven, and model-driven DSS, was used to better explain their application domain. Moreover, it was recognized that DSS could be designed to support decision-makers at any level in an organization. In the particular case of DSS for strategic decisions, they had to be designed taking into account their compatibility with the type of strategic decision-making models used in the organization and their ability to handle and share intersubjective and consensual information in a flexible way (Shrivastava, 1983; Power, 2002, 2007; Burstein & Holsapple, 2008).

By the end of this decade, March conducted an analysis of the use of information systems for decision-making in the presence of ambiguity, uncertainty, and incomplete data (March, 1987). He concluded there was a gap between decision theory and information engineering. It is not surprising that, in this context, some researchers like Simon thought it was very common for organizations to face situations in which the best strategy for making decisions in complex environments was to rely on the good judgment of their managers. According to Simon, a manager of good judgment has completed a psychological process of acquisition and improvement of “intuition.” Managers’ intuition is understood as their ability to create mindsets that unconsciously automate a quick and rational response, but with the inherent limitations of the available information (Simon, 1987).

Another study conducted in different companies identified several types of strategic decisions and their influence at the departmental level. It also established that the influence of senior management on all decisions made by companies was moderate; each department made its own decisions almost independently. The study also evaluated the influence of departmental attributes on the types of strategic decisions, concluding that environmental scanning represented the largest source of influence for product-market decisions, while technological and managerial decisions were influenced by hierarchy and access to resources (Hegarty & Hoffman, 1987).

From 1990 to 1999: Creating Organizational Knowledge and Integrating Its Components

A framework integrating the multiple developments made during the eighties helped to consolidate the foundations of the DMP and to provide guidelines for future research. Theoretical and empirical arguments allowed the identification of three groups of factors that influence the strategic DMP: environmental (uncertainty and complexity), organizational (related to the structure and characteristics of the organization, personnel, key work teams, performance, and strategies), and decision-specific (impetus, urgency, and risk) (Rajagopalan et al., 1993).

Negotiation appeared as an important management area, and Bazerman and Neale established the principles for decision-making during the negotiation process, based on the correct use of information and on the study of the opponent (Bazerman & Neale, 1992).

The introduction of the concept of the “knowing organization” (KO) had a high impact. The idea is that organizations with the ability to use information to gain a better understanding of their activities and their environment achieve a competitive advantage by making better decisions and having clearly defined courses of action. The model proposed to represent the KO consists of three concentric layers of information use: interpretation (sensemaking), conversion (knowledge creation), and processing (decision-making). Each inner layer takes as its input the output of its outer layer to progressively focus the information towards the organizational courses of action (Choo, 1996).

This decade was also characterized by more research on the DMP from a heuristic rather than a rational approach. An exploratory study that analysed mental models identified several key elements of the DMP, among them self-learning to adapt quickly to changing environments (Krabuanrat & Phelps, 1998).

Until the late 1990s, despite the advances in strategic management, research on DMPs in small businesses was scarce. A study found that small firms base their decisions more on intuition than on conventional rational approaches. Notions of rationality were applied only to collect external information to support such decisions. This was explained by the innovative nature of these types of companies, which can take greater risks in their enterprises (Brouthers et al., 1998).

At the end of this decade, new elements related to the organizational context which influences the DMP were discussed, such as national culture, the corporate governance structure, the role of information systems, and the need for a more integrated approach (Papadakis & Barwise, 1998).

From 2000 to 2009: Breakthrough of Information and its Management

In the early 2000s, research on DMPs continued to be interested in the use of information to reduce uncertainty. The mechanisms of data collection and verification were empirically studied. Two types of information that benefited decision-making were identified: “soft” (related to the subjective and qualitative aspects) and “hard” (objective, systematic, and quantitative). The importance of acquiring information from external sources as a mechanism to achieve better organizational alignment with the environment was highlighted, together with the fact that its search must be done through a structured but flexible process (Frishammar, 2003).

Despite the huge amount of data available and the technological advances such as data warehousing and data mining, a critical study emphasized the existence of problems that limit the capability of organizations to have the information needed to handle the internal and external complexity and dynamism. This study argues that the root cause of this situation was the lack of clear information requirements for organizational management. Specifications for the technologies were provided as guidelines for managing the information requirements in organizations (Lohman et al., 2003).

Throughout this decade, evidence-based management (EBM) was developed. Defined as “the conscientious, explicit and judicious use of current best evidence in making decisions,” EBM emerged as a branch of “evidence-based medicine,” a widely praised movement that reached clinical practice as well as healthcare management (Stewart, 2002). In the general DMP context, EBM encourages the adoption of a determined and committed approach to collecting the data necessary to make informed and intelligent management decisions. This trend was slow to grow due to the difficulty of transferring the EBM fundamentals from the clinical field to management, especially with respect to the characteristics of what is considered evidence in each case and the particularities of each organization (Learmonth & Harding, 2006; Pfeffer & Sutton, 2006; Rousseau, 2006).

In the middle of this decade, work on complex computer-based learning systems to assist in the acquisition of skills that lead to making good decisions was also published. Such systems were designed to train professionals in decision-making within changing, unstructured, and multivariate environments, providing theoretical and practical knowledge through routines framed around solving real problems. A distinction is made between learning focused on business decision-making and the methods and tools that support business decisions, since the latter do not make the decision but support the decision-maker (Collan & Lainema, 2005).

Another development was the introduction of stakeholders into the organization’s DMP. The diversity of stakeholders provides the ability to perceive multiple dimensions and interconnections. The DMP then becomes a mechanism for understanding stakeholders’ needs and addressing ethical concerns (Janczak & Thompson, 2005; Kiker et al., 2005).

The second part of the decade was characterized by a growing interest in further developing the methods for making group decisions based on multiple criteria and attributes. Some of the most significant contributions were based on the earlier operational research approaches of multi-criteria decision analysis (MCDA), fuzzy logic, and game theory. MCDA methods evaluate several alternatives and compare them against a number of criteria, selecting the best course of action based on aggregation rules while resolving the potential conflicts found in the analysis. Moreover, under uncertain and imprecise conditions, fuzzy sets are used along with MCDA to provide techniques for modelling, aggregating, selecting, and categorizing preferences and alternatives. These advances contributed to optimizing the evaluation of management alternatives with respect to multiple and ambiguous criteria, preventing deviations due to individual preferences or to the inherent limitations of human capabilities when it comes to processing such a number of heterogeneous scenarios (Kiker et al., 2005; Hamilton-Wright & Stashuk, 2006; Burstein & Holsapple, 2008).
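As a simplified illustration of the aggregation idea behind MCDA, the sketch below uses a plain weighted-sum rule with min–max normalization; it is not any of the specific fuzzy or game-theoretic methods cited above, and the alternatives, criteria, and weights are hypothetical:

```python
import numpy as np

# Rows: alternatives; columns: criteria (cost, quality, delivery time).
scores = np.array([
    [120.0, 8.0, 5.0],   # alternative A
    [100.0, 6.0, 7.0],   # alternative B
    [150.0, 9.0, 4.0],   # alternative C
])
alternatives = ["A", "B", "C"]
weights = np.array([0.5, 0.3, 0.2])        # criteria weights, summing to 1
benefit = np.array([False, True, False])   # cost and delivery time are "lower is better"

# Min-max normalize each criterion to [0, 1], flipping cost-type criteria.
lo, hi = scores.min(axis=0), scores.max(axis=0)
norm = (scores - lo) / (hi - lo)
norm[:, ~benefit] = 1.0 - norm[:, ~benefit]

# Aggregate with a weighted sum and rank the alternatives.
overall = norm @ weights
ranking = sorted(zip(alternatives, overall), key=lambda t: -t[1])
for name, value in ranking:
    print(f"{name}: {value:.3f}")
```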

Throughout this entire decade, management experienced rapid change, and one of the areas where this change was greatest was the DMP. Researchers and organizations tried to find newer and better ways to make decisions through innovative ways of managing information. Information was seen as a valuable asset for the organization. The scope of information taken into account in the DMP expanded to include non-financial and external data. Likewise, large companies started to invest in developing infrastructure and technological solutions to integrate their systems and get the most out of the available data, as well as becoming more “transparent.” Transparency began to be considered a help in making better decisions and a way of gaining a competitive advantage. The major challenge foreseen in this decade was the development of advanced analytics to extract knowledge from data and information, with special attention to risks. Learning from similar situations in the past helps to ensure that organizational goals are placed before the goals of the business units, as well as to develop a better understanding of individual customers’ needs in order to offer them tailored products and services (CIMA, 2008; McKinsey Global Survey, 2008a, b).

This decade also brought a new paradigm: the use of prospective-retrospective analysis. The idea is similar to a “pre-mortem” approach, as opposed to a “post-mortem” one. This method comprises group techniques for identifying, in advance, the risks and problems that may arise in a project prior to its inception. This prior evaluation of scenarios and anticipation of potential failures allows the DMP to be strengthened and impulsive decisions to be avoided (Klein, 2004, 2007).

Additionally, Davenport states that very few organizations focus on a systematic analysis of their DMP. According to Davenport, attention should be given to the DMP in order to “re-engineer” and/or improve it. His framework covers all organizational components (technology, information, organizational structure, methods, and personnel) and proposes four steps to improve decision-making: (1) prioritization of key decisions; (2) characterization of the decisions and elements involved; (3) intervention through the design of the roles, systems, processes, and behaviours necessary for DMP improvement; and (4) institutionalization of decision tools and assistance. In the same manner, managers should beware of analytical models that they do not understand, maintain broad perspectives for decision-making, and evaluate the quality of the decisions made in terms of outcomes, the DMP, and the information used (Davenport, 2009).

At the end of this decade, an interesting study was conducted on the role of information in strategic DMPs, comprising an analysis of the value and the quality of information, the strategies to prevent overload of information at the executive level, and the changes experienced by management due to information and communication technologies. This study highlighted the importance of information and how different technological advances have facilitated and improved the acquisition, availability, and analysis of information useful in supporting DMPs. Moreover, Citroen proposes a model that includes the preparation, analysis, specification, limiting, and assessment stages that would lead to rational decision-making, concluding that information helps reduce uncertainty and provides better conditions for rationality (Citroen, 2009, 2011).

From 2010 to Date: Better Decisions in the Time of Big Data

This decade is characterized by an even stronger relationship between information technology (IT) and DMPs. An important line of research was developed that aimed at driving IT management decisions from a business perspective, that is, investigating the relationship between the IT function and the business value it generates, measured through business indicators such as benefits, costs, and customer experience. There was also great interest in developing technological solutions embracing problems of various domains with an interdisciplinary approach, trying in this way to reproduce human decision-making (Bartolini et al., 2011).

Similarly, the results of a large survey conducted during the early years of this decade showed a statistically significant direct relationship between data-driven decision-making and company performance. Company performance was measured in terms of productivity (return on assets, return on equity, and asset utilization) and market value (Brynjolfsson et al., 2011).

Teal developed a conceptual framework for strategic decision-making under uncertainty that tries to bring together contributions from psychology, creativity, and management, including the temporal dimension (past, present, and future) (Teal, 2011).

In parallel, Kester et al. proposed a general model for DMPs of new product portfolios. The model’s foundation is that portfolio decision-making is the result of the systematic interaction between three elements: evidence, power, and opinion. Depending on the balance among the three elements, decisions will be better or worse (Kester et al., 2011).

Following the idea of extending evidence-based medicine to general management, Tort-Martorell et al. (2011) distinguished between internal and external evidence and argued that organizations should concentrate their efforts on internal evidence-based management, which is easier to implement than the external one and has proved effective as one of the pillars of TQM, excellence models, and six sigma. This entails the use of scientific methods to gain knowledge, with data quality and analytics as important elements.

Malakooti’s model for the DMP is based on evaluating and ranking the alternatives for possible actions derived from the decision, and on specifying four dimensions used by people to make decisions. The four dimensions, each defined by two opposite types, are information processing (concrete or abstract), alternative generation (adaptive or constructive), alternative assessment (moderate or bold), and decision closure (organized or flexible). A web-based questionnaire was used to test the model. The conclusion is that the proposed dimensions are reliable and have a low correlation with each other (Malakooti, 2012).

Recent research conducted at the University of Cambridge has led to what is currently known as “total information risk management” (TIRM) (Borek et al., 2011). TIRM is a holistic framework of concepts, methods, and techniques developed to systematically manage the effects of uncertainty arising from the quality of information on the objectives of the organization. It is based on the evaluation of information from all possible sources and types, using as a reference the widely accepted precepts of risk management of the ISO 31000 standard (Borek et al., 2014).

Current trends in DMP are very much related to the rapid changes in analytics and big data developments. In order to determine whether data-driven decision-making improves business performance, a joint team from the MIT Center for Digital Business, McKinsey’s business technology office, and collaborators conducted a survey to test that hypothesis. The methodology involved structured interviews with executives at 330 companies about their organizational and technology management practices and gathered performance data from their annual reports and independent sources. Despite the broad spectrum of approaches found regarding data-driven decision-making, this study concluded, with statistically significant evidence, that “the more companies characterized themselves as data-driven, the better they performed on objective measures of financial and operational results” (McAfee & Brynjolfsson, 2012).

Provost and Fawcett conducted a critical study on the relationship between data science, big data technologies, and information-driven decision-making. Their idea was that understanding and embracing the inherent relationship between these concepts would allow the field of data science to achieve its full potential for improving business performance through better information-driven decisions. They concluded that there are two types of decisions that can benefit from data science: (1) those for which “discoveries” are made within data and (2) decisions that repeat at a massive scale, so that decision-making can benefit from even slight improvements in accuracy based on data analysis. They also remarked on the current and future relevance of automated decisions performed by computer systems, concluding that big potential lies in applications such as adaptive advertising, high-frequency trading, and credit scoring and fraud detection (Provost & Fawcett, 2013). In general, an excellent source of information on big data and analytics is the research conducted at the Massachusetts Institute of Technology (MIT). A special collection of papers on “making better decisions” was recently published by the MIT Sloan Management Review. Among them, we would like to highlight (Hogarth & Emre, 2015; Posner, 2015; Schoemaker & Krupp, 2015), which we believe provide an overview of the current situation.

One trend is to look for DMP alternatives that broaden perspectives and lead to smarter and faster decisions. The idea is to evaluate all scenarios and both evident and underlying tendencies, assess emergent technologies, and use them for critical and constructive discussions that lead to gained knowledge and better decisions (Schoemaker & Krupp, 2015).

Another is to take advantage of access to large amounts of data to make better predictions on which to base decisions (Hoerl et al., 2020). In essence, it is about basing decisions on statistical findings and developing decision models supported by data. However, empirical evidence shows that the increasing amount of data available makes analysis more complex, hindering the proper communication of analytical results to decision makers, who do not fully understand these results. To overcome this problem, the authors propose a method of “simulated experiences,” which allows executives an intuitive interpretation of statistical information (Hogarth & Emre, 2015).

Moreover, the explanation of the psychological mechanisms that lead us to decide in the way we do remains an open subject of great interest. Work has been done on “psychological distance,” the balance between “exploitation” and “exploration,” active decisions versus ruled decisions, spontaneous decisions versus deliberated decisions, and the improved perception of competence in decision-making, thanks to the willingness to seek advice, among others. The findings of B. Posner (Posner, 2015) suggest that a greater understanding of the psychological phenomena would allow the creation of strategies to address the DMP more effectively.

Finally, most parties agree that DMPs can be significantly improved by combining data-driven decision models with critical and creative thinking. An appropriate balance between the exploitation of decision models and human managerial skills is required to understand their benefits and limits (Makkonen, 2018). This would allow what will happen to be predicted more accurately, the desired outcome to be influenced directly in order to make it happen, and predictions to be used to indirectly shape the courses of action for achieving specific goals (Biecek, 2018; Rosenzweig, 2014).

Chronology of Information Technology that Supports the Decision-Making

The analysis of the evolution of the DMP shows that much of its progress is closely related to the evolution of information technology and DSS. In turn, DSS also underwent strong development, which resulted in a great variety of advanced methods allowing and encouraging more complex analyses to make better decisions (Citroen, 2009, 2011).

As with the DMP, we briefly review the major milestones in the evolution of data-based technologies in business, decade by decade. The review starts with the arrival of computing, which clearly represented a paradigm shift in terms of how to manage businesses, as it opened up a wide range of possibilities and opportunities at the organizational level (Citroen, 2009, 2011; Power, 2002).

Before 1960

These years were marked by the beginning of the computer age. The first advances at the hardware and software level started with the implementation of linear programming on experimental computers by George Dantzig of the RAND Corporation in 1952, the start of the System Dynamics Group at the Sloan School of the Massachusetts Institute of Technology (MIT), and the first steps in developing the first data-driven Decision Support System (DSS) at the MIT Lincoln Lab (Citroen, 2009; Power, 2007). During these years, Hans Peter Luhn coined the term Business Intelligence (BI) in a visionary article that appeared in an IBM scientific publication. In it, he discussed the problems of acquisition, dissemination, storage, retrieval, and transmission of information in organizations. Indeed, he foresaw an automated way to communicate using the electronic devices available at the time, considering the organizational changes experienced after the arrival of computing. He also predicted an increased demand for information, which would require methods to manage it in order to address the new challenges of decision-making (Luhn, 1958).

From 1960 to 1969

This decade saw remarkable advances in interactive computer systems. In the early sixties, the first developments in programming languages and database management systems marked an important milestone. However, the construction of information systems on a large scale was still an expensive affair. By the mid-sixties, the development of more powerful computer systems by several research groups from both the academic and business worlds allowed the development of Management Information Systems (MIS) aimed at providing managers of large companies with structured periodic reports based on information from accounting systems and transactions (Citroen, 2009; Molloy & Schwenk, 1995; Power, 2002, 2007).

Other remarkable advances were made in human–computer interaction. Many were due to the work and vision of Douglas Engelbart, who, among other improvements to the interface and to general interaction with the computer, promoted the development and incorporation of aid devices such as the mouse (Engelbart, 1962). He also designed the first integrated online hypermedia-groupware system, called the oN-Line System (NLS), which allowed computer-supported meetings, teleconferencing, file sharing, digital libraries, hyper-email, online communities, etc. to be conducted (Power, 2007).

During this decade, a new type of information system, a precursor of DSS and referred to as Management Decision Systems (MDS), was developed and implemented. In addition, researchers at Stanford University developed the SPSS statistical software package. One of the ideas behind it was to use statistics to transform data into information useful for promoting decision-making. Additionally, the Ph.D. thesis of Scott Morton marked a milestone in computer display systems and how computers and analytical models could lead the organization to make key decisions (Scott Morton, 1967; Scott Morton & Stephens James, 1968; Power, 2002, 2007).

All this research contributed to important developments in terms of the graphical user interface (GUI): operating systems with multitasking and multiuser approaches, the MEDIAC model to support marketing-management decisions through a dynamic programming approach (Power, 2007), the development of model-based information systems to guide decision-making on new products through better marketing strategy (Urban, 1966, 1970), and experiments on a programmed system for computer-assisted decision-making (Ferguson & Jones, 1969).

From 1970 to 1979

These years were characterized by the development of more complex computer-assisted methods aimed at solving problems of decision-making in organizations by supporting individual managers rather than the organization as a whole (Power, 2007).

Scott Morton’s research at the beginning of the decade produced the first steps in the implementation, definition, and research testing of a model-based DSS (Gorry & Scott Morton, 1971; Scott Morton, 1971); furthermore, he was the first to use the term DSS in a scientific journal. Simultaneously, T. Gerrity developed a system for portfolio management, laying the foundation for DSS in this field (Gerrity, 1971), while John Little identified four criteria for the design of DSS: robustness, ease of control, simplicity, and completeness; these criteria remain relevant in assessing modern DSS (Power, 2002).

Other relevant advances of the early seventies were the first enterprise resource planning (ERP) system developed by SAP and the design of a complete set of network communication protocols currently known as TCP/IP (Power, 2007).

The middle of this decade brought great interest and significant developments in management information and planning systems and in computer-assisted decision-making, all of them supported by ever faster hardware improvements (Power, 2007). Examples of such advances were the first OLAP (online analytical processing) systems, the appearance of VisiCalc (Visible Calculator), the first spreadsheet, and more powerful computing devices such as the minicomputers from Digital Equipment Corporation (Arnott & Pervan, 2005; Power, 2007; Burstein & Holsapple, 2008).

All this led to the formal birth of personal DSS and a surge in interest in the idea (Alter, 1977; Citroen, 2009; Power, 2002, 2007; Sprague & Watson, 1976). At the end of this decade, Peter G. W. Keen and Michael Scott Morton’s book provided greater understanding of and guidance on the design, analysis, implementation, evaluation, and development of DSS (Keen & Scott Morton, 1978). Research led by J. F. Rockart at MIT into defining the management information needs of the chief executive officer (CEO) through the method of Critical Success Factors (CSF) was a breakthrough in academia. The main proposition of Rockart’s paper was to solve the problem of managing large amounts of information by focusing on what is really significant for the business and for decision-making (Rockart, 1979).

From 1980 to 1989

This decade marked the widespread acceptance of DSS from both the academic and the practical point of view (Citroen, 2009). The personal DSS of the 1970s gave rise to systems intended to assist in organizational DMPs, comprising Intelligent DSS (drawing on artificial intelligence and expert systems), Executive Information Systems (powered by database theory and OLAP), and Group DSS (incorporating aspects of social psychology and group behavioural processes) (Arnott & Pervan, 2005; Burstein & Holsapple, 2008).

This happened in part because of the many publications from the seventies and Steven Alter’s book, which expanded the conception and consolidated the description and identification of DSS (Alter, 1980; Power, 2007), and in part because the time was right, marked by the wide availability of hardware, the fast expansion of PCs, and the beginning of globalization.

Along the same line of setting the conceptual framework of DSS (Power, 2002), Bonzek, Holsapple, and Whinston’s book showed the significant influence of Expert Systems technologies in developing DSS and identified the essential components common to all DSS (Bonzek et al., 1981). At the same time, research was conducted to analyse how advances in computational technologies and DSS influenced the way information reached CEOs and was used for decision-making (Rockart & Treacy, 1982; Sprague, 1980).

The mid-1980s saw important software developments aimed at supporting project collaboration through the enhancement of digital communication. These were generically referred to as group decision support systems (GDSS) (DeSanctis & Gallupe, 1987; Huber, 1984). At the same time, Houdeshel and Watson reported the success, in terms of benefits, frequency of use, and customer satisfaction, of Lockheed-Georgia’s management information and decision support system (MIDS) (Houdeshel & Watson, 1987). They attributed this success to the right combination of several factors: senior executive commitment, carefully defined information requirements, a team approach using a carefully selected hardware and software system, and evolutionary development.

Following the increase in data and information availability, MIDS, GDSS, and organizational decision support systems (ODSS) evolved from the single-user model-driven DSS to relational database products (Citroen, 2009; Power, 2002, 2007). This movement gave way to business information systems architectures based on data warehouses structured on relational databases and designed to provide easy interfaces and access to business data (Devlin & Murphy, 1988).

At the end of this decade, Howard Dresner, an analyst at Gartner Group, coined the term business intelligence (BI). Its use has been growing ever since and is meant to cover all support system methods aimed at improving decision-making by gaining knowledge through accessing and analysing business information (Dresner, 2007; Power, 2007).

From 1990 to 1999

The beginning of this decade was characterized by the emergence and consolidation of technologies that extended the capabilities of existing decision-support tools, such as business intelligence (BI), data warehousing, and online analytical processing (OLAP), which implied major changes in the way information and organizational knowledge were managed (Codd et al., 1993; Dhar & Stein, 1997; Power, 2002). The need to deal with the rapid growth in the number and size of databases brought the development of tools and techniques such as knowledge discovery and data mining, which were aimed at an automatic, intelligent understanding of data (Piateski-Shapiro & Frawley, 1991).

New desktop OLAP tools appeared, and the emergence of client–server DSS left behind the systems based on mainframe data-driven DSS. These early years were characterized by the strengthening of object-oriented technological solutions to reuse decision support capabilities, extending the approach based on online transaction processing (OLTP) for database management with real OLAP capabilities (Codd et al., 1993; Power, 2007).

The middle of this decade was marked by the possibilities created by the arrival of the World Wide Web and by technological breakthroughs in data warehousing. Many organizations began to develop corporate intranets and to implement enterprise resource planning (ERP) applications and basic decision support tools such as ad hoc query, reporting tools, and quantitative models. Independent data marts were a widespread alternative to data warehouses (Power, 2002). All this interest led to major advances in research and development in the fields of knowledge discovery and data mining, which were seen as tools that integrate statistics, database systems, machine learning, and artificial intelligence (AI) to turn data into knowledge in order to achieve business results and appropriate customer relationship management (Berry & Linoff, 1999; Chen et al., 1996; De la Hoz Domínguez et al., 2020b; Fayyad et al., 1996).

Concerns about developing methods to assure and measure data quality grew. As a consequence, Wang et al. conducted a study that resulted in a hierarchical framework of data quality based on data users’ needs (Wang & Strong, 1996; Wang et al., 1995).

In parallel, throughout this decade, Intelligent DSS joined forces with the emerging discipline of knowledge management (KM). There were attempts to create AI-based DSS in the form of expert systems fed using organizational learning techniques (Burstein & Holsapple, 2008).

At the end of this decade, two major landmarks can be noted. On the one hand, the implementation of data warehouses and heterogeneous information systems represented the fundamental basis for achieving knowledge environments that integrated information and allowed it to be shared across the organization, thus contributing to improved decision-making (Naumann et al., 1999). This enabled the enhancement of the functionality of MIDS through balanced scorecard (BSC) and enterprise performance management systems. In addition, the late nineties saw the development and introduction of new web-based analytical and business intelligence applications (Power, 2002). On the other hand, DSS embraced KM and henceforth became what is currently called knowledge management-based decision support systems (Burstein & Holsapple, 2008).

From 2000 to 2009

This decade represented an accelerated growth in the use of information in an integrated or distributed manner and also through the web. There was great interest in measuring and ensuring data and information quality and an important evolution in the development, improvement, and implementation of BI solutions (Citroen, 2009; Power, 2002, 2007).

Early in this decade, application service providers (ASPs) introduced software tools delivered across the network and more sophisticated models of web services. They incorporated into their portals greater capabilities to support decisions by integrating knowledge management, business intelligence, and communications-driven DSS into their interfaces (Balasubramanian & Shankaranarayanan, 2002; Power, 2002). This triggered the development of more powerful data mining techniques able to find hidden patterns in large databases; moreover, remarkable progress was made with transactional data. Their use was adopted by an increasing number of companies that expected that collecting and analysing data about customers would enable the development of quantitative models to predict their preferences. In turn, this would allow companies to offer customized products and services to their clients (Loveman, 2003).

The results from the Total Data Quality Management (TDQM) research program, initiated during the previous decade at MIT, aroused a lot of interest in the academic and professional worlds. This led to the development of new methods aimed at measuring, evaluating, managing, and improving data and information quality. Data started to be seen as an important and valuable asset to gain business insight, improve efficiency, and increase competitive advantage in dynamic business environments (Lee et al., 2002, 2006; Pipino et al., 2002; Shankaranarayan et al., 2003; Shankaranarayanan & Cai, 2006).

The term business analytics (BA) was introduced at the beginning of this decade and represented the key analytical component of business intelligence. It also represented an extension of BI’s capabilities through advanced and automated data analysis using the company’s databases and the web, sophisticated quantitative techniques, and dynamic reporting (Kohavi et al., 2002). The adoption of these techniques by large corporations increased considerably, and this gave a boost to research aimed at extracting the maximum value from the available data and transforming it into greater organizational knowledge: knowledge of customers and their needs, of how to increase business effectiveness, and of new business or innovation opportunities (Davenport, 2006).

However, not all organizations managed to successfully undertake the path of BI and BA. Many found obstacles to their adoption. This led to several initiatives by the academic and professional community to analyse and identify appropriate methodologies adapted to different contexts in order to solve the problems that constituted a barrier to the successful implementation of BA (Davenport et al., 2010; Harris, 2010; LaValle et al., 2011).

The growing technological advances and changes at the technical and organizational level are reflected in the classification of data as structured, semi-structured, and unstructured. An increasing number of data sets included image and voice, which required new techniques to manage and improve the data quality of these new types of files. Furthermore, the wide adoption of mobile devices generating and displaying data also required new service-oriented technologies for delivering, over the internet, the information required for making decisions anywhere, leading to what was later known as ubiquitous decision support systems (Batini et al., 2009; Lee et al., 2015; Madnick et al., 2009).

From the organizational and project management field, the concept of maturity models was embraced to assess the degree of adoption and use of BI. These concepts would experience a major upswing in the next decade (Rajterič, 2010).

At the end of this decade, the polymorphism of big data became evident. To the same extent that algorithms and solutions to process large volumes of data were developed, new challenges arose for the storage, processing, and analysis of new data streams and ever higher volumes. This dynamism led to thinking of big data as a moving boundary just out of reach, one that imposes a constant, positive drive to innovate in order to take advantage of such data (Jacobs, 2009).

From 2010 to Date

The early years of this decade have been characterized by further consolidation of BI and BA solutions at the organizational and academic level, as well as by interest in ensuring that technologies are aligned and complemented with the flourishing trend to adopt big data (Chen et al., 2012). This is due to the revolutionary potential of Big Data for creating useful knowledge for timely actions that improve the business and offer better products and customer service. This potential has placed the BI and BA tools as a technological priority for the Chief Information Officer (CIO) (Atwah Al-ma’aitah, 2013; Lim et al., 2013).

Among the main consequences of the current use of big data are: greater granularity of data sources, thanks to the rise of social networks and mobile devices; increased computing capability through the power of the cloud; the migration of search-engine technologies to business systems; more objective interpretation of, and insight into, group sentiment by means of opinion mining; and the application of social recommendation techniques to provide consumers with predictive suggestions based on their own preferences and those of their contacts and peers (Lim et al., 2013).
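
To make the idea of social recommendation more concrete, the following minimal sketch (purely illustrative; the data, function names, and scoring rule are our assumptions and do not correspond to any specific system described in the cited works) suggests to a user those items that his or her contacts rated highly and that the user has not yet seen.

```python
# Illustrative sketch of peer-based social recommendation (assumed data and names).
from collections import defaultdict

def recommend(user, contacts, ratings, top_n=3):
    """Suggest items rated by the user's contacts that the user has not yet seen.

    ratings:  dict mapping each user to a dict {item: score}
    contacts: dict mapping each user to a list of peers
    """
    seen = set(ratings.get(user, {}))
    peer_scores = defaultdict(list)
    for peer in contacts.get(user, []):
        for item, score in ratings.get(peer, {}).items():
            if item not in seen:
                peer_scores[item].append(score)
    # Rank unseen items by the average score given by the user's peers
    ranked = sorted(peer_scores.items(),
                    key=lambda kv: sum(kv[1]) / len(kv[1]), reverse=True)
    return [item for item, _ in ranked[:top_n]]

ratings = {"ana": {"book_a": 5, "book_b": 3},
           "ben": {"book_b": 4, "book_c": 5},
           "eva": {"book_c": 4, "book_d": 2}}
contacts = {"eva": ["ana", "ben"]}
print(recommend("eva", contacts, ratings))  # ['book_a', 'book_b']
```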

Retrospective studies identified an increasing trend in the amount and impact of scientific production involving the terms BI, BA, and big data, which in turn is associated with a greater presence of these words on the web and an improvement in the ranking of webpages discussing those subjects (Chen et al., 2012). Moreover, experimental applications have become more common. For example, taking historical time series data as input, Varshney and Mojsilović used signal processing techniques to develop a predictive model (Varshney & Mojsilović, 2011).
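
As a purely illustrative sketch of predictive modelling from a historical time series, and not the method used by Varshney and Mojsilović, the following code fits a simple autoregressive model by ordinary least squares on an assumed (synthetic) series and produces a one-step-ahead forecast; the series, model order, and function names are assumptions made for the example.

```python
# Illustrative time-series prediction: fit an AR(p) model by least squares (synthetic data).
import numpy as np

def fit_ar(series, p=3):
    """Estimate AR(p) coefficients by ordinary least squares."""
    # Each row of X holds p consecutive past values; y holds the value that follows them.
    X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
    y = series[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def forecast_next(series, coeffs):
    """One-step-ahead forecast from the last p observations."""
    p = len(coeffs)
    return float(np.dot(series[-p:], coeffs))

history = np.array([112., 118., 132., 129., 121., 135., 148., 148., 136., 119.])
coeffs = fit_ar(history, p=3)
print(round(forecast_next(history, coeffs), 1))
```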

Some concerns and ethical-legal questions have gained greater relevance due to the universalization of the Internet of Things (IoT). The ownership of data generated by users via sensor networks and the multiplicity of devices in use worldwide at every moment is an issue currently being debated (Russell & Abdelzaher, 2018). Alternatives and different views on privacy, the right to be forgotten, and strategies for the proper protection of users’ confidential data are being proposed (Staff, 2014).

One of the characteristics of the decade from 2010 to 2020 has been the proliferation of dashboards. The idea behind this phenomenon is that real-time visualisation of data will provide the information necessary for good decision-making. Although we are not aware of any studies that confirm this, our opinion after multiple conversations with executives from many companies and sectors is that the use of many of these dashboards is lower than expected and their impact on decision-making is low. One of the reasons for this failure is the disconnect between the potential users of the dashboard and those who have been tasked with designing and implementing it (Kenett & Redman, 2019).

Discussion on the Evolution of the Role of Information in the DMP

When the evolution of the DMP and that of DSS are analysed simultaneously, the interaction between them is evident. Figure 1 shows a timeline which summarizes the major milestones previously mentioned in “Evolution of the Decision-Making Process” and “Chronology of Information Technology that Supports the Decision-Making.” The upper line represents a summary of the evolution of the DMP throughout the decades covered by this review. The lower line represents the evolution of DSS technology. The gap between them represents their degree of interrelation and how the DMP evolved thanks to the development of technological capabilities, highlighting the most outstanding landmarks that allowed their convergence. The steps correspond to the most important milestones that influenced the development of the DMP.

Fig. 1 Timeline of the evolution of the DMP and the information technologies that support them

In terms of the chronological evolution shown in Fig. 1, the first interactive information systems developed in the 1960s significantly influenced the advancement of the DMP. This gained momentum once the first computer-aided complex methods were introduced in the 1970s, providing new tools to improve the relevance, importance, and timeliness of information. Their deployment during the 1980s was a decisive step on the path towards better decisions. Figure 1 also illustrates the influence on the DMP of advances in DSS technologies, especially those deployed from the 1990s onwards. These technologies allowed data on the different factors that surround a decision to be obtained, reducing the uncertainty and associated risks. This represented a shift towards a deeper knowledge of the organization, customers, suppliers, and competitors in order to detect business opportunities.

Likewise, Fig. 1 shows that in many instances technologies emerged or were adopted in response to the managerial needs of the time. One could argue that this was the general trend before the 2000s. Conversely, in the most recent 15 years, it is the progress of information technologies that has pushed forward the data-driven managerial paradigm. Indeed, progress in the field of information technology and computer science is faster than it has ever been, surpassing the corresponding advance in the theories and techniques of management decision-making. This means that a major breakthrough in the way companies make decisions has to take place, and perhaps it is about to happen.

In the same vein, Fig. 2 presents a cause-and-effect diagram that graphically shows the relationship between the successive managerial and technological advances that took place between 1950 and 2015 and that led to the current state of the art in information-driven DMPs.

Fig. 2 Cause-and-effect diagram of the chronological evolution of the information-driven DMP

Figure 2 shows that, as the managerial and technological streams have grown increasingly closer over time, the convergence between DMP and DSS has led to major organizational transformations and to the emergence of new data-driven business models, with start-ups leveraging data as the key resource of their business. These big data and analytics business models use data to create differentiated offerings, broker information, and build networks that deliver data anywhere and at any time (Hartmann et al., 2014; Wang, 2012). Nowadays, organizations can obtain benefits from analyzing data not only for making single strategic decisions of large impact but also by making minor decisions autonomously on a large scale. In this regard, the big internet-based companies such as Google, Amazon, or Facebook, as well as other big companies worldwide, rely more than ever on autonomous algorithms for making decisions, and the numbers seem to validate the success of this practice (Chen et al., 2012).

Despite the profound transformation of the DMP as a result of the information resources available, it is noticeable that this transformation is slower and smaller than the one taking place in the technological field, as represented and reaffirmed by the outstanding milestones in Figs. 1 and 2, respectively. This disparity in evolution is also reflected in the adoption of these technologies by businesses. Many organizations are further ahead in the adoption of information technologies than in the development of the management systems needed to take advantage of them. They are, thus, not getting the most out of the massive amounts of data at their disposal.

A study conducted in 2010 revealed that 60% of the executives interviewed claimed to have more data than they know how to manage and use effectively (LaValle et al., 2011). This finding has recurred in successive years, so the concern remains unresolved. Those who lead organizations usually do not have the information needed to make key decisions, although they may have the data necessary to provide it (Kiron et al., 2015). This indicates that organizations are not yet keeping pace with emerging technologies that are in continuous evolution. In order to compete successfully, organizations need to become more efficient and to differentiate themselves from the competition. This reveals that it is easier to buy the needed technology than to change the way organizations make decisions and are managed. Important factors inherent to this problem include the failure of management to adapt to the technological solutions, the failure of the adopted solutions to fit the needs and particularities of the organization, data quality problems, ineffective information governance, and issues of a cultural and managerial nature (LaValle et al., 2011).

The majority of successful cases are found in organizations that develop their own technologies or that have reached maturity in the action-reaction cycle integrated into all areas, also called “managing the information transformation cycle,” which has led them to the know-how needed to make better use of their information for different types of decisions (Florez & González, 2013; Kiron et al., 2015). Nonetheless, it must be kept in mind that such successful cases are more likely to be reported, while the struggles during the adoption of novel data-driven technologies and the missteps undergone by most companies are neither disclosed nor highlighted in the literature. Thus, one company’s lessons learned are hardly transferable to others.

In particular, SMEs face a very challenging situation in this regard. While digitalization offers new opportunities for SMEs to reach global markets, the reality is that a large number of SMEs have not been able to reap the benefits of the technological transition (Gehrmann, 2020). Evidence shows that SMEs are lagging behind in adopting digital technologies, such as analytical tools and applications, to take advantage of the available data in order to make better decisions, be more competitive, and become information-driven companies (E-skills UK, 2013; TechNavio, 2014; OECD, 2017; De la Hoz Domínguez et al., 2020a). In these companies, such resources might be inaccessible, making it unfeasible for them to embrace commercially available business analytics solutions. In 2012, the adoption rate of business and big data analytics among UK SMEs was only 0.2%, compared to 25% for businesses with over 1000 employees (E-skills UK, 2013). Over the following 6 years, the growth rate of analytics technology adoption among SMEs was expected to be less than 50% (TechNavio, 2014), which is considerably higher than the rate expected for large companies.

Therefore, companies must strive to establish mechanisms that systematize the identification and prioritization of the key information needed to support decision-making. On the one hand, this means consolidating adequate data and information governance by recognizing the people, processes, and inputs needed to build a clear strategy on how to use data and analytics. This involves company-wide alignment between IT and business in order to prioritize data and information requirements. Likewise, all these actions must be accompanied by measures for ensuring data quality in order to have reliable, consistent, and non-redundant data. On the other hand, specific efforts should be directed to the development of a suitable technological architecture that ensures analytical processes and information are embedded in the organization’s operations. Such technology must allow organizations to process and manage new sources of data for feeding more accurate predictive and optimization models. Finally, companies must not forget to pursue the analytical evolution of their corporate culture and the strengthening of the analytical capabilities of their personnel.

In this sense, there are important implications for further work and collaboration by the academic and professional communities in aligning the DMP with data-driven technology. Greater integration of the technological infrastructure, along with adequate data/information governance and quality policies and the adoption of good analytical practices across the whole organization, would contribute to bridging this gap. In line with the above, organizations can achieve a better-adapted toolset of technological resources in their DSS, suited to their real needs and particularities, that allows them to consolidate the organization’s business strategy. From a pragmatic point of view, this will require relevant, timely information, which should be disseminated widely and globally, to support decisions and lead to real organizational knowledge and competitive differentiation.

Accordingly, it is important to develop methodologies to measure, evaluate, and determine the level of sophistication of the DMP in organizations as a first step towards identifying and implementing improvement actions. Maturity models could be an alternative to this end. In this regard, a research paper presented the CHROMA (Circumplex Hierarchical Representation of Organisation Maturity Assessment) model for the information-driven decision-making process. The CHROMA model provides a standard framework for assessing and categorizing an organization’s proficiency regarding its information-driven decision-making processes. By using the CHROMA model it is possible to determine, on a case-by-case basis, the readiness of organizations for becoming data-driven. In addition, the CHROMA model reveals the role of information and information technology in the DMP of the assessed company (Parra & Tort-Martorell, 2016; Parra et al., 2017).

The CHROMA model is a tool that allows a consistent evaluation of how companies use existing data and information within their business to help make decisions, categorizing them according to a maturity scale. The model is intended to allow companies to improve DMPs of all kinds, giving them the opportunity to assess their current situation, to see how they can improve, and to prioritize their future actions based on the analysis of their DMP in relation to the information they use (Parra et al., 2020).

When researchers, organizations, and the managerial community finally bridge the long-standing gap between information technologies and the theory and practice of decision-making, there will be a major breakthrough in how companies and businesses are run. Recent experiences point to an evolution from data-driven to algorithm-driven organizations, which means providing automated and intelligent algorithms with sufficient authority to make decisions across all levels of the organization, with or without supervision. As with many preceding advances, algorithm-driven management is a matter of discussion and is not immune to risks and criticism. It could be argued that, if the gap between DMP and DSS technologies is due to the human factor, then relying more on algorithms than on the experience-based (and biased) judgment of managers would help bridge it more easily (Krichevskiy, 2021). However, autonomous algorithms for decision-making are not the only answer for bridging the gap between the DMP and DSS. Having better DSS supported by AI and massive data analytics capabilities will surely help, but the human factor will always have a great influence in organizations. Therefore, one of the most relevant discussions in the forthcoming years will be about the role of managers in the DMP of algorithm-driven organizations.

Conclusions

The DMP combines different disciplines to achieve a practical approach to making good decisions and, thus, better results for organizations. The DMP has gradually emerged as a discipline in its own right. It has evolved as a result of multiple contributions from, and research by, the academic and professional community. The results have provided a better understanding of how to use data and of individual and group behaviour when making decisions. This understanding has led to the creation of models used both to improve the way companies make decisions and to support further research into the analysis of the alternatives that lead to a decision.

This study presents timelines and a cause-and-effect diagram to contextualize the chronological evolution of the managerial side of the DMP, as well as of the information technology and computer science that support it. The analysis performed clarified the strong interaction between the managerial and technological sides of the DMP. Naturally, we must acknowledge that our review provides only a subset of the people, events, research, and lines of thought that have contributed to the development of the topic, but we hope we have included enough of them to characterize the development of the DMP.

After this characterization, it can be concluded that the technologies emerged to address existing needs in the managerial DMP, as a means to implement managerial theories of decision-making. However, once consolidated, computational power began a vertiginous growth that has not been matched on the managerial side, whose evolution has been gradual and driven by technological advances. This progressive and accelerated growth of DSS technologies has been driven mainly by technology solution providers, technologists, and academics, and to a lesser extent by managers and decision-makers. The adoption of technology solutions by companies is fast and in many instances leaves behind the managerial side of the DMP, which often struggles to adapt to rapid and continuous change. It is also worth noting that technology solutions related to gathering, storing, and analyzing data, as well as to presenting the results of this process, have created an industry in themselves. Unfortunately, so far, the solutions commercialized are not fully adapted to the needs of organizations, which often do not yet fully understand what to do with their data. Thus, it is clear that in this area there are big opportunities for everyone: researchers, technology providers, and companies of all types.

Likewise, the analysis carried out also gave indications that a new paradigm or breakthrough change is approaching: from the information-driven DMP to the algorithm-driven DMP. Recent experience indicates that algorithms and AI are increasingly being used, not only for large-scale minor decisions but also to make strategic decisions. In the latter case, supervision and approval of such decisions remain a responsibility of the managers and the board of directors. Such tools will soon make it possible to augment human intelligence, but they are not meant to replace the manager’s judgment.

It is difficult to foresee how technologies such as artificial intelligence and other data-driven techniques and applications will change the way executives and managers make decisions. However, the trends analysed suggest that decision-makers will use and rely on algorithms more in order to keep pace with fast technological advances. In that sense, the future directions of the DMP will be a direct consequence of the transformation of business to harness the technological breakthroughs of big data, IoT, AI, and extended reality.

Finally, it is important to note that, as we have seen in this article, the path towards becoming a data-driven company rests on two equally important legs. Until very recently, these legs have advanced at more or less compatible paces. However, in the last 10 years, the information technology leg, in its two branches of computer science and data analytics, has evolved much faster than the managerial one. This gap is causing various problems. On the one hand, there is frustration in companies that expected to obtain significant benefits in the short and medium term from their efforts. On the other hand, a growing number of young data science professionals see their expectations frustrated: they hoped to find jobs in which to develop their talent and contribute in a relevant way to improving the activities of their companies, but their tasks are often relegated to levels well below their qualifications. We are sure that in the coming years this gap will narrow and management systems will be adapted to facilitate decision-making by managers, taking full advantage of the benefits of recent technological advances.

For instance, at the operational decision level, it is expected that all business processes will be monitored closely to achieve higher performance than ever through the exploitation of distributed analytics. The inputs from the IoT, crowdsourcing, digital twins, and extended reality will be fundamental in the design of products and services, under the paradigm of building to perfection, that is, fulfilling customized user expectations at the highest level of quality. Similarly, more and more companies will migrate to a service-oriented business model. On the other hand, at the strategic decision level, the above-mentioned technologies will become part of the decision-makers’ toolbox. Executives and managers will work very closely with their analytics and AI teams to improve the capabilities of human judgement.

It was also noticeable that the studies and success stories are concentrated in large corporations. Consequently, future research should be aimed at analyzing how to help SMEs on their way to becoming information-driven, and subsequently algorithm-driven, companies, as well as at the role of managers in the DMP under this looming new scenario.

As a final remark, it is important to highlight the work of data scientists as experts who can support organizations, especially those that own data, on their way to becoming information-driven companies. On the one hand, data scientists are key to accelerating organizational know-how in data processing and visualization for identifying and extracting the relevant information, that is, the objective evidence required to support effective decision-making. On the other hand, data scientists are valuable for technology development organizations, which can benefit from data analysis to adapt their products to the needs of users. In both cases, within multidisciplinary working teams, a data scientist can help reduce the risks of inappropriate technology deployments.