The article examines multidimensional approaches to measuring information systems (IS) success. The current state of related research is explored through a literature review and a classification of articles published between 2003 and 2007. The results show that the dominant stream of empirical research analyzes the impact of a specific type of information system on the basis of users’ evaluations, gathered through surveys and analyzed with structural equation modeling. Based on existing theoretical models and frameworks, several specialized success models have been developed to evaluate different types of IS, such as knowledge management systems and enterprise systems. The results provide researchers with a comprehensive review and structuring of IS success research.

1 Introduction

Annual worldwide spending on information technology (IT) has been increasing for many years. International Data Corporation expects total IT expenditure to reach 1.48 trillion US dollars by 2010 (IDC 2007). At the same time, however, information systems (IS) failures continue to occur in large numbers. A questionnaire-based survey carried out in 2006 in the USA indicated that only 62 % of software projects were considered successful (Verner et al. 2006). Measuring the success of IT investments and of the systems developed, as well as resolving the paradox of high investments and low productivity returns (the “productivity paradox”), therefore remains a top concern for both practitioners and researchers (Brynjolfsson 1993).

During the last two and a half decades, measuring IS success – that is, clarifying an important dependent variable in IS research – has been a popular research stream. A number of models have been proposed in attempts to define IS success and to identify its various causes.

The purpose of this article is to present and classify the current state of research on the measurement of IS success. More concretely, the following questions are addressed:

  • Which multidimensional approaches for assessing IS success are found in scientific literature?

  • Which research designs were applied in past empirical studies?

  • What are the results of empirical and non-empirical research?

In order to answer these questions, we analyzed the literature published over the last five years by means of a structured literature review approach. Our review attempts to systematically analyze, categorize, and synthesize a specified pool of journal and conference papers in order to provide a comprehensive overview of prior research in this area. According to Webster and Watson (2002, p. xiii), an effective literature review creates a firm foundation for advancing knowledge, closes areas where a plethora of research already exists, and uncovers areas where research is needed. This article provides such a review and, thus, a theoretical basis for future research. The results of this paper should be especially relevant for researchers who wish to obtain not only an overview of the topic, but also insights into the latest publications.

We organize this article as follows: section 2 lays the foundation for the literature review by defining the term IS success and presenting previous research and widely accepted contributions in this area. In section 3, we outline our methodological approach to identifying, reviewing, and analyzing current publications on the measurement of IS success. The results of our literature review are presented in section 4. Section 5 concludes by discussing the key findings and limitations, presenting suggestions for future research, and pointing out our main contributions.

2 Foundations

2.1 Terminological foundations

The IS literature provides several definitions and measures of IS success. As DeLone and McLean (1992, p. 61) state, there are nearly as many measures as there are studies. Tab. 1 illustrates the variety of definitions of IS success in previous publications.

Tab. 1 Different definitions of IS success

Obviously, there is no single, ultimate definition of IS success. Each group of stakeholders that assesses IS success in an organization has its own definition (Grover et al. 1996, p. 183). From a software developer’s perspective, a successful information system is completed on time and within budget, has a set of features consistent with the specifications, and functions correctly. Users may find an information system successful if it improves their work satisfaction or work performance. From an organizational perspective, a successful information system contributes to the company’s profits or creates a competitive advantage. Furthermore, IS success also depends on the type of system being evaluated (Seddon et al. 1999, p. 21).

In order to provide a more general and comprehensive definition of IS success that covers these different points of view, DeLone and McLean (1992) reviewed the existing definitions of IS success and their corresponding measures, classifying them into six major categories. They created a multidimensional measurement model with interdependencies between the different success categories. This D&M IS success model has received much attention from IS researchers, who have since often treated IS success as a multidimensional construct and measured it as such.

Some researchers use the term “IS effectiveness” synonymously with “IS success.” Others use IS effectiveness to subsume what DeLone and McLean label “individual impact” and “organizational impact” (DeLone and McLean 1992), or “net benefits” (DeLone and McLean 2003). In the context of this article, the term IS success is used in the sense of DeLone and McLean’s comprehensive understanding to explicitly cover the whole range of suggested measures.

2.2 Previous research

In 1980, Peter Keen pointed to the lack of a scientific basis in MIS research and raised the question of what the dependent variable in MIS research should be, arguing that surrogate variables like user satisfaction or hours of usage would continue to mislead researchers and evade the information theory issue (Keen 1980, p. 9). Motivated by this request for clarification of the dependent variable, many researchers have tried to identify the factors contributing to IS success. Largely, however, different researchers addressed different aspects of IS success, making comparisons difficult. In order to organize the large body of literature existing at that time, to integrate the different concepts and findings, and thus to present a comprehensive taxonomy, DeLone and McLean introduced their (first) IS success model (DeLone and McLean 1992).

Building on Shannon and Weaver’s (1949) three levels of information, together with Mason’s (1978) extension of the effectiveness or influence level, DeLone and McLean defined six distinct dimensions of IS success: system quality, information quality, use, user satisfaction, individual impact, and organizational impact. Based on this framework, they classified the empirical studies published in seven highly ranked MIS journals between January 1981 and January 1988. Their examination supports the presumption that the many success measures fall into six major interrelated and interdependent categories, and their IS success model attempts to integrate these dimensions into a comprehensive framework. Judged by its frequent citation in articles published in leading journals, the D&M IS success model has, despite some revealed weaknesses (Hu 2003), become the dominant evaluation framework in MIS research, in part due to its understandability and simplicity.

Motivated by DeLone and McLean’s call for further development and validation of their model, many researchers have attempted to extend or respecify the original model. A number of researchers claim that the D&M IS success model is incomplete, suggesting that more dimensions should be included in the model or presenting alternative success models (e. g., Ballantine et al. 1996; Seddon 1997; Seddon and Kiew 1994). Other researchers focus on the application and validation of the model (e. g., Rai et al. 2002).

Ten years after the publication of their first model, and based on the evaluation of the many contributions to it, DeLone and McLean proposed an updated IS success model, as depicted in Fig. 1 (DeLone and McLean 2002; DeLone and McLean 2003).

Fig. 1 The updated D&M IS success model (DeLone and McLean 2003)

The primary differences between the original and the updated model are: (1) the addition of “service quality” to reflect the importance of service and support in successful e-commerce systems; (2) the addition of “intention to use” to measure user attitude as an alternative to “use”; and (3) the collapsing of “individual impact” and “organizational impact” into a more parsimonious “net benefits” construct. The updated model thus consists of six interrelated dimensions of IS success: information, system, and service quality; (intention to) use; user satisfaction; and net benefits. The arrows indicate the proposed associations between the success dimensions. The model can be interpreted as follows: a system can be evaluated in terms of information, system, and service quality; these characteristics affect subsequent use or intention to use, as well as user satisfaction; using the system yields certain net benefits; and these net benefits, in turn, (positively or negatively) influence user satisfaction and the further use of the information system.
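To make these hypothesized associations concrete, the following minimal sketch specifies the model’s forward paths in lavaan-style syntax, using the Python SEM package semopy. This is purely illustrative: the variable names are our own shorthand (not measures prescribed by DeLone and McLean), and the model’s feedback loops are omitted, since the sketch is a simple recursive path model.

```python
# Illustrative sketch only: the updated D&M model's forward paths as a
# recursive path model. Variable names are invented shorthand; in a
# real study, each construct would be measured by several survey items
# (and modeled as a latent variable), and the feedback from net
# benefits to use and user satisfaction is not represented here.
import semopy

DM_2003_PATHS = """
use ~ information_quality + system_quality + service_quality
user_satisfaction ~ information_quality + system_quality + service_quality + use
net_benefits ~ use + user_satisfaction
"""

model = semopy.Model(DM_2003_PATHS)
# model.fit(survey_data)   # survey_data: a pandas DataFrame with one
#                          # column per variable named above
# print(model.inspect())   # estimated path coefficients
```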

3 Methodology

3.1 Literature review

The increasing number of published books and journals, as well as of conferences and workshops, has made the research process more complex and time-consuming. Consequently, there is a growing need to describe, synthesize, evaluate, and integrate the results of articles in a particular field of research. The process of conducting a literature review can be regarded as a scientific procedure that should be guided by an appropriate research method (Fettke 2006).

According to the newest edition of the Publication Manual of the American Psychological Association (APA 2001, p. 7), review articles are critical evaluations of material that has already been published. By organizing, integrating, and evaluating previously published material, the author of a review article considers the progress of current research toward clarifying a problem. In a sense, a review article is a tutorial in that the author

  • defines and clarifies the problem;

  • summarizes previous investigations in order to inform the reader of the state of current research;

  • identifies relations, contradictions, gaps, and inconsistencies in the relevant literature; and

  • suggests the next step or steps in solving the problem.

3.2 Literature selection process

The basis of a literature review is the relevant literature on the topic to be examined. A systematic search should ensure that a relatively complete number of relevant articles are accumulated. Our process of literature selection for inclusion in this review consisted of three steps: (1) selecting the literature sources, (2) defining a time frame for analysis, and (3) selecting articles to be reviewed.

(1) Source selection

The first step of the literature selection process was to identify a list of literature sources that was as comprehensive as possible. We started by taking the journals surveyed by DeLone and McLean (1992; 2002; 2003) into consideration. As a field’s major contributions are likely to appear in its leading journals (Webster and Watson 2002, p. xvi), we extended the initial list of twelve journals by adding further top journals. Based on Saunders’s MIS journal ranking (Saunders 2008), we added journals in ascending order of their average rank, up to a rank value of 30. This ranking is a meta-analysis of nine separate journal rankings and therefore represents not a single researcher’s perception, but that of many. Journals ranked by only one of the underlying sources were not taken into consideration, as they were regarded as lacking representativeness. Some journals were excluded due to their specialized character (e. g., “Operations Research”). In total, we selected 34 leading North American and European IS journals. In addition, we added the proceedings of four major international IS conferences considered important for the IS field (Caya and Pinsonneault 2004, p. 2; Gonzalez et al. 2006, p. 822). Including conference proceedings allows very recent research to be considered; in doing so, we accepted that there might be some duplication where older papers first appeared in conference proceedings and were later published in journals. Tab. 2 lists all 38 literature sources that we surveyed to identify relevant articles. Books were deliberately omitted from the selection process on the assumption that their authors had already published their results in journals; moreover, the quality of book contributions is not always apparent, since not all of them undergo a formalized review process.

Tab. 2 Literature sources
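To make the inclusion rule explicit, the following sketch illustrates the filtering criteria in Python. The journal names and rank figures below are invented for illustration; the actual input was Saunders’s (2008) meta-ranking.

```python
# Hypothetical sketch of the journal inclusion rule described above.
# All rank data here are invented; the real input is the Saunders
# (2008) meta-ranking of MIS journals.

JOURNALS = [
    # (name, average_rank, number_of_underlying_rankings)
    ("MIS Quarterly", 1.3, 9),
    ("Information Systems Research", 2.1, 9),
    ("Some Niche Journal", 12.0, 1),   # ranked by only one source
    ("Operations Research", 18.0, 7),  # excluded: specialized character
]

SPECIALIZED = {"Operations Research"}

def include(name: str, avg_rank: float, n_rankings: int) -> bool:
    """Selection rule: average rank up to 30, ranked by more than one
    underlying source, and not a specialized journal."""
    return avg_rank <= 30 and n_rankings > 1 and name not in SPECIALIZED

# Journals that pass the rule, in ascending order of average rank.
selected = sorted(
    (j for j in JOURNALS if include(*j)),
    key=lambda j: j[1],
)
```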

(2) Time frame selection

The second step of the literature selection process was to define an appropriate time frame. For their original model, DeLone and McLean (1992) reviewed publications that appeared between January 1981 and January 1988. For their updated model of IS success (DeLone and McLean 2003), literature published between 1992 and mid-2002 was surveyed. In keeping with the current article’s objective – the examination of research on measuring IS success after the publication of the updated D&M IS success model – the period between 2003 and 2007 was considered an appropriate time frame for the literature search.

(3) Paper selection

Finally, we had to choose topic-related papers that had appeared in the selected literature sources within the defined time frame. We searched electronic databases (EBSCO, ScienceDirect, ProQuest), as well as specific journal and conference websites, to select papers for inclusion in the review. An initial list of papers was generated by searching titles, abstracts, and keywords for the strings “information systems success,” “IS success,” “information systems effectiveness,” and “IS effectiveness.” Only where no electronic search was possible did we scan the journals’ and conference proceedings’ tables of contents. To complete the selection process, we manually reviewed the resulting list of papers, selecting only the relevant ones.
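For illustration, the screening step could be expressed as follows. This is a hypothetical sketch of the matching logic only; the actual searches were run in the databases’ own interfaces.

```python
# Hypothetical sketch of the title/abstract/keyword screening step.
# Note that substring matching is deliberately broad (e.g.,
# "...analysis success..." contains "is success"), which is one reason
# the resulting list was reviewed manually afterwards.

SEARCH_STRINGS = (
    "information systems success",
    "is success",
    "information systems effectiveness",
    "is effectiveness",
)

def is_candidate(paper: dict) -> bool:
    """True if any search string occurs in title, abstract, or keywords."""
    text = " ".join(
        [paper.get("title", ""), paper.get("abstract", "")]
        + paper.get("keywords", [])
    ).lower()
    return any(s in text for s in SEARCH_STRINGS)

# Example with two made-up records: only the first one matches.
papers = [
    {"title": "Measuring IS success in enterprise systems",
     "abstract": "", "keywords": ["IS success", "ERP"]},
    {"title": "Agile effort estimation revisited",
     "abstract": "", "keywords": ["estimation"]},
]
candidates = [p for p in papers if is_candidate(p)]
```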

3.3 Literature pool

In total, we identified 64 articles by means of database searches and examinations of specific websites. Of the papers included in the review, 35 are journal articles and 29 conference papers. We subjected these papers to a more detailed review in keeping with the review framework presented below.

3.4 Review framework

We defined an analytical framework to systematically classify and describe the selected literature. To this end, we first examined the classification schemes of similar studies (e. g., Alavi and Carlson 1992; Grover et al. 1996; Palvia et al. 2004; Seddon et al. 1999) and adopted those evaluation categories that we considered suitable for our review. We then added further categories and items to cover all aspects important to this article’s objective. The resulting framework comprises eight categories: (1) theoretical foundation, (2) research approach, (3) object of analysis, (4) unit of analysis, (5) evaluation perspective, (6) data gathering, (7) data analysis, and (8) methodological type. Fig. 2 presents an overview of these categories.

Fig. 2 Literature review framework

Theoretical foundation

This category refers to the reference theories and generally accepted frameworks on which the authors primarily relied in designing and analyzing their research models. The initial list consisted of the D&M IS success model (DeLone and McLean 1992), the updated D&M IS success model (DeLone and McLean 2003), the Technology Acceptance Model (Davis 1989), and the Seddon model (Seddon 1997). We considered these the most widely accepted frameworks with regard to IS success measurement. Papers employing a theory not included in the initial list were classified as “other,” while papers that did not relate to any theory at all were classified as “n/a” (not applicable).

Research approach

The category “research approach” classifies the reviewed papers into empirical and non-empirical research. Following Alavi and Carlson (1992, pp. 47–48), papers are regarded as empirical if they rely on observation and apply some type of empirical method (e. g., survey, laboratory experiment, case study). Non-empirical papers are primarily based on ideas, frameworks, and speculation rather than on systematic observation; they may contain some empirical observations or data, but only in a secondary or supporting role. The main focus of this literature review is on the empirical literature in the field under examination; thus, most categories of the review framework refer to this type of research. Nevertheless, in view of King and He’s (2005, p. 671) observation that literature reviews tend to be biased towards sampling empirical studies, non-empirical papers such as frameworks, conceptual models, and speculation papers were also taken into consideration.

Object of analysis

The category “object of analysis” is used to classify the type of system that is being evaluated. Following Seddon et al. (1999, p. 6), this category comprises the following six components: (1) an aspect of IT use (e. g., a single algorithm or form of user interface), (2) a single IT application (e. g., a certain data warehouse), (3) a type of IT or IT application (e. g., knowledge management systems), (4) all IT applications used by an organization or sub-organization, (5) an aspect of a system development methodology, and (6) the IT function of an organization or sub-organization. This category was chosen for the review framework to disclose the main focus of the studies under review.

Unit of analysis

This category addresses the question of which unit of analysis is used. Grover et al. (1996, p. 181) argue that the evaluation of IS success should be conducted from both a micro and a macro view in order to build a complete picture. Thus, IS success should be considered at the individual as well as at the organizational level. The distinction is necessary because IS both supports individual decision making and can provide competitive advantage to organizations. Consequently, from a micro perspective, the success of an IS is related to the extent to which it satisfies the requirements of the organization’s members, whereas from a macro perspective, it is related to how much it helps the organization gain competitiveness.

Evaluation perspective

Different stakeholders in an organization may validly come to different conclusions about the same information system’s success (Seddon et al. 1999, p. 183; Sedera et al. 2004b). An IS that is viewed as successful from one standpoint may be deemed unsuccessful from another. The category “evaluation perspective” therefore specifies the person or group in whose interest the evaluation of IS success is conducted. Grover et al. (1996, p. 183) list four classes of evaluation perspectives: users, top management, IS personnel, and external entities (suppliers, customers, etc.). For a slightly broader differentiation, we added two further items: IS executives and multiple stakeholders. All evaluation perspectives can be combined with both an individual and an organizational unit of analysis.

Data gathering

The category “data gathering” refers to the research methodology that the authors employ to gather empirical data. The research methodology can be considered the “overall process guiding the research project” or the “primary evidence generation mechanism” (Palvia et al. 2003, p. 290). An analysis of the research methodology provides insights into the reliability and generalizability of the study results. For a closer analysis of the research methodology applied for data gathering in the empirical papers, we distinguished four empirical research methods: survey, interview, case study, and laboratory experiment. We consider these the dominant empirical methods in IS research. Papers employing any other empirical research method were classified as “other.”

Data analysis

We distinguished the following techniques, which we consider the most commonly used in IS research: structural equation modeling (e. g., LISREL, PLS), regression analysis, factor analysis, variance analysis, and cluster analysis. Studies using other methods, such as qualitative analysis techniques, were classified as “other.” Papers that did not employ any analysis technique were classified as “n/a.”

Methodological type

We classified the non-empirical papers according to their methodological type. Adopting the classification by Palvia et al. (2004, p. 529), we distinguished three non-empirical methodological types: research that intends to describe a framework or a conceptual model (“framework/conceptual model”); research that is not really based on any hard evidence but reflects the knowledge and experience of the authors (“speculation/commentary”); and research that is mainly based on the review of existing literature (“library research”). Non-empirical papers of other methodological types were classified as “other.”
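To summarize the framework, a paper’s classification can be thought of as one value per category. The following sketch – our own illustration, not part of the original coding procedure – encodes the categories and items described above as a simple schema.

```python
# Illustrative encoding of the eight-category review framework.
# Category items follow the descriptions above; "other" and "n/a"
# appear where the text allows them.

FRAMEWORK = {
    "theoretical_foundation": {"D&M 1992", "D&M 2003", "TAM",
                               "Seddon", "other", "n/a"},
    "research_approach": {"empirical", "non-empirical"},
    "object_of_analysis": {"aspect of IT use", "single IT application",
                           "type of IT application", "all IT applications",
                           "aspect of development methodology",
                           "IT function"},
    "unit_of_analysis": {"individual", "organizational"},
    "evaluation_perspective": {"users", "top management", "IS personnel",
                               "external entities", "IS executives",
                               "multiple stakeholders"},
    "data_gathering": {"survey", "interview", "case study",
                       "laboratory experiment", "other"},
    "data_analysis": {"structural equation modeling", "regression analysis",
                      "factor analysis", "variance analysis",
                      "cluster analysis", "other", "n/a"},
    "methodological_type": {"framework/conceptual model",
                            "speculation/commentary", "library research",
                            "other"},
}

def validate(classification: dict) -> None:
    """Raise ValueError if a classification uses an undefined item."""
    for category, item in classification.items():
        if item not in FRAMEWORK.get(category, set()):
            raise ValueError(f"{item!r} is not a valid item for {category!r}")

# Example: a (fictitious) survey-based empirical paper.
validate({"research_approach": "empirical", "data_gathering": "survey"})
```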

3.5 Review and classification process

After identifying and selecting the papers to be included in the review, and after defining our review framework, we read all the papers in order to classify them. The classification process involved a degree of interpretation on our part, as authors often do not explicitly state their research question or methodology. To account for this and to ensure high inter-rater reliability (Tinsley and Weiss 1975), we used a parallel assessment approach: two researchers reviewed and classified the selected articles independently. At a reconciliation meeting, we compared the results, reconciled discrepancies, and agreed on the final classification through discussion. The results of our review process are presented in the following section.
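As a supplementary illustration, agreement between two raters prior to reconciliation can be quantified with a chance-corrected statistic such as Cohen’s kappa. The following minimal sketch uses invented labels; the article reports the reconciliation procedure itself rather than an agreement coefficient.

```python
# Minimal sketch of an inter-rater agreement check for the parallel
# classification step. The labels below are invented for illustration.
from sklearn.metrics import cohen_kappa_score

rater_a = ["empirical", "empirical", "non-empirical", "empirical"]
rater_b = ["empirical", "non-empirical", "non-empirical", "empirical"]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 would indicate perfect agreement
```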

4 Results

4.1 Selection of relevant literature

After reviewing the selected publications, we assessed their relevance with respect to this article’s objective. Of the 64 articles identified in the first step of the selection process, we subsequently considered 16 journal articles and 7 conference papers “not relevant.” Since the focus of this review is on comprehensively assessing IS success through multidimensional approaches, we excluded publications examining single success dimensions. Consequently, 19 journal articles and 22 conference papers remained, totaling 41 relevant publications that we analyzed in depth. Fig. 3 illustrates the selection process.

Fig. 3 Selection of relevant literature

For the in-depth analysis, we classified the 41 remaining publications as either empirical (28) or non-empirical papers (13), according to their research approach.

4.2 Analysis of empirical papers

The main focus of this literature review is on the empirical literature in the field under examination. Consequently, we conducted an in-depth analysis of the selected empirical papers’ research design; the results of this analysis are presented below. To answer the question of “what” was measured, we then examine the studies’ objects of analysis.

Research design

The categorization of the empirical papers according to their research design is illustrated in Fig. 4. The results show that the dominant research design analyzes the individual impact of a certain type of information system on the basis of user evaluations, gathered through surveys (23) and analyzed with structural equation modeling (17). The main theoretical basis of the reviewed studies is the D&M IS success model, in either its original (18) or its updated version (8).

Fig. 4 Classification of empirical publications

Object of analysis

The review dimension “object of analysis” is used to classify the type of information system being evaluated. Approximately half of the empirical studies analyze the success of a certain type of IT application (15). In six publications, the success of a single IT application is assessed. Few studies evaluate the success of all of an organization’s IT applications (3) or an organization’s IT function (1). Empirical studies that validate general conceptual models without applying them (e. g., through focus group interviews) were categorized as “not applicable.” The results of the classification in terms of the object of analysis are presented in Tab. 3.

Tab. 3 Objects of analysis in empirical studies

4.3 Analysis of non-empirical papers

Although our study focuses primarily on empirical publications, we also conducted a less detailed, descriptive analysis of the non-empirical papers. All the non-empirical publications reviewed were classified as either a “framework/conceptual model” or “speculation/commentary”; no publication in the literature pool fell into the “library research” or “other” categories.

Of the 13 non-empirical articles under review, we classified eleven as a “framework/conceptual model.” In contrast to the models presented in the empirical papers, these frameworks and models are only derived theoretically; their validation and application are not presented in the respective papers. An overview of these publications is given in Tab. 4. The remaining two non-empirical papers were classified as “speculation/commentary.”

Tab. 4 Frameworks presented in non-empirical articles

5 Conclusions

5.1 Summary of findings

This article examines the existing literature on multidimensional approaches to measuring IS success by means of a literature review and a classification of articles published between 2003 and 2007 in order to explore the current state of research. We identified 41 articles in a systematic search of 34 leading North American and European IS journals and four reputable IS conferences. We analyzed the publications with regard to their theoretical foundation, research approach, and research design.

Based on an in-depth analysis of the 41 publications, we have deduced the following findings:

  • The D&M IS success model is still the dominant basis of IS success measurement. Of the 28 empirical articles reviewed, 22 refer directly to this model. Some studies test the model in its original version; the majority of the studies use the D&M IS success model – often in combination with other theoretical models – as a basis for deriving new research models that are applicable to the specific requirements of the corresponding problem domains.

  • Quantitative-empirical analysis is the primary methodology used in IS success measurement. The results of the literature classification indicate that the dominant empirical research analyzes the impact of a certain type of information system as evaluated by users, with data gathered through surveys and analyzed with structural equation modeling.

  • Most of the empirical studies assess IS success as an “individual impact” and, thus, from a micro view. Only twelve of the 28 empirical papers consider IS success at both the individual and the organizational level, thus building a more comprehensive picture of IS success.

  • Several success models for evaluating specific types of IS, such as knowledge management systems and enterprise systems, have been developed on the basis of existing theoretical models and frameworks. The adaptation of existing general models to more specific domains might serve as a basis for other research in the same area.

5.2 Limitations

Our research is limited in that this review is based on a restricted number of journals and conferences as publication sources. Although a field’s major contributions are likely to be found in its leading journals, this scoping decision may have excluded potentially important publications. Another limitation results from the database-driven approach: by predominantly relying on database queries for the literature search, this review may have failed to identify relevant publications that do not include any of the search terms in their titles, abstracts, or keywords. A further limitation lies in the term “IS success” being decisively influenced by DeLone and McLean’s work; the applied search strings were therefore more likely to identify publications referring to the D&M IS success model than articles with a different theoretical foundation. Finally, the analysis and classification of the publications were based on the parallel assessments of only two researchers; a parallel analysis by more researchers could have increased the results’ validity.

5.3 Recommendations for future research

Measuring IS success has been a popular stream of research during the last decades, resulting in many articles. Our study classifies the existing literature to provide an overview of prior research in the area. Based on the presented results, we offer the following suggestions for further research:

  • Our study’s limitations indicate that our analysis is based on a restricted number of publications. Future research could broaden the basis of the literature review by extending the range of journals and conference proceedings considered as literature sources. In addition, the database-driven approach could be complemented by manually scanning tables of contents.

  • Researchers have recommended the reuse of proven success measures to allow a comparison of results. The analysis of the papers in this review focuses on the classification of research on IS success. The measures used in the reviewed studies therefore remain uninvestigated. An analysis of the success measures used in recent publications would further contribute to a comprehensive overview of prior research.

  • Scientific literature holds many theoretical models for measuring IS success, but the usefulness of these approaches for practitioners is still largely unknown. The “reality check” by Rosemann and Vessey (2005; 2008) is a first step toward understanding the relevance of the D&M IS success model for practice. Further research should be undertaken in this direction to increase the relevance of research in this area without compromising its rigor.