Abstract
Measuring information systems (IS) success is of great interest to both researchers and practitioners. This article examines multidimensional approaches to measuring IS success and explores the current state of IS success research through a literature review and a classification of articles published between 2003 and 2007. Based on a total of 41 academic journal and conference publications, the relevant research is identified, and its results are categorized, consolidated, and discussed. The results show that the dominant empirical research analyzes the individual impact of a certain type of information system by ascertaining users’ evaluations through surveys and then applying structural equation modeling. The DeLone and McLean information systems success model is the main theoretical basis of the reviewed empirical studies. This article provides researchers with a comprehensive review and structuring of IS success research. Furthermore, opportunities for additional development are identified and future research directions are suggested.
1 Introduction
Annual worldwide spending on information technology (IT) has been increasing for many years. The International Data Corporation expects total IT expenditure to reach 1.48 trillion US dollars by 2010 (IDC 2007). At the same time, however, information systems (IS) failures continue to occur in large numbers. A questionnaire-based survey carried out in 2006 in the USA indicated that only 62 % of software projects were considered successful (Verner et al. 2006). Measuring the success of these investments and of the systems developed, as well as the paradox of high investments and low productivity returns (the “productivity paradox”), therefore remains a top concern for both practitioners and researchers (Brynjolfsson 1993).
During the last two and a half decades, research on measuring IS success – the clarification of an important dependent variable in IS research – has been a popular stream of research. A number of models have been proposed in attempts to define IS success and identify the various causes of success.
The purpose of this article is to present and classify the current state of research on the measurement of IS success. More concretely, the following questions are addressed:
- Which multidimensional approaches for assessing IS success are found in scientific literature?
- Which research designs were applied in past empirical studies?
- What are the results of empirical and non-empirical research?
In order to answer these questions, we analyzed literature published over the last five years by means of a structured literature review. Our review systematically analyzes, categorizes, and synthesizes a specified pool of journal and conference papers to provide a comprehensive overview of prior research in this area. According to Webster and Watson (2002, p. xiii), an effective literature review creates a firm foundation for advancing knowledge, closes areas where a plethora of research exists, and uncovers areas where research is needed. This article provides such a review and, thus, a theoretical basis for future research. The results of this paper should be especially relevant for researchers who wish to obtain not only an overview of the topic, but also insights into the latest publications.
We organize this article as follows: section 2 lays the foundation for the literature review by defining the term IS success and presenting previous research and widely accepted contributions in this area. In section 3, we outline our methodological approach to identifying, reviewing, and analyzing current publications on the measurement of IS success. The results of our literature review are presented in section 4. Finally, section 5 discusses the key findings and limitations, suggests directions for future research, and points out our main contributions.
2 Foundations
2.1 Terminological foundations
The IS literature provides several definitions and measures of IS success. As DeLone and McLean (1992, p. 61) state, there are nearly as many measures as there are studies. Tab. 1 illustrates the variety of definitions of IS success in previous publications.
Obviously, there is no ultimate definition of IS success. Each group of stakeholders who assess IS success in an organization (Grover et al. 1996, p. 183) has a different definition. From a software developer’s perspective, a successful information system is completed on time and under budget, has a set of features consistent with specifications, and functions correctly. Users may find an information system successful if it improves their work satisfaction or work performance. From an organizational perspective, a successful information system contributes to the company’s profits or creates a competitive advantage. Furthermore, IS success also depends on the type of system that is evaluated (Seddon et al. 1999, p. 21).
In order to provide a more general and comprehensive definition of IS success that covers these different points of view, DeLone and McLean (1992) reviewed the existing definitions of IS success and their corresponding measures, classifying them into six major categories. They created a multidimensional measuring model with interdependencies between the different success categories. This D&M IS success model received much attention from IS researchers, who have often treated IS success as a multidimensional construct, also measuring it as such.
Some researchers use the term “IS effectiveness” synonymously with “IS success.” Others use IS effectiveness to subsume what DeLone and McLean label “individual impact” and “organizational impact” (DeLone and McLean 1992), or “net benefits” (DeLone and McLean 2003). In the context of this article, the term IS success is used in the sense of DeLone and McLean’s comprehensive understanding to explicitly cover the whole range of suggested measures.
2.2 Previous research
In 1980, Peter Keen referred to the lack of a scientific basis in MIS research and raised the question of what the dependent variable in MIS research should be, arguing that surrogate variables like user satisfaction or hours of usage would continue to mislead researchers and evade the information theory issue (Keen 1980, p. 9). Motivated by this request for clarification of the dependent variable, many researchers have tried to identify the factors contributing to IS success. Largely, however, different researchers addressed different aspects of IS success, making comparisons difficult. In order to organize the large body of literature existing at that time, to integrate the different concepts and findings, and thus to present a comprehensive taxonomy, DeLone and McLean introduced their (first) IS success model (DeLone and McLean 1992).
Building on Shannon and Weaver’s (1949) three levels of communication, together with Mason’s expansion of the effectiveness or influence level (Mason 1978), DeLone and McLean defined six distinct dimensions of IS success: system quality, information quality, use, user satisfaction, individual impact, and organizational impact. Based on this framework, they classified the empirical studies published in seven highly-ranked MIS journals between January 1981 and January 1988. Their examination supports the presumption that the many success measures fall into these six major interrelated and interdependent categories. The authors’ IS success model was their attempt to integrate these dimensions into a comprehensive framework. Judged by its frequent citations in articles published in leading journals, the D&M IS success model has, despite some revealed weaknesses (Hu 2003), become the dominant evaluation framework in MIS research, in part due to its understandability and simplicity.
Motivated by DeLone and McLean’s call for further development and validation of their model, many researchers have attempted to extend or respecify the original model. A number of researchers claim that the D&M IS success model is incomplete. They suggest that more dimensions should be included in the model, or present alternative success models (e. g., Ballantine et al. 1996; Seddon 1997; Seddon and Kiew 1994). Other researchers focus on the application and validation of the model (e. g., Rai et al. 2002).
Ten years after the publication of their first model, and based on the evaluation of the many contributions to it, DeLone and McLean proposed an updated IS success model, as depicted in Fig. 1 (DeLone and McLean 2002; DeLone and McLean 2003).
The primary differences between the original and the updated model are: (1) the addition of “service quality” to reflect the importance of service and support in successful e-commerce systems; (2) the addition of “intention to use,” which measures user attitude, as an alternative measure of “use”; and (3) the collapsing of “individual impact” and “organizational impact” into a more parsimonious “net benefits” construct. The updated model consists of six interrelated dimensions of IS success: information quality, system quality, and service quality; (intention to) use; user satisfaction; and net benefits. The arrows indicate the proposed associations between the success dimensions. The model can be interpreted as follows: a system can be evaluated in terms of information, system, and service quality; these characteristics affect subsequent use or intention to use, as well as user satisfaction. Using the system yields certain benefits, and these net benefits in turn (positively or negatively) influence user satisfaction and the further use of the information system.
3 Methodology
3.1 Literature review
The increasing number of published books and journals, as well as conferences and workshops has made the research process more complex and time-consuming. Consequently, there is a greater need to describe, synthesize, evaluate, and integrate the results of articles on a particular field of research. The process of conducting a literature review can be regarded as a scientific procedure that should be guided by an appropriate research method (Fettke 2006).
According to the fifth edition of the Publication Manual of the American Psychological Association (APA 2001, p. 7), review articles are critical evaluations of material that has already been published. By organizing, integrating, and evaluating previously published material, the author of a review article examines current research’s progress toward clarifying a problem. In a sense, a review article is a tutorial in that the author
- defines and clarifies the problem;
- summarizes previous investigations in order to inform the reader of the state of current research;
- identifies relations, contradictions, gaps, and inconsistencies in the relevant literature; and
- suggests the next step or steps in solving the problem.
3.2 Literature selection process
The basis of a literature review is the relevant literature on the topic to be examined. A systematic search should ensure that a relatively complete number of relevant articles are accumulated. Our process of literature selection for inclusion in this review consisted of three steps: (1) selecting the literature sources, (2) defining a time frame for analysis, and (3) selecting articles to be reviewed.
(1) Source selection
The first step of the literature selection process was to compile a list of literature sources that was as comprehensive as possible. We started by taking into consideration the journals surveyed by DeLone and McLean (1992; 2002; 2003). As a field’s major contributions are likely to appear in its leading journals (Webster and Watson 2002, p. xvi), we extended the initial list of twelve journals with additional top journals. Based on Saunders’s (2008) MIS journal ranking, we added further journals in ascending order of their average rank, up to a value of 30. This ranking is a meta-analysis of nine separate journal rankings and therefore represents not a single researcher’s perception, but that of many. Journals ranked by only one of the nine original sources were not taken into consideration, as they were regarded as lacking representativeness. Some journals were excluded due to their specialized character (e. g., “Operations Research”). In total, we selected 34 leading North American and European IS journals. In addition, we added the proceedings of four major international IS conferences considered important for the IS field (Caya and Pinsonneault 2004, p. 2; Gonzalez et al. 2006, p. 822). The inclusion of conference proceedings allows very recent research to be considered, although it entails the risk of some duplication where older papers first appeared in conference proceedings and were later published in journals. Tab. 2 lists all 38 literature sources that we surveyed to identify relevant articles. Books were deliberately omitted from the selection process on the assumption that their authors had already published their results in journals; moreover, the quality of book contributions is not always apparent, since not all of them are subjected to a formalized review process.
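The ranking-based extension step described above amounts to a simple filter over candidate journals: discard those ranked by only one of the nine original sources, keep those whose average rank does not exceed 30, and order the rest by ascending average rank. The sketch below illustrates this selection logic; the journal names and rank values are invented for illustration and are not the actual Saunders (2008) ranking data.

```python
# Illustrative sketch of the ranking-based journal filter described above.
# Journal entries and rank values are hypothetical, not real ranking data.

def select_journals(journals, max_avg_rank=30, min_sources=2):
    """Keep journals ranked by at least `min_sources` of the original
    rankings and whose average rank is at most `max_avg_rank`."""
    selected = []
    for name, ranks in journals:
        if len(ranks) < min_sources:  # only one source: lacks representativeness
            continue
        avg = sum(ranks) / len(ranks)
        if avg <= max_avg_rank:
            selected.append((name, avg))
    # ascending order of average rank, as in the selection process
    return sorted(selected, key=lambda item: item[1])

# Hypothetical example data: (journal, ranks assigned by the source rankings)
candidates = [
    ("Journal A", [1, 2, 1, 3]),   # top journal, kept
    ("Journal B", [28, 35, 25]),   # average ~29.3, still within 30, kept
    ("Journal C", [12]),           # ranked by only one source, excluded
    ("Journal D", [40, 45, 38]),   # average rank above 30, excluded
]

for name, avg in select_journals(candidates):
    print(f"{name}: average rank {avg:.1f}")
```

Specialized journals (e. g., “Operations Research”) were additionally excluded by hand, a judgment step that such a filter cannot capture.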
(2) Time frame selection
The second step of the literature selection process was to define an appropriate time frame. For their original model, DeLone and McLean (1992) reviewed publications that appeared between January 1981 and January 1988. For their updated model of IS success (DeLone and McLean 2003), literature published between 1992 and mid-2002 was surveyed. In keeping with the current article’s objective – the examination of research on measuring IS success after the publication of the updated D&M IS success model – the period between 2003 and 2007 was considered an appropriate time frame for the literature search.
(3) Paper selection
Finally, we had to choose topic-related papers that had appeared in the selected literature sources within the defined time frame. We searched electronic databases (EBSCO, ScienceDirect, ProQuest) as well as specific journal and conference websites to select papers for inclusion in the review. An initial list of papers was generated by applying the search strings “information systems success,” “IS success,” “information systems effectiveness,” and “IS effectiveness” to titles, abstracts, and keywords. Only if no electronic search was possible did we scan the journals’ and conference proceedings’ tables of contents. To complete the selection process, we manually reviewed the resulting list of papers and retained only the relevant ones.
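The database query step can be approximated as a keyword filter over paper metadata. The sketch below is a simplified, hypothetical illustration: the record structure and sample papers are invented, and real database interfaces (EBSCO, ScienceDirect, ProQuest) apply their own query syntax and word-boundary handling rather than the naive substring matching used here.

```python
# Hedged sketch of the search-string matching over titles, abstracts, and
# keywords. The paper records below are invented examples.

SEARCH_STRINGS = [
    "information systems success",
    "is success",
    "information systems effectiveness",
    "is effectiveness",
]

def matches(paper):
    """Return True if any search string occurs in the paper's title,
    abstract, or keyword list (case-insensitive substring match)."""
    haystack = " ".join(
        [paper["title"], paper["abstract"]] + paper["keywords"]
    ).lower()
    return any(s in haystack for s in SEARCH_STRINGS)

papers = [
    {"title": "Measuring IS success in e-commerce",
     "abstract": "...", "keywords": []},
    {"title": "Agile development practices",
     "abstract": "A study of sprint planning.", "keywords": []},
]

# Initial list of candidate papers; the first record matches, the second
# does not. The subsequent manual relevance check is not modeled here.
initial_list = [p for p in papers if matches(p)]
print(len(initial_list))
```

Note that plain substring matching would also hit unintended phrases (e. g., “analysis success” contains “is success”), which is one reason the resulting list still had to be reviewed manually.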
3.3 Literature pool
In total, we identified 64 articles by means of database searches and examinations of specific websites. Of the papers included in the review, 35 are journal articles and 29 conference papers. We subjected these papers to a more detailed review in keeping with the review framework presented below.
3.4 Review framework
We defined an analytical framework to systematically classify and describe the selected literature. We consequently first examined the classification schemes of similar studies (e. g., Alavi and Carlson 1992; Grover et al. 1996; Palvia et al. 2004; Seddon et al. 1999) and adapted evaluation categories that were considered suitable for our review. We thereafter added further categories and items to cover all important aspects of this article’s objective. The resulting framework comprises eight categories: (1) theoretical foundation, (2) research approach, (3) object of analysis, (4) unit of analysis, (5) evaluation perspective, (6) data gathering, (7) data analysis, and (8) methodological type. Fig. 2 presents an overview of these categories.
Theoretical foundation
This category refers to those reference theories and generally accepted frameworks on which the authors primarily relied in the design and analysis of their research models. The initial list consisted of: the D&M IS success model (DeLone and McLean 1992), the updated D&M IS success model (DeLone and McLean 2003), the Technology Acceptance Model (Davis 1989), and the Seddon Model (Seddon 1997). We considered these frameworks as the most accepted ones with regard to IS success measurement. Papers employing a theory not included in the initial list were classified as “other,” while papers that did not relate to any theory at all were classified as “n/a” (not applicable).
Research approach
The category “research approach” classifies the reviewed papers into empirical and non-empirical research. Following Alavi and Carlson (1992, pp. 47–48), papers are regarded as empirical if they rely on observation and apply some type of empirical method (e. g., survey, laboratory experiment, case study). Non-empirical papers are primarily based on ideas, frameworks, and speculation rather than on systematic observation; they may contain some empirical observations or data, but only in a secondary or supporting role. The main focus of this literature review is on the empirical literature in the field under examination, so most of the categories of the review framework refer to this type of research. Nevertheless, in consideration of King and He’s (2005, p. 671) observation of a sampling bias towards empirical studies in reviews, non-empirical papers such as frameworks, conceptual models, and speculation papers were also taken into consideration.
Object of analysis
The category “object of analysis” is used to classify the type of system that is being evaluated. Following Seddon et al. (1999, p. 6), this category comprises the following six components: (1) an aspect of IT use (e. g., a single algorithm or form of user interface), (2) a single IT application (e. g., a certain data warehouse), (3) a type of IT or IT application (e. g., knowledge management systems), (4) all IT applications used by an organization or sub-organization, (5) an aspect of a system development methodology, and (6) the IT function of an organization or sub-organization. This category was chosen for the review framework to disclose the main focus of the studies under review.
Unit of analysis
This category responds to the question: What unit of analysis is used? Grover et al. (1996, p. 181) argue that the evaluation of IS success should be conducted from both a micro and a macro view in order to build a complete picture. Thus, IS success should be considered at the individual as well as at the organizational level. The distinction is necessary because IS supports individual decision making and can also provide competitive advantage in organizations. Consequently, from a micro perspective, the success of an IS is related to the extent to which IS satisfies the requirements of the organization’s members, whereas from a macro perspective, it is related to how much the IS helps organizations to gain competitiveness.
Evaluation perspective
Different stakeholders in an organization may validly come to different conclusions about the same information system’s success (Seddon et al. 1999, p. 183; Sedera et al. 2004b). Though an IS may be viewed as successful from one standpoint, it may be interpreted as unsuccessful from another. The category “evaluation perspective” therefore specifies the person or group in whose interest the evaluation of IS success is conducted. Grover et al. (1996, p. 183) list four classes of evaluation perspectives: users, top management, IS personnel, and external entities (suppliers, customers, etc.). For a slightly broader differentiation, we added two further items: IS executives and multiple stakeholders. All evaluation perspectives can employ both an individual and an organizational unit of analysis.
Data gathering
The category “data gathering” refers to the research methodology that the authors employed to gather empirical data. The research methodology can be considered the “overall process guiding the research project” or the “primary evidence generation mechanism” (Palvia et al. 2003, p. 290). An analysis of the research methodology provides insights into the reliability and generalizability of the study results. For a closer analysis of the research methodology applied for data gathering in the empirical papers, we distinguished four empirical research methods, which we consider the dominant ones in IS research: survey, interview, case study, and laboratory experiment. “Other” covered papers employing any other empirical research method.
Data analysis
We distinguished the following techniques that we consider most commonly used in IS research: structural equation modeling (e. g., LISREL, PLS), regression analysis, factor analysis, variance analysis, and cluster analysis. “Other” covered studies using methods like qualitative analysis techniques. Papers that did not employ analysis techniques were classified as “n/a.”
Methodological type
We classified the non-empirical papers according to their methodological type. Adopting the classification by Palvia et al. (2004, p. 529), we distinguished three non-empirical methodological types: research that intends to describe a framework or a conceptual model (“framework/conceptual model”); research that is not really based on any hard evidence but reflects the knowledge and experience of the authors (“speculation/commentary”); and research that is mainly based on the review of existing literature (“library research”). Non-empirical papers of other methodological types were classified as “other.”
3.5 Review and classification process
After identifying and selecting the papers to be included in the review and defining our review framework, we read all the papers in order to classify them. The classification involved a degree of interpretation on our part, as authors often did not explicitly state their research question or methodology. In order to account for this and to ensure high inter-rater reliability (Tinsley and Weiss 1975), we used a parallel assessment approach: two researchers reviewed and classified the selected articles independently. At a reconciliation meeting, we compared the results, reconciled discrepancies, and agreed on the final classification through discussion. The results of our review process are presented in the following section.
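Inter-rater reliability for such a parallel assessment is commonly quantified with Cohen’s kappa, which corrects the raw agreement rate for the agreement expected by chance. The article does not state which statistic was computed, so the following is a generic sketch with invented classification labels for ten hypothetical papers.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning one category per item."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # observed agreement: fraction of items with identical labels
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # expected agreement for independent raters with the same marginals
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(count_a[c] * count_b[c] for c in count_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical classifications of ten papers by two independent reviewers
a = ["empirical", "empirical", "non-empirical", "empirical", "empirical",
     "non-empirical", "empirical", "empirical", "non-empirical", "empirical"]
b = ["empirical", "empirical", "non-empirical", "empirical", "non-empirical",
     "non-empirical", "empirical", "empirical", "empirical", "empirical"]

print(f"kappa = {cohens_kappa(a, b):.2f}")  # prints "kappa = 0.52"
```

A kappa of 1 indicates perfect agreement and 0 agreement no better than chance; a middling value, as in this invented example, would prompt exactly the kind of reconciliation meeting described above.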
4 Results
4.1 Selection of relevant literature
After reviewing the selected publications, we assessed their relevance with respect to this article’s objective. Of the 64 articles identified in the first step of the selection process, we considered 16 journal articles and 7 conference papers “not relevant.” Since the focus of this review is on comprehensively assessing IS success through multidimensional approaches, we excluded publications examining single success dimensions. Consequently, 19 journal articles and 22 conference papers remained, totaling 41 relevant publications that we analyzed in depth. Fig. 3 illustrates the selection process.
For the in-depth analysis, we classified the 41 remaining publications as either empirical (28) or non-empirical papers (13), according to their research approach.
4.2 Analysis of empirical papers
The main focus of this literature review is on the empirical literature in the field under examination. Consequently, we conducted an in-depth analysis of the selected empirical papers’ research design; the results are presented below. To answer the question of “what” was measured, we also examined the studies’ objects of analysis.
Research design
The categorization of the empirical papers according to their research design is illustrated in Fig. 4. The results show that the dominant research is that which analyzes the individual impact of a certain type of information system that users evaluate by means of surveys (23) and structural equation modeling (17). The main theoretical basis of the reviewed studies is the D&M IS success model (either the original (18) or updated version (8)).
Object of analysis
The review dimension “object of analysis” is used to classify the type of information system being evaluated. Approximately half of the empirical studies analyze the success of a certain type of IT application (15). In six publications, the success of a single IT application is assessed. Few studies evaluate the success of all of an organization’s IT applications (3) or an organization’s IT function (1). Empirical studies validating general conceptual models without applying them (e. g., by conducting focus group interviews) were categorized as “not applicable.” The results of the classification in terms of the object of analysis are presented in Tab. 3.
4.3 Analysis of non-empirical papers
Although our study focuses primarily on empirical publications, we conducted a less detailed, descriptive analysis of the non-empirical papers. All non-empirical publications reviewed were classified as either a “framework/conceptual model” or “speculation/commentary”; no publication in the literature pool was categorized as “library research” or any other methodological type.
Of the 13 non-empirical articles under review, we classified eleven as a “framework/conceptual model.” In contrast to the models presented in the empirical papers, these frameworks and models have only been derived theoretically; their validation and application are not presented in the respective papers. An overview of these publications is given in Tab. 4. We classified the remaining two non-empirical papers as “speculation/commentary.”
5 Conclusions
5.1 Summary of findings
This article examines the existing literature on multidimensional approaches to measuring IS success by means of a literature review and a classification of articles published between 2003 and 2007 in order to explore the current state of research. We identified 41 articles in a systematic search of 34 leading North American and European IS journals and four reputable IS conferences. We analyzed the publications with regard to their theoretical foundation, research approach, and research design.
Based on an in-depth analysis of the 41 publications, we have deduced the following findings:
- The D&M IS success model is still the dominant basis of IS success measurement. Of the 28 empirical articles reviewed, 22 refer directly to this model. Some studies test the model in its original version; the majority of the studies use the D&M IS success model – often in combination with other theoretical models – as a basis for deriving new research models that are applicable to the specific requirements of the corresponding problem domains.
- Quantitative-empirical analysis is the primary methodology used in IS success measurement. The results of the literature classification indicate that the dominant empirical research is an analysis of the impact of a certain type of information system as evaluated by users by means of surveys and structural equation modeling.
- Most of the empirical studies assess IS success as an “individual impact” and, thus, from a micro view. Only twelve of the 28 empirical papers consider IS success from both the individual and the organizational level, thus building a more comprehensive picture of IS success.
- Several success models for evaluating specific types of IS, like knowledge management or enterprise systems, have been developed on the basis of existing theoretical models and frameworks. The adaptation of existing general models for more specific approaches might serve as a basis for other research in the same area.
5.2 Limitations
Our research is limited in that this review is based on a restricted number of journals and conferences as publication sources. Although major contributions to the field are likely to be found in leading journals, this scoping decision may have excluded potentially important publications. Another limitation results from the database-driven approach: by predominantly relying on database queries for the literature search, this review may have failed to identify relevant publications that do not include any of the search terms in their titles, abstracts, or keywords. A further limitation lies in the term “IS success” being decisively influenced by DeLone and McLean’s work; the applied search strings may therefore have been more likely to identify publications that refer to the D&M IS success model than articles with a different theoretical foundation. Finally, the analysis and classification of the publications were based on the parallel assessments of only two researchers. A parallel analysis by more researchers could have increased the results’ validity.
5.3 Recommendations for future research
Measuring IS success has been a popular stream of research during the last decades, resulting in many articles. Our study classifies the existing literature to provide an overview of prior research in the area. Based on the presented results, we offer the following suggestions for further research:
- Our study’s limitations indicate that our analysis is based on a restricted number of publications. Future research could broaden the basis of the literature review by extending the range of journals and conference proceedings considered as literature sources. In addition, the database-driven approach could be complemented by a manual scanning of tables of contents.
- Researchers have recommended the reuse of proven success measures to allow a comparison of results. The analysis in this review focuses on the classification of research on IS success; the measures used in the reviewed studies therefore remain uninvestigated. An analysis of the success measures used in recent publications would further contribute to a comprehensive overview of prior research.
- The scientific literature holds many theoretical models for measuring IS success, but the usefulness of these approaches for practitioners is largely unknown. The “reality check” by Rosemann and Vessey (2005; 2008) is a first step toward understanding the relevance of the D&M IS success model for practice. Further research should be undertaken in this direction to increase the relevance of research in this area without compromising its rigor.
References
Alavi M, Carlson P (1992) A review of MIS research and disciplinary development. Journal of Management Information Systems 8(4):45–62
Almutairi H, Subramanian GH (2005) An empirical application of the DeLone and McLean model in the Kuwaiti private sector. Journal of Computer Information Systems 45(3):113–122
APA (2001) Publication manual of the American Psychological Association, 5th edn. APA, Washington, DC
Bailey JE, Pearson SW (1983) Development of a tool for measuring and analyzing computer user satisfaction. Management Science 29(5):530–545
Ballantine J, Bonner M, Levy M, Martin A, Munro I, Powell P (1996) The 3-D model of information systems success: the search for the dependent variable continues. Information Resource Management Journal 9(4):5–14
Bartis E, Mitev N (2007) A multiple narrative approach to information systems failure: a successful system that failed. In: Proceedings of the 15th European conference on information systems (ECIS 07). St. Gallen
Bélanger F, Fan W, Schaupp LC, Krishen A, Everhart J, Poteet D, Nakamoto K (2006) Web site success metrics: addressing the duality of goals. Communications of the ACM 49(12):114–116
Bradley RV, Pridmore JL, Byrd TA (2006) Information systems success in the context of different corporate cultural types: an empirical investigation. Journal of Management Information Systems 23(2):267–294
Briggs RO, De Vreede GJ, Nunamaker JF Jr, Sprague RH Jr (2003) Special issue: information systems success. Journal of Management Information Systems 19(4):5–8
Brynjolfsson E (1993) The productivity paradox of information technology. Communications of the ACM 36(12):66–77
Byrd TA, Thrasher EH, Lang T, Davidson NW (2006) A process-oriented perspective of IS success: examining the impact of IS on operational cost. Omega 34(5):448–460
Caya O, Pinsonneault A (2004) “Are we in crisis?” An assessment of thirty years of introspection in IS. In: Proceedings of the 32nd annual conference of the Administrative Sciences Association of Canada (ASAC 2004). Quebec
Cha-Jan Chang J, King WR (2005) Measuring the performance of information systems: a functional scorecard. Journal of Management Information Systems 22(1):85–115
Chae HCM (2007) IS success model and perceived IT value. In: Proceedings of the 13th Americas conference on information systems (AMCIS 07). Keystone
Cheung CMK, Lee MKO (2005) The asymmetric effect of website attribute performance on satisfaction: an empirical study. In: Proceedings of the 38th Hawaii international conference on system sciences (HICSS 05). Big Island
Clay PF, Dennis AR, Ko DG (2005) Factors affecting the loyal use of knowledge management systems. In: Proceedings of the 38th Hawaii international conference on system sciences (HICSS 05). Big Island
Davis FD (1989) Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly 13(3):319–340
DeLone WH, McLean ER (1992) Information systems success: the quest for the dependent variable. Information Systems Research 3(1):60–95
DeLone WH, McLean ER (2002) Information systems success revisited. In: Proceedings of the 35th Hawaii international conference on system sciences (HICSS 02). Big Island, pp 238–249
DeLone WH, McLean ER (2003) The DeLone and McLean model of information systems success: a ten-year update. Journal of Management Information Systems 19(4):9–30
DeLone WH, McLean ER (2004) Measuring e-commerce success: applying the DeLone & McLean Information systems success model. International Journal of Electronic Commerce 9(1):31–47
Fettke P (2006) State-of-the-Art des State-of-the-Art: Eine Untersuchung der Forschungsmethode “Review” innerhalb der Wirtschaftsinformatik. WIRTSCHAFTSINFORMATIK 48(4):257–266
Gable G, Sedera D, Chan T (2003) Enterprise systems success: a measurement model. In: Proceedings of the 24th international conference on information systems (ICIS 03). Seattle
Garrity EJ, Glassberg B, Kim YJ, Sanders GL, Shin SK (2005) An experimental investigation of web-based information systems success in the context of electronic commerce. Decision Support Systems 39(3):485–503
Gatian AW (1994) Is user satisfaction a valid measure of system effectiveness? Information & Management 26(3):119–131
Gonzalez R, Gasco J, Llopis J (2006) Information systems outsourcing: A literature analysis. Information & Management 43(7):821–834
Goodhue DL, Thompson RL (1995) Task-technology fit and individual performance. MIS Quarterly 19(2):213–236
Grover V, Jeong SR, Segars AH (1996) Information systems effectiveness: the construct space and patterns of application. Information & Management 31(4):177–191
Hu PJH (2003) Evaluating telemedicine systems success: a revised model. In: Proceedings of the 36th Hawaii international conference on system sciences (HICSS 03). Big Island
IDC (2007) IDC press release. Framingham
Iivari J (2005) An empirical test of the DeLone-McLean model of information system success. The DATA BASE for Advances in Information Systems 36(2):8–27
Jennex ME, Olfman L (2003) A knowledge management success model: an extension of DeLone and McLean’s IS success model. In: Proceedings of the 9th Americas conference on information systems (AMCIS 03). Tampa
Jennex ME, Olfman L (2004) Assessing knowledge management success/effectiveness models. In: Proceedings of the 37th Hawaii international conference on system sciences (HICSS 04). Big Island
Keen PGW (1980) Reference disciplines and a cumulative tradition. In: Proceedings of the 1st international conference on information systems (ICIS 80). Philadelphia, pp 9–18
King WR, He J (2005) Understanding the role and methods of meta-analysis in IS research. Communications of the AIS 16:656–686
Kleist VF, Williams L, Peace AG (2004) A performance evaluation framework for a public university knowledge management system. Journal of Computer Information Systems 44(3):9–16
Kulkarni UR, Ravindran S, Freeze R (2006) A knowledge management success model: theoretical development and empirical validation. Journal of Management Information Systems 23(3):309–347
Larsen KRT (2003) A taxonomy of antecedents of information systems success: variable analysis studies. Journal of Management Information Systems 20(2):169–246
Lucas HC Jr (1978) Empirical evidence for a descriptive model of implementation. MIS Quarterly 2(2):27
Mao E, Ambrose P (2004) A theoretical and empirical validation of is success models in a temporal and quasi volitional technology usage context. In: Proceedings of the 10th Americas conference on information systems (AMCIS 04). New York
Nelson RR, Todd PA, Wixom BH (2005) Antecedents of information and system quality: an empirical examination within the context of data warehousing. Journal of Management Information Systems 21(4):199–235
Palvia P, Mao E, Midha V (2003) Management information systems research: what’s there in a methodology? Communications of the AIS 11:289–309
Palvia P, Mao E, Midha V (2004) Research methodologies in MIS: an update. Communications of the AIS 14:526–542
Pare G, Aubry D, Lepanto L, Sicotte C (2005) Evaluating PACS success: a multidimensional model. In: Proceedings of the 38th Hawaii international conference on system sciences (HICSS 05). Big Island
Qian Z, Bock GW (2005) An empirical study on measuring the success of knowledge repository systems. In: Proceedings of the 38th Hawaii international conference on system sciences (HICSS 05). Big Island
Rai A, Lang SS, Welker RB (2002) Assessing the validity of IS success models: an empirical test and theoretical analysis. Information Systems Research 13(1):50–69
Rainer RK Jr, Watson HJ (1995) The keys to executive information system success. Journal of Management Information Systems 12(2):83–98
Rosemann M, Vessey I (2005) Linking theory and practice: performing a reality check on a model of IS success. In: Proceedings of the 13th European conference on information systems (ECIS 05). Regensburg
Rosemann M, Vessey I (2008) Toward improving the relevance of information systems research to practice: the role of applicability checks. MIS Quarterly 32(1):1–22
Saunders C (2008) MIS journal rankings. http://www.isworld.org/csaunders/rankings.htm. Accessed 2008-10-15
Schaupp LC, Fan W, Belanger F (2006) Determining success for different website goals. In: Proceedings of the 39th Hawaii international conference on system sciences (HICSS 06). Kauai
Seddon PB (1997) A respecification and extension of the DeLone and McLean model of IS success. Information Systems Research 8(3):240–253
Seddon PB, Kiew MY (1994) A partial test and development of the DeLone and McLean model of IS success. In: Proceedings of the 15th international conference on information systems (ICIS 94). Vancouver, pp 99–110
Seddon PB, Staples S, Patnayakuni R, Bowtell M (1999) Dimensions of information systems success. Communications of the AIS 2:1–60
Sedera D (2006) An empirical investigation of the salient characteristics of IS-success models. In: Proceedings of the 12th Americas conference on information systems (AMCIS 06). Acapulco
Sedera D, Gable G (2004a) A factor and structural equation analysis of the enterprise systems success measurement model. In: Proceedings of the 37th Hawaii international conference on system sciences (HICSS 04). Big Island
Sedera D, Gable G (2004b) A factor and structural equation analysis of the enterprise systems success measurement model. In: Proceedings of the 25th international conference on information systems (ICIS 04). Washington, DC
Sedera D, Gable G, Chan T (2004a) Knowledge management as an antecedent of enterprise system success. In: Proceedings of the 10th Americas conference on information systems (AMCIS 04). New York
Sedera D, Gable G, Chan T (2004b) Measuring enterprise systems success: the importance of a multiple stakeholder perspective. In: Proceedings of the 12th European conference on information systems (ECIS 04). Turku
Seen M, Rouse AC, Beaumont N (2007) Explaining and predicting information systems acceptance and success: an integrative model. In: Proceedings of the 15th European conference on information systems (ECIS 07). St Gallen
Shannon CE, Weaver W (1949) The mathematical theory of communication. University of Illinois Press, Urbana
Shin B (2003) An exploratory investigation of system success factors in data warehousing. Journal of the AIS 4:141–170
Sugumaran V, Arogyaswamy B (2003) Measuring IT performance: “contingency” variables and value modes. Journal of Computer Information Systems 44(2):79–86
Thomas P (2006) Information systems success and technology acceptance within government organization. In: Proceedings of the 12th Americas conference on information systems (AMCIS 06). Acapulco
Tinsley HE, Weiss DJ (1975) Interrater reliability and agreement of subjective judgments. Journal of Counseling Psychology 22(4):358–376
Verner J, Cox K, Bleistein SJ (2006) Predicting good requirements for in-house development projects. In: Proceedings of the 2006 ACM/IEEE international symposium on empirical software engineering. Rio de Janeiro, pp 154–163
Watson RT, Pitt LF, Kavan CB (1998) Measuring information systems service quality: lessons from two longitudinal case studies. MIS Quarterly 22(1):61–79
Webster J, Watson RT (2002) Analyzing the past to prepare for the future: writing a literature review. MIS Quarterly 26(2):xiii–xxiii
Wilkin C, Castleman T (2003) Development of an instrument to evaluate the quality of delivered information systems. In: Proceedings of the 36th Hawaii international conference on system sciences (HICSS 03). Big Island
Wixom BH, Todd PA (2005) A theoretical integration of user satisfaction and technology acceptance. Information Systems Research 16(1):85–102
Wu JH, Wang YM (2006) Measuring KMS success: A respecification of the DeLone and McLean’s model. Information & Management 43(6):728–739
Yusof MM, Paul RJ, Stergioulas LK (2006) Towards a framework for health information systems evaluation. In: Proceedings of the 39th Hawaii international conference on system sciences (HICSS 06). Kauai
Accepted after two revisions by Prof. Dr. Buxmann.
This article is also available in German in print and via http://www.wirtschaftsinformatik.de: Urbach N, Smolnik S, Riempp G (2009) Der Stand der Forschung zur Erfolgsmessung von Informationssystemen – Eine Analyse vorhandener mehrdimensionaler Ansätze. WIRTSCHAFTSINFORMATIK. doi: 10.1007/11576-009-0181-y.
Appendix
Classification of empirical papers
All 28 empirical papers were classified along six dimensions: theoretical foundation, object of analysis, unit of analysis, evaluation perspective, data gathering, and data analysis. The classified papers are: Almutairi and Subramanian (2005), Bartis and Mitev (2007), Bradley et al. (2006), Byrd et al. (2006), Cha-Jan Chang and King (2005), Cheung and Lee (2005), Clay et al. (2005), Gable et al. (2003), Garrity et al. (2005), Iivari (2005), Kulkarni et al. (2006), Larsen (2003), Mao and Ambrose (2004), Nelson et al. (2005), Pare et al. (2005), Qian and Bock (2005), Rosemann and Vessey (2005), Sabherwal et al. (2006), Schaupp et al. (2006), Sedera (2006), Sedera and Gable (2004a; 2004b), Sedera et al. (2004a; 2004b), Shin (2003), Wilkin and Castleman (2003), Wixom and Todd (2005), and Wu and Wang (2006).

The column totals are as follows; since a paper can fall into several categories of one dimension, a dimension’s totals may exceed the number of papers (n = 28).

Dimension | Category: number of papers
---|---
Theoretical foundation | DeLone and McLean (1992): 18; DeLone and McLean (2003): 8; Davis (1989): 4; Seddon (1997): 2; other: 15; n/a: 1
Object of analysis | Single IT application: 6; type of IT or IT application: 15; all IT applications: 3; IT function of an organization: 1; n/a: 3
Unit of analysis | Individual level: 26; organizational level: 12; n/a: 2
Evaluation perspective | Users: 19; IS executives: 2; IS personnel: 1; multiple stakeholders: 4; n/a: 2
Data gathering | Survey: 23; interview: 3; case study: 1; other: 5
Data analysis | Structural equation modeling: 17; regression analysis: 3; factor analysis: 11; variance analysis: 7; cluster analysis: 2; other: 6
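A classification of this kind can be encoded and tallied programmatically. The sketch below uses hypothetical paper entries (the names and category assignments are illustrative placeholders, not taken from the review) to show how per-dimension category counts arise, including why a dimension’s totals can exceed the number of papers:

```python
from collections import Counter

# Hypothetical excerpt of such a classification: each paper is mapped to one
# or more categories per dimension. Entries are illustrative placeholders.
classification = {
    "Paper A": {
        "theoretical foundation": ["DeLone and McLean (1992)"],
        "data gathering": ["Survey"],
        "data analysis": ["Structural equation modeling"],
    },
    "Paper B": {
        "theoretical foundation": ["DeLone and McLean (2003)"],
        "data gathering": ["Survey", "Interview"],
        "data analysis": ["Factor analysis", "Structural equation modeling"],
    },
}

def column_totals(papers):
    """Count, per dimension, how many papers fall into each category.

    A paper may contribute to several categories of one dimension,
    so a dimension's totals can exceed the number of papers.
    """
    totals = {}
    for dims in papers.values():
        for dim, categories in dims.items():
            totals.setdefault(dim, Counter()).update(categories)
    return totals

totals = column_totals(classification)
print(totals["data gathering"]["Survey"])  # prints 2: both sample papers use a survey
```

Note that the tally deliberately counts category memberships rather than papers, which mirrors how the column totals of a multi-category classification table are formed.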
Classification of non-empirical papers
All 13 non-empirical papers were classified along three dimensions: methodological type, theoretical foundation, and object of analysis. The classified papers are: Bélanger et al. (2006), Briggs et al. (2003), Chae (2007), DeLone and McLean (2003; 2004), Hu (2003), Jennex and Olfman (2003; 2004), Kleist et al. (2004), Seen et al. (2007), Sugumaran and Arogyaswamy (2003), Thomas (2006), and Yusof et al. (2006).

The column totals are as follows; since a paper can build on several theoretical foundations, that dimension’s totals may exceed the number of papers (n = 13).

Dimension | Category: number of papers
---|---
Methodological type | Framework/conceptual model: 11; speculation/commentary: 2; literature analysis: 0; other: 0
Theoretical foundation | DeLone and McLean (1992): 6; DeLone and McLean (2003): 4; Davis (1989): 0; Seddon (1997): 0; other: 5; n/a: 2
Object of analysis | Single IT application: 1; type of IT or IT application: 6; all IT applications: 1; IT function of an organization: 0; n/a: 4
Urbach, N., Smolnik, S. & Riempp, G. The State of Research on Information Systems Success. Bus. Inf. Syst. Eng. 1, 315–325 (2009). https://doi.org/10.1007/s12599-009-0059-y