Abstract
To objectively evaluate the success of alterations to existing Business Intelligence (BI) environments, we need a way to compare measures from altered and unaltered versions of applications. The focus of this paper is on producing an evaluation tool which can be used to measure the success of amendments or updates made to existing BI solutions to support improved BI reporting. We define what we understand by success in this context, we elicit appropriate clusters of measurements together with the factors to be used for measuring success, and we develop an evaluation tool to be used by relevant stakeholders to measure success. We validate the evaluation tool with relevant domain experts and key users and make suggestions for future work.
Keywords
- Business intelligence
- Measuring success
- User satisfaction
- Technical functionality
- Reports
1 Introduction
Improved decision-making, increased profit and market efficiency, and reduced costs are some of the potential benefits of improving existing analytical applications, such as Business Intelligence (BI), within an organisation. However, to measure the success of changes to existing applications, it is necessary to evaluate the changes and compare satisfaction measures for the original and the amended versions of that application. The focus of this paper is on measuring the success of changes made to BI systems from a reporting perspective. The aims of this paper are: (i) to define what we understand by success in this context; (ii) to contribute to knowledge by defining criteria to be used for measuring the success of BI improvements to enable more optimal reporting; and (iii) to develop an evaluation tool to be used by relevant stakeholders to measure success. The paper is structured as follows: in Sect. 2 we discuss BI, BI reporting and measuring success in BI. Section 3 reviews measurement in BI, looking at end user satisfaction and technical functionality. Section 4 discusses the development of the evaluation tool and Sect. 5 presents conclusions and recommendations for future work.
2 Measuring Changes to BI Reporting Processes
2.1 Business Intelligence
BI is seen as providing competitive advantage [1–5] and as essential for strategic decision-making [6] and business analysis [7]. There is a range of definitions of BI: some focus primarily on the goals of BI [8–10], others additionally discuss the structures and processes of BI [3, 11–15], and others see BI more as an umbrella term which should be understood to include all the elements that make up the BI environment [16]. In this paper, we understand BI as a term which includes the strategies, processes, applications, data, products, technologies and technical architectures used to support the collection, analysis, presentation and dissemination of business information. The focus in this paper is on the reporting layer. In the BI environment, data presentation and visualisation happen at the reporting layer through the use of BI reports, dashboards or queries. The reporting layer is one of the core concepts underlying BI [14, 17–25]. It provides users with meaningful operational data [26], which may be predefined queries in the form of standard reports or user-defined reports based on self-service BI [27]. There is constant management pressure to justify the contribution of BI [28], and this leads in turn to a demand for data about the role and uses of BI. As enterprises need fast and accurate assessment of market needs, and quick decision-making offers competitive advantage, reporting and analytical support becomes critical for enterprises [29].
2.2 Measuring Success in BI
Many organisations struggle to define and measure BI success as there are numerous critical factors to be considered, such as BI capability, data quality, integration with other systems, flexibility, user access and risk management support [30]. In this paper, we adopt an existing approach which defines BI success as “[the] positive benefits organisations could achieve by applying proposed modification in their BI environment” [30], and adapt it to consider BI reporting changes to be successful only if the changes provide or improve a positive experience for users.
DeLone and McLean proposed the well-known D&M IS Success Model to measure Information Systems (IS) success [31]. The D&M model was based on a comprehensive literature survey but was not empirically tested [32]. In their initial model, which was later slightly amended [33, 34], DeLone and McLean aimed to synthesize previous research on IS success into coherent clusters. The D&M model, which is widely accepted, considers the dimensions of information quality, system quality, use, user satisfaction, and individual and organisational impact as relevant to IS success. The most recent D&M model provides a list of IS success variable categories, identifying examples of key measures to be used in each category [34]. For example: the category system quality could use measurements such as ease of use, system flexibility, system reliability, ease of learning and response time; information quality could use measurements such as relevance, intelligibility, accuracy, usability and completeness; service quality could use measurements such as responsiveness, accuracy, reliability and technical competence; system use could use measurements such as amount, frequency, nature, extent and purpose of use; user satisfaction could be measured by a single item or via multi-attribute scales; and net benefits could be measured through increased sales, cost reductions or improved productivity. The intention of the D&M model was to cover all possible IS success variables. In the context of this paper, the first question that arises is which factors from those dimensions (IS success variables) can be used as measures of success for BI projects. Can the examples of key measures proposed by DeLone and McLean [33] as standard critical success factors (CSFs) be used to measure the success of system changes relevant to BI reporting? As BI is a branch of IS science, the logical answer seems to be yes. However, to identify appropriate IS success variables from the D&M model and associated CSFs, we have to focus on the activities, phases and processes relevant for BI.
3 Measurements Relevant to Improve and Manage Existing BI Processes
Measuring business performance has a long tradition in companies, and in the case of BI it can be useful for activities such as determining the actual value of BI to a company or improving and managing existing BI processes [10]. Lönnqvist and Pirttimäki propose four phases to be considered when measuring the performance of BI: (1) identification of information needs, (2) information acquisition, (3) information analysis and (4) storage and information utilisation [10]. The first phase covers activities related to discovering the business information needed to resolve problems, the second the acquisition of data from heterogeneous sources, and the third the analysis of acquired data and its packaging into information products [10]. The focus of this paper is on measuring the impact of BI system changes on BI reporting processes, meaning that the first three phases are outside the scope of the paper. Before decision makers can properly utilise information through reporting processes, it has to be communicated to them adequately and in a timely manner, making the fourth phase, storage and information utilisation, relevant for this paper.
Storage and information utilisation covers how to store, retrieve and share knowledge and information in the most optimal way, with business and other users, by using different BI applications, such as queries, reports and dashboards. Thus, it covers two clusters of measurements we identified as relevant: (i) business/end-users satisfaction, and (ii) technical functionality.
3.1 Business/End Users Satisfaction
User satisfaction is recognised as a critical measure of the success of IS [31, 33–42]. User satisfaction has been seen as a surrogate measure of IS effectiveness [43] and is one of the most extensively used measures for the evaluation of IS success [28]. Data Warehouse (DW) performance must be acceptable to the end user community [42]. Consequently, the performance of BI reporting solutions, such as reports and dashboards, needs to meet this criterion.
Doll and Torkzadeh defined user satisfaction as “an affective attitude towards a specific computer application by someone who interacts with the application directly” [38]. For example, by positively influencing the end user experience, such as improving productivity or facilitating easier decision making, IS can increase user satisfaction; conversely, by negatively influencing the end user experience, IS can lower user satisfaction. User satisfaction can be seen as the sum of the feelings or attitudes of a user toward a number of factors relevant to a specific situation [36].
We identified user satisfaction as one cluster of measurements that should be considered in relation to the success of BI reporting systems; however, it is important to define what is meant by user in this context. Davis and Olson distinguished between two user groups: users making decisions based on the output of the system, and users entering information and preparing system reports [44]. According to Doll and Torkzadeh [38], end-user satisfaction in computing can be evaluated in terms of both the primary and secondary user roles; thus, they merge the two groups defined by Davis and Olson into one.
We analysed the relevant user roles in eight large companies which utilise BI and identified two different user roles that actually use reports to make business decisions or to carry out operational and everyday activities: Management and Business Users. These roles are very similar to the groups defined by Davis and Olson. Management uses reports and dashboards to make decisions at enterprise level. Business users use reports and dashboards to make decisions at lower levels, such as departments or cost centres, and to carry out operational and everyday activities, such as controlling or planning. Business users are expected to control the content of the reports and dashboards and to request changes or corrections if needed. They also communicate Management requirements to technical personnel, and should participate in BI Competency Centre (BICC) activities. Business users can also have a more technical role. In this paper, we are interested in measuring user satisfaction in relation to Business users.
Measuring User Satisfaction.
Doll and Torkzadeh developed a widely used model to measure End-User Computing Satisfaction (EUCS) that covers the key factors of the user perspective [38, 40]. The EUCS model includes twelve attributes, in the form of questions, covering five aspects: content, accuracy, format, ease of use and timeliness. The model is well validated and has been found to be generalizable across several IS applications; however, it has not been validated with users of BI [40].
Petter et al. [34] provide several examples of measuring user satisfaction as a part of IS success based on the D&M IS Success Model [34]. According to them, we can use single items to measure user satisfaction, semantic differential scales to assess attitudes and satisfaction with the system, or multi-attribute scales to measure user information satisfaction. However, we face three issues when considering this approach in the context of evaluating user satisfaction with changes to BI reporting systems. The first is that the discussion is about methods of measuring rather than relevant measurements. The second is that the approach is designed for IS rather than the narrower spectrum of BI. The third is that the approach does not identify explicit measurements to be used to validate success when changes are made to BI reporting systems. Considering the D&M model in the context of this paper, we identify ease of use and flexibility as the measures of system quality possibly relevant when measuring user satisfaction.
In the Data Warehouse Balanced Scorecard Model (DWBSM), the user perspective, based on user satisfaction with data quality and query performance, is defined as one of four aspects used when measuring the success of the DW [42]. The DWBSM considers data quality, average query response time, data freshness and timeliness of information per service level agreement as key factors in determining user satisfaction. As DWs are at the heart of BI systems [1, 47], these factors are relevant to evaluating the success of changes to BI reporting, but they are not comprehensive enough as they cover only one part of a BI system.
To develop a model for the measurement of success in changes to BI reporting systems, we combined elements from different approaches, cross-tabulating the aspects and attributes of the EUCS model with the phases to be considered when measuring the performance of BI, discussed in Sect. 3. Table 1 shows the initial results of the cross-tabulation, with areas of intersection marked with ‘x’, where each number represents a phase to be considered when measuring the performance of BI, as proposed by Lönnqvist and Pirttimäki. The questions shown in Table 1 were later modified following feedback, as discussed in Sect. 4.
As discussed in Sect. 3, only the storage and information utilisation phase (marked with number 4 in Table 1) from the Lönnqvist and Pirttimäki approach is relevant when measuring the success of changes to BI reporting systems to enable more optimal reporting. Based on the analysis given in Table 1, it is possible to extract a list of attributes (questions) to be used as user satisfaction measurements. We extracted eight key measures and modified these for use in the BI context. The elements identified from the EUCS model were extended with three additional questions related to changing the descriptive content (CDS) of BI reports. The descriptive content of reports can include, but is not limited to, descriptions of categories, hierarchies or attributes, such as product, customer or location name descriptions. The most common cause of such requests for changes to descriptive content is errors in the descriptions, and CDS issues are common with large and rapidly changing dimensions [47].
Table 2 presents the questions developed from these measures, which were later revised following feedback during the initial phase of validation.
The design of the questions supports both an interview-based approach and a quantitative, survey-based approach. However, using user satisfaction criteria alone is not sufficient to measure the success of modifications to reporting systems.
3.2 Technical Functionality
In Sect. 3, we identified technical functionality as the second cluster of measurements that needs to be considered when measuring the success of changes to BI reporting systems. To initiate and manage improvement activities for specific software solutions, it has been suggested that there should be sequential measurements of the quality attributes of the product or process [48].
Measuring Technical Functionality.
In the DWBSM approach, the following technical key factors are identified: ETL code performance, batch cycle runtimes, reporting and BI query runtime, agile development, testing and flawless deployment into the production environment [42]. We identify reporting and BI query runtime as relevant in the context of BI reporting. From the D&M IS success model, we extract the response time measure from the system quality cluster of IS success variables. Reporting and BI query runtime and response time both belong to the time category, although they are named differently. However, to measure the technical success of modifications to BI reporting solutions, it is not enough simply to measure time. We need to clearly define and extract each relevant BI technical element, in the time category and in other technical categories, that should be evaluated. Table 3 shows the extracted time elements and includes elements related to memory use and technical scalability.
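As an illustration of how the time and memory elements in Table 3 could be recorded in practice, the following is a minimal Python sketch; the query function, repetition count and reported statistics are illustrative assumptions rather than part of the validated tool.

```python
import time
import tracemalloc
import statistics

def run_report_query():
    """Placeholder for refreshing a BI report or executing a BI query.
    In practice this would call the reporting tool's API or a SQL client."""
    return sum(range(100_000))  # dummy workload standing in for a report refresh

def measure_runtime_and_memory(query, repetitions=5):
    """Record execution time (seconds) and peak memory (bytes) over several runs."""
    runtimes, peaks = [], []
    for _ in range(repetitions):
        tracemalloc.start()
        start = time.perf_counter()
        query()
        runtimes.append(time.perf_counter() - start)
        _, peak = tracemalloc.get_traced_memory()  # (current, peak) traced allocations
        peaks.append(peak)
        tracemalloc.stop()
    return {
        "median_runtime_s": statistics.median(runtimes),
        "median_peak_memory_bytes": statistics.median(peaks),
    }

if __name__ == "__main__":
    # Stage (i): record in the existing BI environment; repeat as stage (ii) after the change.
    print(measure_runtime_and_memory(run_report_query))
```

The same measurements would then be repeated in the modified environment so that the two sets of values can be compared.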
4 Producing an Evaluation Tool to Measure Success of Changing BI Environment
As discussed in Sect. 3, we elicited two clusters of measurements for use when evaluating the success of changes to BI reporting systems. The measurements identified in the user satisfaction and technical functionality clusters are intended to be recorded at two stages: (i) in the existing BI environment, before implementing any changes, and (ii) in the new environment, after modification of the existing BI system. The values from the two stages can then be compared to evaluate the success of changes to the BI reporting system.
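A minimal sketch of this two-stage comparison is shown below; the factor names and scores are hypothetical and only illustrate the principle of comparing values recorded before and after the change.

```python
# Hypothetical factor values recorded in stage (i) and stage (ii); names are illustrative.
before = {"ease_of_use": 3.2, "content_accuracy": 3.9, "report_runtime_s": 14.8}
after = {"ease_of_use": 4.1, "content_accuracy": 4.0, "report_runtime_s": 9.3}

# Factors where a lower value is better (e.g. runtimes); all others improve as they increase.
lower_is_better = {"report_runtime_s"}

def evaluate_change(before, after, lower_is_better):
    """Return per-factor deltas, signed so that a positive value indicates improvement."""
    deltas = {}
    for factor, old in before.items():
        new = after[factor]
        deltas[factor] = round(old - new if factor in lower_is_better else new - old, 2)
    return deltas

print(evaluate_change(before, after, lower_is_better))
# {'ease_of_use': 0.9, 'content_accuracy': 0.1, 'report_runtime_s': 5.5}
```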
To produce a tool for use by relevant stakeholders, we merged both clusters of measurements into one and developed a questionnaire-like evaluation tool. We conducted a pilot survey with 10 BI domain experts and report users. Based on the responses received, the questions shown in Table 2 were amended: questions 2 and 3 were merged, questions 5 and 6 were amended, and question 9 was removed as redundant. We also added one question identified as highly important by business users, relating to the exporting and sharing of content. We added one additional technical question, relating to the speed of execution when drilling down, conditioning, and removing or adding columns in reports. The final list of factors is shown in Table 4.
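Purely as an illustration of the shape of such a questionnaire-like instrument, the sketch below groups factors into the two clusters; the factor texts are placeholders, not the validated wording of Table 4.

```python
# Illustrative structure of the merged evaluation instrument; factor texts are placeholders.
evaluation_tool = {
    "user_satisfaction": [
        "Reports provide the precise content needed",
        "Report content is accurate and up to date",
        "Exporting and sharing of report content is supported",  # area added after the pilot survey
    ],
    "technical_functionality": [
        "Report/query execution time",
        "Execution time when drilling down, conditioning, or adding/removing columns",  # added after the pilot
        "Memory use and technical scalability",
    ],
}

# Each factor is scored on a 1-5 scale in both the existing and the modified environment.
for cluster, factors in evaluation_tool.items():
    print(cluster, "-", len(factors), "factors")
```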
We validated the proposed factors by carrying out a survey with 30 key users working in the BI field. All users were asked to complete the user satisfaction element of the survey. However, the technical functionality factors are arguably comprehensible and relevant only to technical users; thus, answering this part of the survey was optional and dependent on the respondent’s expertise.
As we had a series of questions and statements which needed to be validated, a Likert scale [45] was used, scoring each factor on a scale of 1–5 (where 1 is least important and 5 is most important). In the original Likert scale approach, responses are combined to create an attitudinal measurement scale, and data analysis is performed on the composite score from those responses [46]. However, our intention was to score each individual question or statement separately and to examine the views of users regarding each separate factor. We therefore used the concept of Likert-type items, which supports using multiple questions as part of the research instrument but without combining the responses into composite values [46, 49]. Likert-type items fall into the ordinal measurement scale; thus the mode or median is recommended to measure central tendency [46]. The results of our survey are presented in Table 4, grouped into the two clusters of measurements, user satisfaction and technical functionality, each containing individual factors.
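The per-item analysis can be sketched as follows; the item wordings and response values are hypothetical, and the point is simply that each Likert-type item is summarised separately by its mode and median rather than by a composite score.

```python
from statistics import median, multimode

# Hypothetical 1-5 responses to two Likert-type items from the validation survey.
responses = {
    "Report content meets my needs": [5, 4, 4, 5, 3, 4, 5, 4],
    "Reports are available when needed": [4, 4, 3, 5, 4, 4, 2, 5],
}

for item, scores in responses.items():
    # Ordinal data: report mode and median as measures of central tendency (no mean).
    print(f"{item}: mode={multimode(scores)}, median={median(scores)}")
```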
As Table 4 shows, no question relevant to user satisfaction had a mode or median of less than 4, indicating that each question was considered important. No technical factor had a mode or median of less than 3, showing a strong tendency towards considering each technical factor important. As expected, a larger percentage of users with a more technical role commented on technical aspects than users with a more business-oriented role. Users with a more business-oriented role rated the user satisfaction questions as more important than users with a more technical role did, while users with a more technical role rated the technical functionality factors as more important.
A free-text question allowed survey respondents to suggest additional factors. This identified two additional questions that could be relevant to the measurement of user satisfaction:
- Description of the key figures is available, sufficient and easily accessible via BI reports?
- Functionality allowing further consolidation of existing information is available in BI reports?
It also elicited one additional factor that could be used to measure technical functionality:
- How platform independent are BI reports (able to run on any PC, OS, laptop or mobile device)?
However, these three additional factors were not validated in the same way as the factors listed in Table 4; thus, we do not include them and propose Table 4 as the core evaluation tool. An advantage of the approach is that the tool can be customised and additional factors added by stakeholders, meaning that the additional features identified in the survey could be added by users if required.
The proposed tool is limited to the reporting aspect of BI and to the business user group. A possible extension would be to consider the views of other user groups, such as conceptual or organisational users. The tool focuses on changes to support BI reporting and is not suitable for measuring the success of changes relating to data warehousing, data acquisition or data modelling. The tool would also be easier to use if provided as a web-based tool.
The tool discussed in this paper provides a mechanism for measuring the success of changes made to reporting in BI systems. Its use could be extended beyond the evaluation of changes to BI reporting systems: it could serve as a general benchmarking tool when evaluating different BI software from the reporting perspective. For example, business users, and especially key BI users, could use the proposed tool to benchmark and select the most suitable existing BI software for implementation in their organisation. The approach used here could also be extended to other elements, such as the impact of changes to data warehousing, data acquisition or data modelling processes.
5 Conclusions and Future Work
The focus of this paper was on measuring the success of new approaches to changing and improving existing BI solutions to enable more optimal BI reporting. We discussed BI and defined what we understand by success in terms of changes to BI reporting, elicited appropriate clusters of measurements, including the criteria to be used for measuring such success, and developed an evaluation tool to be used by relevant stakeholders to measure success. Finally, using a preliminary and a further survey, we validated our findings with relevant domain experts and key users. Future work will consist of using the evaluation tool in a real-world environment to measure success when amending BI systems to improve BI reporting. This will allow evaluation of the tool on a case study basis.
References
Olszak, C.M., Ziemba, E.: Business intelligence systems in the holistic infrastructure development supporting decision-making in organisations. Interdiscip. J. Inf. Knowl. Manag. 1, 47–58 (2006)
Marchand, M., Raymond, L.: Researching performance measurement systems: an information systems perspective. Int. J. Oper. Prod. Manag. 28(7), 663–686 (2008)
Brannon, N.: Business intelligence and e-discovery. Intellect. Prop. Technol. Law J. 22(7), 1–5 (2010)
Alexander, A.: Case studies: business intelligence. Account. Today 28(6), 32 (2014)
Thamir, A., Poulis, E.: Business intelligence capabilities and implementation strategies. Int. J. Glob. Bus. 8(1), 34–45 (2015)
Popovič, A., Turk, T., Jaklič, J.: Conceptual model of business value of business intelligence systems. Manag.: J. Contemp. Manag. 15(1), 5–29 (2010)
Kurniawan, Y., Gunawan, A., Kurnia, S.G.: Application of business intelligence to support marketing strategies: a case study approach. J. Theor. Appl. Inf. Technol. 64(1), 214 (2014)
Luhn, H.P.: A business intelligence system. IBM J. Res. Dev. 2(4), 314–319 (1958)
Power, D.J.: Decision Support Systems: Concepts and Resources for Managers. Greenwood Publishing Group, Westport (2002)
Lönnqvist, A., Pirttimäki, V.: The measurement of business intelligence. Inf. Syst. Manag. 23(1), 32–40 (2006)
Moss, L.T., Atre, S.: Business Intelligence Roadmap: The Complete Project Lifecycle for Decision-support Applications. Addison-Wesley Professional, Boston (2003)
Golfarelli, M., Rizzi, S., Cella, I.: Beyond data warehousing: what’s next in business intelligence? In: Proceedings of the 7th ACM International Workshop on Data Warehousing and OLAP, pp. 1–6. ACM Press, New York (2004)
Dekkers, J., Versendaal, J., Batenburg, R.: Organising for business intelligence: a framework for aligning the use and development of information. In: BLED 2007 Proceedings, Bled, pp. 625–636 (2007)
Kimball, R., Ross, M., Thornthwaite, W., Mundy, J., Becker, B.: The Data Warehouse Lifecycle Toolkit, 2nd edn. Wiley, Indianapolis (2008)
Jamaludin, I.A., Mansor, Z.: Review on business intelligence “BI” success determinants in project implementation. Int. J. Comput. Appl. 33(8), 24–27 (2011)
Turban, E., Sharda, R., Delen, D., King, D.: Business Intelligence: A Managerial Approach, 2nd edn. Prentice Hall, Upper Saddle River (2010)
Inmon, B.W.: Building the Data Warehouse, 4th edn. Wiley, Indianapolis (2005)
Watson, H.J., Wixom, B.H.: The current state of business intelligence. Computer 40(9), 96–99 (2007)
Baars, H., Kemper, H.-G.: Management support with structured and unstructured data—an integrated business intelligence framework. Inf. Syst. Manag. 25(2), 132–148 (2008)
Ranjan, J.: Business intelligence: concepts, components, techniques and benefits. J. Theor. Appl. Inf. Technol. 9(1), 60–70 (2009)
Gluchowski, P., Kemper, H.-G.: Quo vadis business intelligence? BI-Spektrum 1, 12–19 (2006)
Chu, T.-H.: A framework for BI systems implementation in manufacturing. Int. J. Electron. Bus. Manag. 11(2), 113–120 (2013)
Anadiotis, G.: Agile business intelligence: reshaping the landscape, p. 3 (2013)
Obeidat, M., North, M., Richardson, R., Rattanak, V., North, S.: Business intelligence technology, applications, and trends. Int. Manag. Rev. 11(2), 47–56 (2015)
Imhoff, C., Galemmo, N., Geiger, J.G.: Mastering Data Warehouse Design: Relational and Dimensional Techniques. Wiley Publishing, Inc., Indianapolis (2003)
Mykitychyn, M.: Assessing the maturity of information architectures for complex dynamic enterprise systems. Georgia Institute of Technology (2007)
Rajesh, R.: Supply Chain Management for Retailing. Tata McGraw-Hill Education, Kalkota (2010)
Sedera, D., Tan, F.T.C.: User satisfaction: an overarching measure of enterprise system success. In: PACIS 2005 Proceedings, vol. 2, pp. 963–976 (2005)
Olszak, C.M., Ziemba, E.: Critical success factors for implementing business intelligence systems in small and medium enterprises on the example of Upper Silesia, Poland. Interdiscip. J. Inf. Knowl. Manag. 7(2012), 129 (2012)
Işik, Ö., Jones, M.C., Sidorova, A.: Business intelligence success: the roles of BI capabilities and decision environments. Inf. Manag. 50(1), 13–23 (2013)
DeLone, W.H., McLean, E.R.: Information systems success: the quest for the dependent variable. Inf. Syst. Res. 3(1), 60–95 (1992)
Sabherwal, R., Chowa, C.: Information system success: individual and organisational determinants. Manag. Sci. 52(12), 1849–1864 (2006)
DeLone, W.H., McLean, E.R.: The DeLone and McLean model of information systems success: a ten-year update. J. Manag. Inf. Syst. 19(4), 9–30 (2003)
Petter, S., DeLone, W., McLean, E.: Information systems success: the quest for the independent variables. J. Manag. Inf. Syst. 29(4), 7–61 (2013)
Powers, R.F., Dickson, G.W.: MIS project management: myths, opinions, and reality. Calif. Manag. Rev. 15(3), 147–156 (1973)
Bailey, J.E., Pearson, S.W.: Development of a tool for measuring and analyzing computer user satisfaction. Manag. Sci. 29(5), 530–545 (1983)
Ives, B., Olson, M., Baroudi, J.: The measurement of user information satisfaction. Commun. ACM 26(10), 785–793 (1983)
Doll, W.J., Torkzadeh, G.: The measurement of end-user computing satisfaction. MIS Q. 12(2), 259–274 (1988)
Davison, J., Deeks, D.: Measuring the potential success of information system implementation. Meas. Bus. Excell. 11(4), 75–81 (2007)
Chung-Kuang, H.: Examining the effect of user satisfaction on system usage and individual performance with business intelligence systems: an empirical study of Taiwan’s electronics industry. Int. J. Inf. Manag. 32(6), 560–573 (2012)
Dastgir, M., Mortezaie, A.S.: Factors affecting the end-user computing satisfaction. Bus. Intell. J. 5(2), 292–298 (2012)
Rahman, N.: Measuring performance for data warehouses-a balanced scorecard approach. Int. J. Comput. Inf. Technol. 4(2), 1–6 (2013)
Gatian, A.W.: Is user satisfaction a valid measure of system effectiveness? Inf. Manag. 26(3), 119–131 (1994)
Davis, G.B., Olson, M.H.: Management Information Systems: Conceptual Foundations, Structure, and Development, 2nd edn. McGraw-Hill, Inc., New York City (1985)
Likert, R.: A technique for the measurement of attitudes. Arch. Psychol. 22(140), 5–55 (1932)
Boone, H.N.J., Boone, D.: Analyzing Likert data. J. Ext. 50(2), 30 (2012)
Dedić, N., Stanier, C.: An evaluation of the challenges of multilingualism in data warehouse development. In: Proceedings of the 18th International Conference on Enterprise Information Systems (ICEIS 2016), Rome, Italy, pp. 196–206 (2016)
Florak, W.A., Park, R.E., Carleton, A.: Practical Software Measurement: Measuring for Process Management and Improvement, 1st edn. Software Engineering Institute, Carnegie Mellon University, Pittsburgh (1997)
Clason, D.L., Dormody, T.J.: Analyzing data measured by individual Likert-type items. J. Agric. Educ. 35(4), 31–35 (1994)