1 Introduction

Improved decision-making, increased profit and market efficiency, and reduced costs are some of the potential benefits of improving existing analytical applications, such as Business Intelligence (BI), within an organisation. However, to measure the success of changes to existing applications, it is necessary to evaluate the changes and compare satisfaction measures for the original and the amended versions of that application. The focus of this paper is on measuring the success of changes made to BI systems from a reporting perspective. The aims of this paper are: (i) to define what we understand by success in this context; (ii) to contribute to knowledge by defining criteria to be used for measuring the success of BI improvements that enable more optimal reporting; and (iii) to develop an evaluation tool to be used by relevant stakeholders to measure success. The paper is structured as follows: in Sect. 2 we discuss BI and BI reporting. Section 3 reviews measurement in BI, looking at end user satisfaction and technical functionality. Section 4 discusses the development of the evaluation tool, and Sect. 5 presents conclusions and recommendations for future work.

2 Measuring Changes to BI Reporting Processes

2.1 Business Intelligence

BI is seen as providing competitive advantage [1–5] and as essential for strategic decision-making [6] and business analysis [7]. There are a range of definitions of BI: some focus primarily on the goals of BI [8–10], others additionally discuss the structures and processes of BI [3, 11–15], and others see BI more as an umbrella term which should be understood to include all the elements that make up the BI environment [16]. In this paper, we understand BI as a term which includes the strategies, processes, applications, data, products, technologies and technical architectures used to support the collection, analysis, presentation and dissemination of business information. The focus in this paper is on the reporting layer. In the BI environment, data presentation and visualisation happen at the reporting layer through the use of BI reports, dashboards or queries. The reporting layer is one of the core concepts underlying BI [14, 17–25]. It provides users with meaningful operational data [26], which may be predefined queries in the form of standard reports or user-defined reports based on self-service BI [27]. There is constant management pressure to justify the contribution of BI [28], and this leads in turn to a demand for data about the role and uses of BI. As enterprises need fast and accurate assessment of market needs, and quick decision-making offers competitive advantage, reporting and analytical support becomes critical for enterprises [29].

2.2 Measuring Success in BI

Many organisations struggle to define and measure BI success as there are numerous critical factors to be considered, such as BI capability, data quality, integration with other systems, flexibility, user access and risk management support [30]. In this paper, we adopt an existing approach that defines success in BI as “[the] positive benefits organisations could achieve by applying proposed modification in their BI environment” [30], and adapt it to consider BI reporting changes to be successful only if the changes provide or improve a positive experience for users.

DeLone and McLean proposed the well-known D&M IS Success Model to measure Information Systems (IS) success [31]. The D&M model was based on a comprehensive literature survey but was not empirically tested [32]. In their initial model, which was later slightly amended [33, 34], DeLone and McLean sought to synthesize previous research on IS success into coherent clusters. The D&M model, which is widely accepted, considers the dimensions of information quality, system quality, use, user satisfaction, and individual and organisational impact as relevant to IS success. The most recent D&M model provides a list of IS success variable categories, identifying examples of key measures to be used in each category [34]. For example: the variable category system quality could use measurements such as ease of use, system flexibility, system reliability, ease of learning and response time; information quality could use measurements such as relevance, intelligibility, accuracy, usability and completeness; service quality could use measurements such as responsiveness, accuracy, reliability and technical competence; system use could use measurements such as amount, frequency, nature, extent and purpose of use; user satisfaction could be measured by a single item or via multi-attribute scales; and net benefits could be measured through increased sales, cost reductions or improved productivity. The intention of the D&M model was to cover all possible IS success variables. In the context of this paper, the first question that arises is which factors from those dimensions (IS success variables) can be used as measures of success for BI projects. Can the examples of key measures proposed by DeLone and McLean [33] as standard critical success factors (CSFs) be used to measure the success of system changes relevant to BI reporting? As BI is a branch of IS science, the logical answer seems to be yes. However, to identify appropriate IS success variables from the D&M model and associated CSFs, we have to focus on the activities, phases and processes relevant for BI.

3 Measurements Relevant to Improve and Manage Existing BI Processes

Measuring business performance has a long tradition in companies, and it can be useful in the case of BI for activities such as determining the actual value of BI to a company or improving and managing existing BI processes [10]. Lönnqvist and Pirttimäki propose four phases to be considered when measuring the performance of BI: (1) identification of information needs, (2) information acquisition, (3) information analysis, and (4) storage and information utilisation [10]. The first phase considers activities related to discovering the business information needed to resolve problems, the second the acquisition of data from heterogeneous sources, and the third the analysis of acquired data and its wrapping into information products [10]. The focus of this paper is on measuring the impact of BI system changes on BI reporting processes, meaning that the first three phases are outside the scope of the paper. Before decision makers can properly utilise information by applying reporting processes, it has to be communicated to the decision maker adequately and in a timely manner, making the fourth phase, namely storage and information utilisation, relevant for this paper.

Storage and information utilisation covers how to store, retrieve and share knowledge and information with business and other users in the most optimal way, using different BI applications such as queries, reports and dashboards. Thus, it covers the two clusters of measurements we identified as relevant: (i) business/end-user satisfaction, and (ii) technical functionality.

3.1 Business/End Users Satisfaction

User satisfaction is recognised as a critical measure of the success of IS [31, 33–42]. User satisfaction has been seen as a surrogate measure of IS effectiveness [43] and is one of the most extensively used aspects for the evaluation of IS success [28]. Data Warehouse (DW) performance must be acceptable to the end user community [42]. Consequently, the performance of BI reporting solutions, such as reports and dashboards, needs to meet this criterion.

Doll and Torkzadeh defined user satisfaction as “an affective attitude towards a specific computer application by someone who interacts with the application directly” [38]. For example, by positively influencing the end user experience, such as improving productivity or facilitating easier decision making, IS can cause a positive increment in user satisfaction. On the other hand, by negatively influencing the end user experience, IS can lead to lower user satisfaction. User satisfaction can be seen as the sum of the feelings or attitudes of a user toward a number of factors relevant for a specific situation [36].

We identified user satisfaction as one cluster of measurements that should be considered in relation to the success of BI reporting systems; however, it is important to define what is meant by user in this context. Davis and Olson distinguished between two user groups: users making decisions based on the output of the system, and users entering information and preparing system reports [44]. According to Doll and Torkzadeh [38], end-user satisfaction in computing can be evaluated in terms of both the primary and secondary user roles; thus, they merge the two groups defined by Davis and Olson into one.

We analysed the relevant user roles in eight large companies which utilise BI, and identified two different user roles that actually use reports to make business decisions or to carry out operational or everyday activities: Management and Business Users. These roles are very similar to the groups defined by Davis and Olson. Management uses reports and dashboards to make decisions at enterprise level. Business users use reports and dashboards to make decisions at lower levels, such as departments or cost centres, and to carry out operational and everyday activities, such as controlling or planning. Business users are expected to control the content of the reports and dashboards and to request changes or corrections if needed. They also communicate Management requirements to technical personnel, and should participate in BI Competency Centre (BICC) activities. Business users can also have a more technical role. In this paper, we are interested in measuring user satisfaction in relation to Business users.

Measuring User Satisfaction.

Doll and Torkzadeh developed a widely used model to measure End-User Computing Satisfaction (EUCS) that covers all key factors of the user perspective [38, 40]. The model comprises twelve attributes in the form of questions covering five aspects: content, accuracy, format, ease of use and timeliness. This model is well validated and has been found to be generalizable across several IS applications; however, it has not been validated with users of BI [40].

Petter et al. [34] provide several examples of measuring user satisfaction aspects as part of IS success based on the D&M IS Success Model [34]. According to them, we can use single items to measure user satisfaction, semantic differential scales to assess attitudes and satisfaction with the system, or multi-attribute scales to measure user information satisfaction. However, we face three issues when considering this approach in the context of evaluating user satisfaction with changes to BI reporting systems. The first is that the discussion concerns methods of measuring rather than the relevant measurements. The second is that the approach is designed for IS in general rather than the narrower spectrum of BI. The third is that the approach does not identify explicit measurements to be used to validate success when changes are made to BI reporting systems. Considering the D&M model in the context of this paper, we identify ease of use and flexibility as the measures of system quality potentially relevant when measuring user satisfaction.

In the Data Warehouse Balanced Scorecard Model (DWBSM), the user perspective, based on user satisfaction with data quality and query performance, is defined as one of four aspects to consider when measuring the success of the DW [42]. DWBSM considers data quality, average query response time, data freshness and timeliness of information per service level agreement as key factors in determining user satisfaction. As DWs are at the heart of BI systems [1, 47], those factors are relevant to evaluating the success of changes to BI reporting, but they are not comprehensive enough as they cover only one part of a BI system.

To develop a model for the measurement of success in changes to BI reporting systems, we combined elements from different approaches, cross-tabulating the aspects and attributes of the EUCS model with the phases to be considered when measuring the performance of BI discussed in Sect. 3. Table 1 shows the initial results of the cross-tabulation, with areas of intersection marked with ‘x’, where each number represents a phase to be considered when measuring the performance of BI as proposed by Lönnqvist and Pirttimäki. The questions shown in Table 1 were later modified following feedback, as discussed in Sect. 4.

Table 1. Cross-tabulation of EUCS attributes and phases of measuring BI performance

As discussed in Sect. 3, only the storage and information utilisation phase (marked with number 4 in Table 1) from the Lönnqvist and Pirttimäki approach is relevant when measuring the success of changes to BI reporting systems to enable more optimal reporting. Based on the analysis given in Table 1, it is possible to extract a list of attributes (questions) to be used as user satisfaction measurements. We extracted eight key measures and modified these for use in the BI context. The elements identified from the EUCS model were extended with three additional questions related to changing the descriptive content (CDS) of BI reports. The descriptive content of reports can include, but is not limited to, descriptions of categories, hierarchies or attributes, such as product, customer or location name descriptions. The most common cause of such requests for changes to descriptive content is errors in the descriptions, and CDS issues are common with large and rapidly changing dimensions [47].
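As an illustration only of how this extraction step might be operationalised, the sketch below represents a cross-tabulation of this kind as a simple data structure and filters it to the phase-4 intersections. The aspect names follow the EUCS model, but the example questions and phase assignments are hypothetical and do not reproduce Table 1.

```python
# Illustrative sketch: a possible representation of the EUCS-aspect x BI-phase
# cross-tabulation, used to pull out attributes whose intersection includes
# phase 4 (storage and information utilisation). Questions and phase
# assignments are hypothetical examples, not the content of Table 1.
CROSS_TAB = {
    # aspect: {question: set of BI performance phases it intersects with}
    "content":     {"Do the reports provide the precise content you need?": {1, 4}},
    "accuracy":    {"Is the content of the reports accurate?": {4}},
    "format":      {"Are the reports presented in a useful format?": {4}},
    "ease of use": {"Are the reports easy to use?": {4}},
    "timeliness":  {"Do you get the report information in time?": {3, 4}},
}

def questions_for_phase(phase: int = 4) -> list[str]:
    """Collect the questions whose phase set includes the given phase."""
    return [
        question
        for questions in CROSS_TAB.values()
        for question, phases in questions.items()
        if phase in phases
    ]

if __name__ == "__main__":
    for q in questions_for_phase(4):
        print(q)
```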

Table 2 presents the questions developed from these measures, which were later revised following feedback during the initial phase of validation.

Table 2. User satisfaction questions to measure success of improving existing BI system

The design of the questions supports both an interview-based approach and a quantitative survey-based approach. However, using only user satisfaction criteria is not sufficient to measure the success of modifications to reporting systems.

3.2 Technical Functionality

In Sect. 3, we identified technical functionality as the second cluster of measurements that needs to be considered when measuring the success of changes to BI reporting systems. To initiate and manage improvement activities for specific software solutions, it has been suggested that there should be sequential measurements of the quality attributes of the product or process [48].

Measuring Technical Functionality.

In the DWBSM approach, the following technical key factors are identified: ETL code performance, batch cycle runtimes, reporting & BI query runtime, agile development, testing and flawless deployment into the production environment [42]. We identify reporting & BI query runtime as relevant in the context of BI reporting. From the D&M IS success model, we extract the response time measure from the system quality cluster of IS success variables. Reporting & BI query runtime and response time both belong to the time category, although they are named differently. However, to measure the technical success of modifications to BI reporting solutions, it is not enough to measure time alone. We need a clear definition and extraction of each relevant BI technical element belonging to the time and other technical categories that should be evaluated. Table 3 shows the extracted time elements and includes elements related to memory use and technical scalability.

Table 3. Technical measurements of success to improve existing BI system
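As a minimal sketch of how time and memory elements of this kind might be captured in practice, assuming a Python-based test harness, the example below times a single report execution and records client-side peak memory. The function `run_report_query` is a hypothetical placeholder for whatever platform-specific call executes a report or query; server-side memory use and scalability figures would normally come from the BI platform's own monitoring tools rather than from this kind of client-side measurement.

```python
# Illustrative sketch: capturing report/query runtime and (client-side) peak
# memory use before and after a change to the BI reporting system.
# `run_report_query` is a hypothetical placeholder, not a real BI API.
import time
import tracemalloc

def measure_report_execution(run_report_query, *args, **kwargs):
    """Return (elapsed_seconds, peak_python_memory_bytes) for one execution."""
    tracemalloc.start()
    start = time.perf_counter()
    run_report_query(*args, **kwargs)            # execute the report or query
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()    # (current, peak) in bytes
    tracemalloc.stop()
    return elapsed, peak
```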

4 Producing an Evaluation Tool to Measure Success of Changing BI Environment

As discussed in Sect. 3, we elicited two clusters of measurements for use when evaluating the success of changes to BI reporting systems. The measurements identified in the user satisfaction and technical functionality clusters are intended to be recorded at two stages: (i) in the existing BI environment, before implementing any changes, and (ii) in the new environment, after modification of the existing BI system. By comparing the values from the two stages, the success of changes to the BI reporting system can then be evaluated.
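A minimal sketch of this two-stage comparison, under the assumption that each factor is recorded as a single numeric value per stage, is shown below. The factor names and values are hypothetical, and whether an increase or a decrease counts as an improvement depends on the individual factor (for example, higher satisfaction scores versus lower runtimes).

```python
# Illustrative sketch: comparing measurement values recorded in the existing
# BI environment (stage i) and after modification (stage ii). Factor names
# and values are hypothetical, not taken from the evaluation tool itself.

def compare_stages(before: dict[str, float], after: dict[str, float]) -> dict[str, float]:
    """Return the per-factor change (after - before) for factors recorded in both stages."""
    return {factor: after[factor] - before[factor]
            for factor in before if factor in after}

before = {"ease of use (1-5)": 3.0, "report runtime (s)": 12.4}  # existing environment
after  = {"ease of use (1-5)": 4.0, "report runtime (s)": 7.9}   # modified environment

for factor, delta in compare_stages(before, after).items():
    print(f"{factor}: change = {delta:+.1f}")
```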

To produce a tool for use by relevant stakeholders, we merged both clusters of measurements into one and developed a questionnaire-like evaluation tool. We conducted a pilot survey with 10 BI domain experts and report users. Based on the responses received, the questions shown in Table 2 were amended: questions 2 and 3 were merged, questions 5 and 6 were amended, and question 9 was removed as surplus. We also added one question identified as highly important by business users, relating to the functionality for exporting and sharing content, and one additional technical question, relating to the speed of execution when drilling down, conditioning, or removing or adding columns in reports. The final list of factors is shown in Table 4.

Table 4. Survey results based on Likert-type items

We validated the proposed factors by carrying out a survey with 30 key users working in the BI field. All users were asked to complete the user satisfaction element of the survey. However, the technical functionality factors are arguably comprehensible and relevant only for technical users; thus, answering this part of the survey was optional and dependent on the respondent’s expertise.

As we had a series of questions and statements which needed to be validated, a Likert scale [45] was used, scoring each factor on a scale of 1–5 (where 1 is least important and 5 is most important). In the original Likert scale approach, responses are combined to create an attitudinal measurement scale, and data analysis is performed on the composite score from those responses [46]. However, our intention was to score each individual question or statement separately and to examine the views of users regarding each separate factor. We therefore used the concept of Likert-type items, which supports using multiple questions as part of the research instrument without combining the responses into composite values [46, 49]. Likert-type items fall into the ordinal measurement scale; thus the mode or median is recommended as the measure of central tendency [46]. The results of our survey are presented in Table 4, grouped into the two clusters of measurements, namely user satisfaction and technical functionality, where each contains individual factors.
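The following sketch illustrates the kind of per-item analysis described above, computing the mode and median of Likert-type responses without combining items into a composite score. The item wordings and response values are hypothetical and are not the survey data reported in Table 4.

```python
# Illustrative sketch: per-item analysis of Likert-type items (ordinal 1-5
# scale). Each item is analysed separately via mode and median; the responses
# below are hypothetical examples only.
from statistics import median, multimode

responses = {
    "Content of the reports is sufficient": [5, 4, 4, 5, 3, 4],
    "Reports are easy to use":              [4, 4, 5, 4, 4, 3],
    "Report execution time is acceptable":  [3, 4, 3, 5, 4, 4],
}

for item, scores in responses.items():
    print(f"{item}: mode={multimode(scores)}, median={median(scores)}")
```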

As we see from Table 4, no question relevant to user satisfaction had a mode or median of less than 4, indicating that each question was considered important. No technical factor had a mode or median of less than 3, showing a strong tendency towards considering each technical factor important. As expected, a larger percentage of users with a greater technical role commented on the technical aspects than users with a greater business orientation. Users with a greater business orientation rated the user satisfaction questions as more important than users with a greater technical role did, while users with a greater technical role rated the technical functionality factors more highly.

A free text question allowed survey respondents to suggest additional factors and this identified two additional questions that could be relevant to the measurement of user satisfaction:

  • Is a description of the key figures available, sufficient and easily accessible via BI reports?

  • Is functionality allowing further consolidation of existing information available in BI reports?

It also elicited one additional factor that could be used to measure technical satisfaction:

  • How platform-independent are BI reports (i.e. able to run on any PC, laptop, operating system or mobile device)?

However, those three additional factors were not validated in the same way as the factors listed in Table 4, thus, we do not include them and propose Table 4 as the core evaluation tool. An advantage of the approach is that the tool can be customised and additional factors added by stakeholders, meaning that the additional features identified in the survey could be added by users if required.

The proposed tool is limited to the reporting aspect of BI and to the business user group. A possible extension would be to consider the views of other user groups, such as conceptual or organisational user groups. The tool focuses on changes that support BI reporting and is not suitable for measuring the success of changes relating to data warehousing, data acquisition or data modelling. The tool would also be easier to use if provided as a web-based tool.

The tool discussed in this paper provides a mechanism for measuring the success of changes made to reporting in BI systems. Its use could be extended beyond the evaluation of changes to BI reporting systems; for example, it could serve as a general benchmarking tool when evaluating different BI software from the reporting perspective. Business users, and especially key BI users, could use the proposed tool to benchmark and select the most suitable existing BI software for implementation in their organisation. The approach used here could also be extended to other elements, such as the impact of changes to data warehousing, data acquisition or data modelling processes.

5 Conclusions and Future Work

The focus of this paper was on measuring the success of new approaches to changing and improving existing BI solutions to enable more optimal BI reporting. We explained BI and defined what we understand by success in terms of changes to BI reporting, elicited appropriate clusters of criteria to be used for measuring such success, and developed an evaluation tool to be used by relevant stakeholders to measure success. Finally, using a preliminary and a further survey, we validated our findings with relevant domain experts and key users. Future work will consist of using the evaluation tool in a real-world environment to measure success when amending BI systems to improve BI reporting. This will allow evaluation of the tool on a case study basis.