1 Introduction

Information and Communication Technology (ICT) is nowadays commonly used to support the execution of States’ multiple governance activities in their different aspects – Politics, Administration and Society. Each of these aspects, and the relationships between them, is a fundamental space where ICT can be used as a facilitator and catalyst to promote the reform, transformation and modernization of the overall State governance activity [1, 2]. This use of ICT, and particularly the Internet, as a tool to achieve better governance is what is currently called electronic governance (EGOV).

EGOV is advocated as offering a set of benefits: contributing to more efficient, effective and transparent public institutions, improving service delivery, and fostering a more participatory and engaged society [3]. EGOV benefits, although commonly accepted by policy-makers [4], must be measured and assessed to ensure government accountability [5, 6, 7] and to support state functions and services [8]. Measuring and assessing are also relevant activities for analysing the state of EGOV development and for informing strategy and policy formulation [9]. To this end, governments have been measuring, assessing and evaluating their EGOV initiatives [10], following the premise of “measuring for management and improvement” [11] and guided by questions such as “are we doing it right?” and “what are others doing?”.

Many tools and instruments for measuring, assessing and monitoring different aspects of EGOV have also been proposed by researchers and can be found in the literature. While these may be very useful tools, their value may not have been fully exploited, either by researchers or, particularly, by practitioners, for two main reasons. First, the many existing tools are scattered across a large body of sources and are therefore difficult to find. To the best of our knowledge, there is no catalog or repository of EGOV evaluation tools that provides researchers and practitioners with a holistic view of the overall existing set of tools. Second, there is also no systematized conceptual framework to support the analysis and selection of an adequate tool for specific assessment situations.

This work aims to contribute to mitigating these problems by answering a very preliminary question – “who is measuring what, and how, in the EGOV domain?” – and by providing a literature overview that summarizes the state of the art of EGOV measurement and evaluation for researchers.

By answering this question, this paper achieves two main objectives: (i) to characterize the research that has been conducted in the EGOV measurement, assessment and monitoring area, thus creating a rich and sound EGOV assessment knowledge base, fundamental for the creation of a future catalog, and (ii) to present a conceptual framework to support the analysis of a tool’s adequacy and the process of selecting a suitable tool.

The remainder of the paper is structured as follows: Sect. 2 highlights the importance and complexity of EGOV evaluation. The study design is presented in Sect. 3, and the results of the study are reflected upon in Sect. 4. A first version of the conceptual framework for selecting an appropriate EGOV evaluation instrument is presented in Sect. 5, and conclusions and future work are included in Sect. 6.

2 The Importance and Complexity of EGOV Evaluation

EGOV evaluation is a relevant topic not only for government agencies but also for other stakeholders with an interest in the area. According to [12], different groups take an interest in measuring, assessing and monitoring EGOV: large international organizations that deal with e-government measurement globally; global independent organizations; multinational consulting companies; academic institutions and their non-profit research centres; national institutions or national associations for ICT in the public sector of a single country or region; and research groups.

Many of these provide international benchmarking studies, while others focus on a single country, region or municipality. In any case, evaluation results are intended to justify the investment made [13], justify a project’s worthiness, and provide some kind of learning [14]. Specifically for EGOV assessment, evaluation is vital to understand the level of EGOV development, review objectives, strategies and action plans, discover strengths and weaknesses, define new guidelines, search for best practices and compare organisations at the various levels [15, 16]. Meeting national strategy goals is vital for governments; effective, efficient and quality evaluation activities are therefore required [17] to avoid a loss of control that would lead to a loss of resources and failure to accomplish such goals [18].

Measuring and assessing EGOV is not, however, a simple task. The lack of unified, holistic perspectives on EGOV evaluation leaves practitioners with the strategic decision of what they should focus on and how adequate measurements can be formulated [19].

ICT is constantly evolving, and continuous adaptation to its growth implies a continuous evolution of the way e-government is assessed. Moreover, e-government projects do not usually have immediate results and take a long time to permeate [20], making it difficult to demonstrate value and, consequently, hampering EGOV development [16]. These issues lead to a dispersed set of evaluation goals that, as [10] states, results in ‘an eclectic mixture of exercises undertaken in different ways for different purposes at different times by different people and with different audiences in mind’. This is a clear picture of the state of the art when it comes to measuring and monitoring EGOV.

Evaluation is a “job for the brave” because the hard work it requires and its complexity can be exhausting [18]. In the EGOV context especially, many challenges arise [1]. A robust evaluation would enable the comparison of benefits, which can be distinguished as direct and indirect [14]. Direct benefits are measurable or quantifiable, while indirect benefits are more qualitative and not so easily measured, covering organizational, social, political or cultural aspects [13]. Benefits vary according to an initiative’s goals and objectives, and their measurement also varies according to the stakeholder perspective. This often results in overly simplistic evaluations focused on what is easy to measure [21], such as the front office, the visible side of e-government, ignoring the back-office reorganization that could improve service efficiency [22].

Two other essential complexities of EGOV evaluation are pointed out by [13]. The first concerns the multiple perspectives involved. Politicians, policy-makers, e-government development leaders and citizens are some of the perspectives that can be included. Answering the needs of every perspective can be a daunting task, not only because they are so different but also because they often conflict [16]. The second aspect is the social and technical context of use. The public sector cannot rely solely on economic values, for it has a clear responsibility towards citizens and society to provide equality, openness and transparency, among other values that make e-government evaluation much more a social science [16].

Other complexities pointed out in the literature include the lack, in some initiatives, of a clearly defined purpose about what should be compared and measured, which makes them difficult to adapt to specific national or regional contexts and priorities [22], and the lack of a comprehensive and holistic assessment [21, 22], although such an assessment would entail substantial funding, time and other resources that are not usually available in public administration [1, 23].

3 Study Design

This work results from a meta-analysis of the published literature on EGOV evaluation. It took the form of an extensive literature review intended to understand what the academic community is doing on this subject.

Articles were selected from Scopus in December 2016. Scopus was the only bibliographic database used because it is one of the most widely recognized. Articles were selected by combining the terms ‘EGOV’, ‘egovernment’ or ‘e-government’ with ‘evaluation’, ‘measurement’, ‘assessment’ and ‘monitoring’, with no date limitation. A total of 2428 references were identified. After retrieving the complete information on each article, the reference filtering process described in Table 1 was performed to reach the final number of articles to be analysed.
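As an illustration, the sketch below shows how such a query could be composed programmatically. It assumes Scopus’ TITLE-ABS-KEY field-code syntax; the paper does not report the exact query string used, so this is a reconstruction under stated assumptions, not the authors’ actual query.

```python
from itertools import product

# Term lists taken from the search described above.
EGOV_TERMS = ["EGOV", "egovernment", "e-government"]
EVAL_TERMS = ["evaluation", "measurement", "assessment", "monitoring"]

def build_query() -> str:
    """Combine every EGOV term with every evaluation term, OR-ed together."""
    pairs = [f'("{g}" AND "{e}")' for g, e in product(EGOV_TERMS, EVAL_TERMS)]
    return "TITLE-ABS-KEY(" + " OR ".join(pairs) + ")"

print(build_query())  # 12 term pairs; no date restriction is applied
```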

Table 1. Process of reference filtering

The remaining 454 articles were analysed and categorized using a table created for the literature review process, encompassing the categories described in Table 2.

Table 2. Categories for literature analysis

In some categories, the classifications NS (Not Specified) or NA (Not Applicable) were used for articles where the authors did not specify that category or where the category did not apply, respectively.

The set of categories defined in Table 2 was applied to each of the 454 papers, providing a rich basis for understanding the current state of the art in EGOV evaluation. The main findings from this analysis are presented in the following section.
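As a purely illustrative sketch, each paper’s coding could be represented as a record like the one below. Table 2 is not reproduced here, so the field names are our own assumptions, inferred from the analysis presented in Sect. 4.

```python
from dataclasses import dataclass, field

NS = "NS"  # Not Specified by the authors
NA = "NA"  # Not Applicable to the article

@dataclass
class PaperRecord:
    """Hypothetical coding record for one of the 454 analysed papers."""
    year: int
    affiliation: str           # e.g. "Academic", "Government Agency", ...
    object_of_evaluation: str  # e.g. "Website Evaluation", or NS
    country: str               # or NS when no country is reported
    government_level: str      # "local", "regional", "central", "all", or NA
    article_type: list[str] = field(default_factory=list)  # several may apply
    perspective: str = NA      # "end user", "service provider", or "both"

# Example coding of a hypothetical paper:
record = PaperRecord(
    year=2011,
    affiliation="Academic",
    object_of_evaluation="Website Evaluation",
    country="China",
    government_level="central",
    article_type=["Instrument/Framework Design",
                  "Instrument/Framework Application"],
    perspective="end user",
)
```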

4 EGOV Evaluation Literature

This section provides a characterization of the research that has been conducted in the EGOV evaluation area.

EGOV evaluation literature starts in the early 2000s, although e-government research begins in the mid 1990s [24]. A first look at the publication years (Fig. 1) reveals a growing trend in the area from 2002 until 2011, when the highest number of publications occurs. From 2011 onward, the number of publications has decreased but has been stabilizing. This evolution follows the typical path of most research themes, which usually start with a low number of publications that grows slowly, then rises sharply until reaching a peak of researcher interest, after which it declines slightly and tends to stabilize.

Fig. 1. Papers distribution by publication year

4.1 Who Is Conducting EGOV Evaluation

The research presented is mostly produced by the academic community, which accounts for over 80% of the selection (Table 3). This is, however, not surprising, since the sources used for this analysis were retrieved from Scopus.

Table 3. Authors affiliation

Some research results from partnerships between private companies, government agencies, research institutes, or national, regional and international organizations. The most common association is between Academic Institutions and Government Agencies, followed by Academic Institutions with Private Companies or Research Institutes.

Most of the selected authors contributed a single research publication and few have two or more, leading us to conclude that not many authors pursue a continued research line within the EGOV evaluation area.

Academic authors who mentioned their full affiliation come essentially from Computer Science, Information Science, and Information Systems departments or schools. Other affiliation areas, though less represented in our sample, are Management, Business Administration, Business Information, Economic Sciences, and Political Sciences.

4.2 What Is Being Evaluated

The object of evaluation – i.e. the focus of the evaluation or measurement – is not always well described by researchers, and sometimes distinct descriptions appear for the same object of evaluation. Even so, if, for example, several authors mentioned they were measuring ‘quality’ but their definitions differed, we assumed ‘quality’ to be the object of evaluation for all of them.

Table 4 presents the list of the different objects of evaluation found in the 454 papers analysed. The objects of evaluation listed are those found in at least two papers. The category Other includes the various objects of evaluation mentioned only once in the set of papers analysed.

As [27] states, measuring EGOV has traditionally focused on measuring and benchmarking websites and their use. Looking at Table 4, we see that there is indeed a large gap between Website Evaluation and the other foci of study.

Table 4. Object of evaluation

Looking particularly at the studies focused on the Website Evaluation object (Fig. 2), the Quality of websites or portals was the specific aspect most measured and assessed by researchers, followed by website Accessibility. Usability, Performance and User Satisfaction are also specific aspects of Website Evaluation that have been assessed. The General category includes articles that reported a Website Evaluation study without addressing any specific aspect. The Other category includes all the specific aspects of Website Evaluation that were the focus of assessment in only one study.

Fig. 2. Specific foci of Website Evaluation

A curious aspect of Website Evaluation is the dip in publications it suffers in 2011, precisely when the selected literature as a whole reaches its publication peak.

Regarding the geographical distribution of the assessment studies, a distribution by continent is presented in Fig. 3; over 100 different countries were identified in the studies. Asia and Europe are the two best-represented continents, with the Americas following just behind. China is clearly the country where most of the selected research takes place, with 54 publications, followed by the USA with 24 and some European countries with around 10 publications each.

Fig. 3. Selected literature distribution by continent where research took place

Twenty-nine authors found it relevant to distinguish between developed and developing countries throughout their articles, based on the premise that EGOV initiatives differ between these types of countries in strategy development, implementation, and utilization [13]. This work maintains that distinction, based on the United Nations classification of developed and developing economies presented in the World Economic Situation and Prospects 2017 report. Our view is that, if such differences exist, the evaluation of those initiatives should also encompass the underlying social, cultural, technical and political differences, and it is therefore important to make this distinction. Through the literature review we found 175 studies located in developing countries and 134 in developed countries.

Looking at the government level at which measurement instruments are put into practice, the central and local levels are the most recurrent (Fig. 4), although the literature suggests that not enough attention is given to the local level, where citizens are in closer contact with government and feel the effects of e-government initiatives more directly [25].

Fig. 4. Level of government distribution for research application

Some authors mention that their evaluation method, tool or framework can be used at any government level (All).

Local-level evaluation has trended recently (2015 and 2016), as it did in some of the earlier years (2006 and 2007), while central-level evaluation has been an almost continuous focus of evaluation research (Fig. 5).

Fig. 5. Level of government described in the publications, distribution by year

Different levels of government have different scopes, objectives and constraints [26]; it is therefore relevant to find and analyse the differences between instruments applicable to each level, considering what is being assessed at each one.

4.3 How Is Evaluation Being Conducted

Articles were classified according to the type of contribution they provide to the EGOV evaluation area. Four types were considered: Theory, if the work presented was theoretical; Instrument/Framework Design, if the article described the design or development of an instrument, framework, model or tool; Instrument/Framework Application, if it described the application of an instrument, framework, model or tool to a practical case; and Other, if none of the above applied. In many cases (Table 5) both Instrument/Framework Design and Application were selected.

Table 5. Type of article

In the cases where an instrument/framework was designed or applied, the perspective of the study was analysed. Two main perspectives were considered: end user, when the evaluation is based on information provided by end users, gathered through interviews, surveys, website consultation, among others (demand-side perspective); and service provider, when the evaluation is based on information provided by government agencies or entities (supply-side perspective). There are situations where the participants in the evaluation are government employees but are considered the end users of the system; in those cases, the perspective considered was “end user”.
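A minimal sketch of these two coding axes follows. The type and perspective names come from the text above, but the encoding itself – for example, modelling article types as combinable flags because Design and Application were often both selected – is our own illustrative choice.

```python
from enum import Enum, Flag, auto

class ArticleType(Flag):
    """Contribution types; a Flag, since one paper may combine several."""
    THEORY = auto()
    INSTRUMENT_DESIGN = auto()
    INSTRUMENT_APPLICATION = auto()
    OTHER = auto()

class Perspective(Enum):
    END_USER = "demand-side"          # information gathered from end users
    SERVICE_PROVIDER = "supply-side"  # information from government entities
    BOTH = "demand- and supply-side"

# A paper that designs an instrument and applies it from the end-user side:
coding = (ArticleType.INSTRUMENT_DESIGN | ArticleType.INSTRUMENT_APPLICATION,
          Perspective.END_USER)
```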

Figure 6 shows the perspective distribution, with end user being the perspective most used to perform e-government evaluations. Some works combine both perspectives.

Fig. 6. Number of publications by perspective used in the literature

5 Conceptual Framework for EGOV Evaluation Instrument Characterization

During the literature review, a set of relevant concepts related to EGOV evaluation instruments was identified. These concepts are aligned with the categories presented in Table 2 and represent useful dimensions to consider when analysing, comparing and selecting evaluation instruments. They are organized in the conceptual framework depicted in Fig. 7.

Fig. 7. Conceptual framework for EGOV evaluation instrument characterization

As shown, there are three main dimensions for characterizing an instrument: object of analysis, perspective and context.

Each evaluation instrument has (and must focus on) a specific object of analysis that constitutes the focus of the evaluation. This object of analysis may be studied in different contexts.

It is thus relevant to characterize and define the context of an instrument in terms of (i) the level of government considered (local, regional, or central (national, federal)); (ii) the level of development of the country where the evaluation is conducted (developed or developing); (iii) the stakeholders for whom the evaluation is relevant; and (iv) the moment when the evaluation occurs. It is also fundamental to characterize and define the perspective the evaluation takes (end user or service provider).
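A minimal sketch of this characterization, assuming only the dimension names given above for Fig. 7, could look as follows; the field types and example values are illustrative, not prescribed by the framework.

```python
from dataclasses import dataclass

@dataclass
class EvaluationContext:
    """The context dimension and its four sub-dimensions."""
    government_level: str    # local, regional, or central (national/federal)
    development_level: str   # developed or developing country
    stakeholders: list[str]  # to whom the evaluation is relevant
    moment: str              # when the evaluation occurs

@dataclass
class EvaluationInstrument:
    """The three main dimensions characterizing an instrument."""
    object_of_analysis: str  # the focus of the evaluation
    perspective: str         # end user or service provider
    context: EvaluationContext

# Hypothetical characterization of one instrument:
instrument = EvaluationInstrument(
    object_of_analysis="Website Evaluation",
    perspective="end user",
    context=EvaluationContext(
        government_level="central",
        development_level="developing",
        stakeholders=["citizens", "policy-makers"],
        moment="post-implementation",  # example value, not from the paper
    ),
)
```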

6 Conclusion and Future Work

This work intended to achieve two main goals: (i) to characterize the research that has been conducted in the EGOV measurement, assessment and monitoring area, and (ii) to present a conceptual framework to support the analysis and selection of the most adequate tool for a specific evaluation situation.

The relevance and complexity of EGOV measurement, assessment and monitoring for governments and practitioners is patent in the available literature and was briefly presented in this article. The review of 454 articles retrieved from Scopus gave us a general picture of who has been studying the topic, what the objects of evaluation are, at what level of government tools and instruments are applied, and how evaluation is being conducted in terms of publication types and the perspectives chosen. Our analysis provided enough basis to argue that measurement tools and instruments are indeed numerous and dispersed, so a catalogue of such initiatives could be helpful for researchers and practitioners alike. This led to our second goal of presenting a conceptual framework to guide and support the choice of the most adequate evaluation tool or instrument.

A first version of the conceptual framework was presented in Sect. 5, highlighting the main constructs that characterize an EGOV evaluation instrument: context, perspective and object of analysis. The framework is oriented towards characterizing EGOV evaluation instruments and their adequacy to a specific evaluation situation. It must still be refined based on the analysis of literature from other databases, such as Web of Science or Google Scholar, that can complement the work done so far. The new version will also undergo a validation process with experts through a focus group, and enhancements will be performed continuously as we pursue research in the area. Future work also includes a more detailed analysis of each object of evaluation regarding the indicators and metrics used for its measurement. This will contribute largely to cataloguing the different tools and instruments available and to developing new ones that can complement the existing corpus.