Background

African countries experience high disease burdens compounded by resource shortages. The resulting competition for resources means that each investment decision carries an opportunity cost that governments must weigh carefully to ensure optimal use of available funds [1]. eHealth initiatives offer the potential to improve health system performance, and there is growing pressure to show positive impacts and health system benefits for each investment [2]. African governments are, however, handicapped by a lack of evidence on which eHealth impact appraisal methodologies are applicable to support their decision-making. This is further complicated by a diversity of views on what eHealth means [3], the role it should play [4], and benchmarks for good practice [5]. Without resolving these issues, the likelihood of optimal investment decisions being made diminishes.

A number of terms have been used to describe the use of Information and Communication Technology (ICT) in the health sector, such as eHealth, digital health and health ICT, as well as sub-disciplines such as telemedicine and mobile health [6]. This paper uses eHealth and its World Health Organization (WHO) definition: “eHealth is the use of ICTs for health” [7]. While digital health frequently appears as an alternative to eHealth, there are recognised differences in its meaning [3].

Many eHealth initiatives fail [8, 9]. Therefore, eHealth should not attract public investment until its probable impacts have been appraised. The main reason for estimating impact is to ensure that the benefits realised from an investment justify the costs over time for key stakeholders, and rationalise the opportunity cost. This requires a value judgement tailored to local priorities such as access to services, the Sustainable Development Goals (SDGs) and Universal Health Coverage (UHC) [10], and a way to balance the competing dimensions of value and affordability. An eHealth Impact model is a generic appraisal approach to support this type of decision-making [11]. For African countries, a methodology is needed that helps decision-makers conduct prospective appraisals of proposed eHealth initiatives despite a scarcity of specialised economics, eHealth and other expertise [12].

There are numerous approaches to the assessment of economic impact in the health sector [13,14,15]. Some, such as the Health Impact Assessment approach [16, 17], extend beyond economic aspects to deal with broader societal impact, often referred to as socio-economic impact [18]. Few approaches are specific to eHealth, with notable exceptions such as the European eHealth IMPACT study [19], the Digital Health Impact Framework (DHIF) developed by the Asian Development Bank [20] and based on the Five Case Model, and a stage-based approach to integrating economic and financial evaluations specifically for mobile health initiatives [21].

The Five Case Model is described in The Green Book [22] and a User Manual [23]. Each case has a specific purpose, addressing the distinct questions summarised in Table 1. The Five Case Model is recommended by the United Kingdom [22, 24] and New Zealand [25] as a tool for promoting accountability for decisions about a variety of public spending initiatives, including eHealth. It provides an appraisal of the estimated value of an initiative’s options within the complex health system in which it operates. This strengthens the justification for investing in initiatives that perform well across the five cases, and justifies further research into the model’s potential utility.

Table 1 Overview of the cases constituting the Five Case Model

The Five Case Model is a decision-making tool designed with the flexibility African countries need, such as balancing value against affordability constraints, and allowing progress despite limited human capacity to conduct complex eHealth economic appraisals. Nevertheless, the applicability of the five cases to African eHealth investments has not been assessed. Showing this applicability requires metrics that are aligned to the five cases and relevant to African countries’ eHealth investment decisions. This study aims to identify appropriate metrics and data sources in order to judge the applicability of the Five Case Model in African eHealth settings, as an important step preceding field testing of the model in Africa.

Methods

To achieve the aim, readily accessible online data sources were explored to identify candidate metrics. The primary selection criteria were that each metric should provide information relevant to an eHealth issue aligned to one or more of the primary questions of the five cases, and be accessible online. Candidate metrics were then assessed for data availability from recognised sources such as the WHO, the International Telecommunication Union (ITU) and the World Bank. Finally, the number of African countries for which the data were available was assessed. The initial intention was to select candidate metrics for which data were available for more than 80% of countries; however, the threshold was subsequently revised to 60% due to the sparsity of data. Metrics for which fewer than 60% of African countries had available data were excluded. The remaining metrics became the component metrics of an eHealth Investment Readiness Assessment Tool.
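
For illustration, this screening step can be expressed as a simple filter over a table of candidate metric data. The following sketch assumes a pandas DataFrame loaded from a hypothetical file (candidate_metrics.csv, one row per country and one column per candidate metric); the file and column layout are illustrative assumptions, not artefacts of the study.

```python
import pandas as pd

# Hypothetical input: one row per African country, one column per candidate metric,
# with missing values recorded as NaN.
candidates = pd.read_csv("candidate_metrics.csv", index_col="country")

# Retain a metric only if data are available for at least 60% of countries.
AVAILABILITY_THRESHOLD = 0.60
availability = candidates.notna().mean()      # share of countries with data, per metric
selected = candidates.loc[:, availability >= AVAILABILITY_THRESHOLD]

print(availability.sort_values(ascending=False).round(2))  # availability by candidate metric
print(list(selected.columns))                 # metrics retained for the tool
```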

Data for the component metrics were collected for the fifty-four countries of the United Nations African region [26]. Where data were missing for a metric, values were set at zero to avoid recording progress that was not substantiated. However, where more than a third of a metric’s data were missing, the mean of the available values was used instead, since widespread missing data might reflect challenges with the metric’s data collection process rather than limited eHealth development. Thereafter, each metric’s values were normalised as a proportion of the highest country score, so that the best-performing country scored 1 for that metric. An unweighted average of all component metrics provided a summary score that was used to rank overall country eHealth investment readiness. Scores were categorised as good (> 0.70), moderate (0.50–0.70) or poor (< 0.50).
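
A minimal sketch of this scoring procedure, assuming the selected metric data are held in a pandas DataFrame with one row per country, is shown below. It illustrates the rules described above (zero or mean substitution for missing values, normalisation against the best-performing country, an unweighted average, and the good/moderate/poor categories) rather than reproducing the code used in the study.

```python
import pandas as pd

def score_countries(metrics: pd.DataFrame) -> pd.DataFrame:
    """Score countries on the selected metrics; one row per country, NaN = missing."""
    scored = metrics.copy()
    for col in scored.columns:
        if scored[col].isna().mean() > 1 / 3:
            # More than a third of values missing: substitute the mean of available values.
            scored[col] = scored[col].fillna(scored[col].mean())
        else:
            # Otherwise treat missing data as zero (no substantiated progress).
            scored[col] = scored[col].fillna(0.0)
        # Normalise so that the best-performing country scores 1 for the metric.
        scored[col] = scored[col] / scored[col].max()

    # Unweighted average of the component metrics gives the summary score.
    scored["summary"] = scored.mean(axis=1)

    def categorise(score: float) -> str:
        if score > 0.70:
            return "good"
        if score >= 0.50:
            return "moderate"
        return "poor"

    scored["category"] = scored["summary"].apply(categorise)
    return scored.sort_values("summary", ascending=False)
```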

Metric scores were analysed using spider charts, a graphical method of displaying multivariate data in which three or more quantitative variables are represented on axes starting from the same point. These visualisations were used to create multi-country profiles for five groups of countries: the five countries with the highest summary scores, and the countries with the highest, lowest and median scores within each of four regional economic communities of the African Union [27], namely the Arab Maghreb Union (AMU), the East African Community (EAC), the Economic Community of West African States (ECOWAS) and the Southern African Development Community (SADC).
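
Spider charts of this kind can be drawn with standard plotting libraries. The sketch below uses matplotlib’s polar axes; the country names, metric labels and scores in the example call are illustrative placeholders, not the study’s data.

```python
import numpy as np
import matplotlib.pyplot as plt

def spider_chart(scores: dict, metric_names: list) -> None:
    """Plot one line per country on a radar (spider) chart of normalised 0-1 metric scores."""
    n = len(metric_names)
    angles = np.linspace(0, 2 * np.pi, n, endpoint=False).tolist()
    angles += angles[:1]               # repeat the first angle to close the polygon

    fig, ax = plt.subplots(subplot_kw={"polar": True})
    for country, values in scores.items():
        vals = list(values)
        vals += vals[:1]               # repeat the first value to close the line
        ax.plot(angles, vals, label=country)
        ax.fill(angles, vals, alpha=0.1)

    ax.set_xticks(angles[:-1])
    ax.set_xticklabels(metric_names)
    ax.set_ylim(0, 1)
    ax.legend(loc="upper right", bbox_to_anchor=(1.35, 1.1))
    plt.show()

# Illustrative values only, not the study's data:
spider_chart(
    {"Country A": [0.9, 0.7, 0.8, 0.5, 0.6, 0.4, 0.7, 0.8, 0.9],
     "Country B": [0.3, 0.5, 0.2, 0.6, 0.4, 0.5, 0.3, 0.2, 0.4]},
    ["Strategy", "GOE survey", "CHE % GDP", "CHE per capita", "GDP growth",
     "IDI", "Internet", "HCI", "Governance"],
)
```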

Results

Candidate metrics were identified based on the authors’ experience of working in eHealth in African countries over many decades. This yielded the following nineteen candidate metrics, arranged in four categories:

Category 1: eHealth development indicators

  1. Global digital health index [28]
  2. Global Observatory for eHealth (GOE) survey score [29]
  3. Relative ranking among countries that have achieved 100% birth registration and 80% death registration
  4. Relative ranking among countries that have conducted at least one population and housing census in the last 10 years
  5. Service availability and readiness assessment [30]
  6. Status of national eHealth strategy

Category 2: Financial and economic indicators

  7. Current Health Expenditure (CHE) as a percentage of Gross Domestic Product (GDP) [31]
  8. CHE per capita [32]
  9. Government debt per capita
  10. Proportion of total government spending on essential services
  11. Rate of growth of real GDP in developing economies [33, 34]

Category 3: ICT development indicators

  12. ICT Development Index (IDI) score [35, 36]
  13. International Health Regulations (IHR) capacity and health emergency preparedness [37]
  14. International Organization for Standardization (ISO) ratings
  15. Internet penetration score [38]
  16. ITU percentage of individuals using the Internet [39]

Category 4: Workforce and governance indicators

  17. Human Capital Index (HCI) score [40, 41]
  18. Ibrahim Index of Governance in Africa score [42]
  19. Rating agency risk assessment scores

Nine of the candidate metrics had sufficiently complete and readily accessible data. Table 2 lists the nine selected metrics and indicates which of the five cases each addresses and the percentage of countries for which data were found. For each metric, the most recent available data were used. The most recent GOE survey score was for 2015 and was the only source found for data about countries’ eHealth environments. In contrast, the most recent data for the status of eHealth Strategy were for 2018. The other seven data sources were for years from 2015 to 2017.

Table 2 Information about the selected metrics and the cases each metric is applicable to

Ten candidate metrics were rejected for the following reasons:

  • Data sets incomplete, in the case of the Global digital health index

  • Data sets not readily accessible, in the case of government debt per capita and proportion of total government spending on essential services

  • No easily interpretable summary score, in the case of the service availability and readiness assessment, rating agency assessments, IHR capacity and health emergency preparedness, and ISO membership

  • Similar, more appropriate metrics identified, as in the case of relative ranking among countries that have conducted at least one population and housing census in the last 10 years, and the relative ranking among countries that have achieved 100% birth registration and 80% death registration.

Descriptions of each of the selected metrics and the approaches to missing data are provided in Table 3.

Table 3 Descriptions of selected metrics and approach to missing data

Figure 1 shows the country scores and relative rankings of the summary metric. Detailed scores are provided in Tables 4 and 5.

Fig. 1 Ranking of African country eHealth investment readiness using summary metric scores

Table 4 Metric scores
Table 5 Matrix of correlations between individual metrics

Mauritius achieved the highest summary metric score and was in the top five for five other metrics: CHE per capita (1st), IDI (1st), Ibrahim Governance Index (1st), HCI (2nd) and Internet penetration (4th). For the other four metrics Mauritius did not score in the top 20 countries.

Seven metrics showed greater than 0.50 correlation with the summary metric, namely the Ibrahim Governance Index (0.85), Internet penetration (0.78), ICT Development Index (IDI) (0.73), eHealth Strategy (0.61), Human Capital Index (HCI) (0.60), Global Observatory for eHealth (GOE) survey (0.53) and Current Health Expenditure (CHE) per capita (0.52). Correlation between component metrics was generally low, except between IDI and Internet penetration (0.84). Other correlations greater than 0.50 were between the Ibrahim Governance index and four other metrics: IDI (0.66), Internet penetration (0.64), CHE per capita (0.59), and HCI (0.52), and between CHE per capita and the two ICT metrics, Internet penetration (0.65) and IDI (0.65).
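
For reference, correlations of this kind can be reproduced directly from the normalised scores. The sketch below assumes Pearson correlations and a hypothetical CSV of normalised component and summary metric scores (the file and column names are illustrative).

```python
import pandas as pd

# Hypothetical input: normalised component and summary metric scores, one row per country.
scores = pd.read_csv("normalised_metric_scores.csv", index_col="country")

corr = scores.corr(method="pearson")   # pairwise correlations between metrics (cf. Table 5)
print(corr.round(2))

# Correlation of each component metric with the summary metric, strongest first.
print(corr["summary"].drop("summary").sort_values(ascending=False).round(2))
```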

Results for five country groupings are provided in Tables 6, 7, 8, 9 and 10. Table 6 shows the five countries that scored highest on the summary metric. Tables 7, 8, 9 and 10 each show the scores for three countries (the highest scoring, the lowest scoring and the median) from each of the four economic regions. To aid comparison, the corresponding spider charts are shown in Figs. 2, 3, 4, 5 and 6, where the axes represent the metrics, arranged radially (1–10), with each country’s scores plotted and linked.

Table 6 Metric scores for the countries with the top five summary metric scores: Mauritius, South Africa, Botswana, Seychelles and Morocco
Table 7 Metric scores for selected AMU countries: Morocco, Tunisia and Libya
Table 8 Metric scores for selected EAC countries: Kenya, Tanzania and South Sudan
Table 9 Metric scores for selected ECOWAS countries: Senegal, Sierra Leone and Guinea-Bissau
Table 10 Metric scores for selected SADC countries: Mauritius, Namibia and Angola
Fig. 2 Spider chart of countries with highest five summary metric scores
Fig. 3 Spider chart comparison of three countries from the AMU: Morocco, Tunisia and Libya
Fig. 4 Spider chart comparison of three EAC countries: Kenya, Tanzania and South Sudan
Fig. 5 Spider chart comparison of three ECOWAS countries: Senegal, Sierra Leone and Guinea-Bissau
Fig. 6 Spider chart comparison of three SADC countries: Mauritius, Namibia and Angola

The highest scoring group comprised Mauritius (0.74), South Africa (0.68), Botswana (0.65), Seychelles (0.64) and Morocco (0.62). All five countries scored above 0.70 on the Ibrahim Governance Index (> 0.73) and the IDI (> 0.77). Internet penetration scores were good (> 0.84) for all except Botswana (0.62). Despite these high scores, all five countries had poor scores for CHE as a percentage of GDP and moderate or poor scores for growth of real GDP. The results are shown in Table 6 and Fig. 2.

Three of the five AMU member countries were compared. Morocco scored highest (0.62), Libya lowest (0.36) and Tunisia was the median (0.52). Libya achieved the highest score for growth of real GDP of all 54 countries, while Morocco and Tunisia achieved good scores for IDI, Internet penetration, HCI and governance. All other scores were moderate or poor. The results are in Table 7 and Fig. 3.

Three of the six EAC member countries were compared. Kenya scored highest (0.58), South Sudan lowest (0.16) and Tanzania was the median (0.54). Kenya and Tanzania scored above 0.70 for strategy, growth of real GDP and governance. Kenya also scored above 0.70 for Internet penetration and HCI. All three countries’ scores were poor for CHE per capita and CHE as a percentage of GDP. South Sudan’s scores were poor for all metrics. The results are in Table 8 and Fig. 4.

Three of the fifteen ECOWAS member states were compared. Senegal scored highest (0.59), Guinea-Bissau lowest (0.28) and Sierra Leone was the median (0.46). All three countries achieved good scores for growth of real GDP and poor scores for CHE per capita and IDI. Senegal’s scores were good for strategy, the GOE survey and governance, and Sierra Leone scored the highest of all 54 countries for CHE as a percentage of GDP. All other scores were moderate or poor. The results are in Table 9 and Fig. 5.

Finally, three member states of SADC were compared. Mauritius scored highest (0.74), Angola lowest (0.31) and Namibia was the median (0.49). As discussed earlier, Mauritius was the only country to score above 0.70 for the summary metric. Mauritius surpassed Namibia and Angola on all metrics except the GOE survey, where Mauritius, Namibia and Angola scored the same (0.54), and CHE as a percentage of GDP, where Namibia scored higher (0.49) than Mauritius (0.30). Namibia and Angola had poor scores for eHealth strategy, CHE as a percentage of GDP, growth of real GDP and Internet penetration. The results are in Table 10 and Fig. 6.

Discussion

Analysis of a country’s relative ranking on each component metric, and the summary metric, can be used to identify aspects where further development would contribute to eHealth investment strengthening. The summary metric provides an overall indication of a country’s eHealth investment readiness, relative to other countries. The inconsistency of data source years is a limitation, since a country’s economic condition, ICT development and eHealth development may vary from year to year. Future publication of an updated tool using metrics from a single, recent year – should they become available – would be of value.

Comparison of the component metric profiles of regional country groupings can help those countries identify good practices to be shared with neighbouring countries. Individual metrics can hide nuances, so exploring all metrics for each country under evaluation is encouraged. Similarly, comparing countries’ profiles provides additional insights, illustrated by the varying patterns seen on the spider charts. A score below 1.00 for a metric shows underperformance relative to the best-performing peer country and represents an opportunity for improvement. Comparison of scoring patterns can reveal individual and/or regional performance in each chart quadrant: the bottom and lower right quadrant for financial and economic indicators, the lower left quadrant for the two ICT development indicators, the upper left quadrant for human capital and governance, and the upper right quadrant for development of the eHealth environment (Fig. 7).

Fig. 7 Issues represented by each of the spider chart quadrants

Using the study findings, each African country can review its metric scores, plot its spider chart to show its performance, and use the results to establish an eHealth investment strengthening plan. For example, despite having the highest summary score, Mauritius’ results identified four areas that, if strengthened, would improve the likelihood of successful eHealth investment: updating its eHealth strategy, addressing aspects of the GOE survey that scored poorly, growing the Mauritian economy, and lobbying for a greater allocation of the fiscus to health.

Countries with an eHealth Strategy, relatively high GDP and health spend per capita, and high governance scores, such as Mauritius, South Africa and Botswana, can apply the Five Case Model to improve eHealth investment decisions. Countries with an eHealth Strategy and high governance score, but low CHE scores, such as Kenya, Morocco and Senegal, should start by focusing on the economics and finance aspects of their eHealth programmes. Countries with an eHealth Strategy and low governance score, such as Nigeria, should focus on governance strengthening, as a foundational requirement for eHealth investment.
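
This guidance can be read as a simple triage rule. One possible, non-prescriptive encoding is sketched below; using the study’s good (0.70) and poor (0.50) cut-offs as decision thresholds is an assumption for illustration, not a prescription from the model.

```python
def ehealth_investment_guidance(has_strategy: bool,
                                governance: float,
                                che_per_capita: float,
                                good: float = 0.70,
                                poor: float = 0.50) -> str:
    """Illustrative triage based on normalised (0-1) metric scores; thresholds are assumptions."""
    if has_strategy and governance > good and che_per_capita > good:
        return "Apply the Five Case Model to appraise candidate eHealth investments."
    if has_strategy and governance > good and che_per_capita < poor:
        return "Focus first on the economic and financial aspects of the eHealth programme."
    if has_strategy and governance < poor:
        return "Prioritise governance strengthening as a foundation for eHealth investment."
    return "Review the full component metric profile to identify the binding constraint."

# Example (illustrative scores, not the study's data):
print(ehealth_investment_guidance(has_strategy=True, governance=0.85, che_per_capita=0.40))
```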

Regional spider charts help to illustrate this analysis. The data for the AMU (Fig. 3) suggest that, while Morocco and Tunisia show similar patterns, Tunisia remains hampered by the lack of an eHealth Strategy, which retards its eHealth development. Changing this will remain challenging while the growth of real GDP and CHE metrics remain low, represented on the spider chart as low scores in the lower right quadrant. Despite Libya’s generally lower than average performance, it scores well on growth of real GDP (2nd), far higher than Morocco (22nd) and Tunisia (39th). This, combined with a moderate IDI score (0.70), sets the stage for Libya to craft an eHealth Strategy to guide the beginning of its eHealth investments.

In the EAC (Fig. 4), Kenya and Tanzania have similar summary metric scores and high scores for strategy, growth of real GDP and governance, yet there are important differences, such as Kenya’s higher score for Internet penetration. If the region were to identify country leads for key elements, Kenya could lead on connectivity. South Sudan’s scores are poor on all metrics, though with a slight shift to the left caused by its HCI score (0.45), which could indicate potential warranting further development. A dominant feature of the EAC spider chart is the poor scores on the two CHE metrics, represented by the “missing” bottom right quadrant, highlighting the need for growth to include greater fiscal allocations to health.

In ECOWAS (Fig. 5), all three countries show good growth of real GDP, though CHE per capita and IDI scores remain poor. Each of the spider chart quadrants shows some activity, which may indicate that a collaborative regional approach would prove fruitful. Sierra Leone achieved the highest score for CHE as a percentage of GDP (1st), though it has an inadequate eHealth Strategy (36th) and a poor GOE survey score (37th). An opportunity could be to develop a new eHealth Strategy, fuelled by CHE priorities. Promising governance rankings (10th) underpin Senegal’s growth of real GDP and a regional eHealth leadership role.

The SADC spider chart (Fig. 6) shows a marked “lean” towards the left, caused by low scores on the two eHealth implementation metrics in the top right quadrant. Namibia’s poor eHealth strategy score may help to explain why, despite promising rankings on governance (5th) and IDI (13th), its GOE survey score remains low (33rd). Angola is constrained by poor scores on strategy, the CHE metrics, IDI and governance. A regional strategy that includes collaboration to share good practices, particularly to improve SADC countries’ eHealth strategies, might prove useful.

Correlation analysis provides information about relationships between the component metrics. Correlations above 0.75 between the summary metric and two component metrics, the Ibrahim Governance Index (0.85) and Internet penetration (0.78), suggest that either of these could serve as a reasonable surrogate indicator of overall eHealth investment readiness. Correlations between component metrics were modest for the Ibrahim Governance Index and IDI (0.66), and for the Ibrahim Governance Index and Internet penetration (0.64). These are consistent with suggestions that ICT development plays a role in promoting good governance [43, 44] and may suggest that governance is a requirement for countries to make productive eHealth investments. The correlation between health expenditure per capita and ICT development (0.65) underlines the importance of addressing affordability issues and may support suggestions that ICT initiatives themselves contribute positively to economic growth [45].

The metrics used to develop the eHealth Investment Readiness Assessment Tool reflect aspects of eHealth investment that are aligned to the five cases. The tool highlights countries’ strengths and weaknesses, thereby providing information for targeted eHealth investment plans. It also helps to identify strengths in neighbouring countries to support collaborative partnerships for regional eHealth investment. This demonstrates the applicability of the Five Case Model to African eHealth investment decisions. The Five Case Model should now be validated through in-country field testing, by designing a tool based on the five cases and testing its utility to help decision makers select an appropriate initiative for investment from among promising candidates.

Conclusion

The absence of recognised eHealth impact appraisal frameworks in regular use in African countries increases the opportunity cost of eHealth and the risk that investments will not produce optimal net benefits. The eHealth Investment Readiness Assessment Tool presented in this study ranked fifty-four African countries and profiled potential approaches for country and regional eHealth investment strengthening plans using metrics relevant to eHealth and aligned to the Five Case Model. The results illustrate the applicability of the Five Case Model for African eHealth investment decisions to serve as a component of an eHealth impact model for Africa. Whilst this study used African countries as the exemplar, the approach is likely to be useful elsewhere, particularly in Low and Middle Income Countries (LMICs), and complements recent developments such as the DHIF. Further scrutiny of the approach and assessment of its eHealth investment strengthening utility is encouraged.