An Evidence-Based Evaluation Tool
Based on the literature and the emerging findings from the fieldwork, we identified ten ‘input areas’ for which evidence is required to assess policy and practice initiatives designed to be age-friendly. These make up the evaluation tool (see Table 2). The broad scope of the tool, and the capacity to gain detailed context-specific insights, are consistent with the “complex, dynamic and multi-dimensional” and “highly context dependent” nature of the age-friendliness of urban environments highlighted by the WHO Centre for Health Development (2015, p. 65).
Table 2 AFC evaluation tool

We propose that in applying the tool, an evidence synthesis table is used to record key findings for the individual input areas (see Table 3, populated with exemplary data for input area 5: Priorities based on needs assessment). Once the input area has been specified (first row), the data sources from which evidence relevant to this area is available are recorded (first column). Where appropriate, references or hyperlinks to the original source documents can be incorporated. Ultimately, the tool is designed to assess a city’s performance in policy and practice (third column). This is preceded by an appraisal of the available evidence (second column) to ensure transparency about the quality of the data on which the performance assessment is based.
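For readers who wish to implement the synthesis table digitally, its layout might be sketched as a simple record per input area; the field names below are illustrative only and do not come from the published tool.

```python
# A minimal sketch of one row of the evidence synthesis table described
# above. All field names and example values are illustrative.
synthesis_row = {
    "input_area": "5: Priorities based on needs assessment",
    "data_sources": [
        "key informant interviews",
        "source documents (with references/hyperlinks)",
    ],
    "evidence_appraisal": "narrative summary of data quality",
    "performance_assessment": "narrative summary of city performance",
}

# The three columns described in the text map to the last three fields:
# data sources, evidence appraisal, performance assessment.
columns = ["data_sources", "evidence_appraisal", "performance_assessment"]
```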
Table 3 Evidence synthesis table for application of AFC evaluation tool (populated with exemplary data from Liverpool’s AFC initiative)

Criteria for the appraisal of evidence depend on the methods and data sources used. The evaluation tool offers flexibility in this respect. Tool users select the methods and sources through which evidence will be obtained, thus ensuring relevance to the local context. This is likely to involve consideration of existing information, for example in the form of routinely collected statistical data. Additionally, users might opt to collect new evidence, subject to the availability of relevant expertise and resources. Any methods and data sources that are feasible and appropriate in a specific context may be used, and it is worth noting here the potential of participatory methods as a way for older people to become involved in shaping urban settings. Based on the methodological choices made, evidence appraisal criteria need to be determined. There is no shortage of relevant scientific literature that tool users can draw on. Examples of literature relevant to the qualitative methods used in the Liverpool case study include the Critical Appraisal Skills Programme (CASP) (2013), Walsh and Downe (2006), Horsburgh (2003), Williamson (2009) and Popay et al. (1998). This is a complex field in which tool users in cities might find collaboration with research partners beneficial.
In the Liverpool study we appraised the available evidence for each input area by considering the amount of data, the degree of detail provided by the data, and the heterogeneity of the informants with respect to their professional positions (in the case of the key informants) and their demographic characteristics (in the case of the older interviewees). We also considered the expertise of the informants in relation to the input area for which they provided information, and we identified further data collection methods that could have added to the evidence. This information was recorded in the evidence synthesis table (see Table 3; second column). In addition to this focus on individual input areas, we appraised all the evidence from data sources that supplied information for multiple input areas: the key informant interviews, the interviews with older people and the focus groups. Based on criteria suggested by CASP (2013), this involved assessing the appropriateness of method, recruitment, sample and data analysis for each data source. The findings contributed to evidence appraisal for all those input areas that drew on data from the respective source(s). They were recorded in a table that will be published separately. The approach used in the Liverpool study illustrates how evidence appraisal might be carried out. Application of the tool elsewhere is likely to involve at least some different data sources and, therefore, will require different appraisal criteria that need to be decided on by tool users.
On the basis of evidence appraisal in Liverpool, we carried out an assessment of city performance in policy and practice. Exemplary data with regard to Liverpool’s overall AFC initiative are presented in Table 3 (third column). They illustrate how findings can be summarised and recorded, with references and hyperlinks to the source documents as appropriate.
We have included space for narrative summaries of both evidence appraisal and performance assessment in the evidence synthesis table. These accounts can be translated into summary scores. Evidence appraisal scores range from 0 to 5. A score of 0 (zero) indicates that the minimum data requirements for an input area have not been met, and/or that the data quality is poor, with the implication that there can be no subsequent assessment of city performance for that input area. Performance assessment scores also range from 0 to 5. Where no assessment of city performance for an input area is possible due to insufficient data availability/quality, this is reflected in the absence of a score.
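The scoring rule described above can be sketched in code: an input area carries an evidence appraisal score from 0 to 5, and a performance score is admissible only when the appraisal score is above 0. This is a minimal illustration with hypothetical names, not part of the published tool.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InputAreaScores:
    """Summary scores for one input area (names are illustrative)."""
    name: str
    appraisal: int              # evidence appraisal score, 0-5
    performance: Optional[int]  # performance score, 0-5, or None if absent

    def __post_init__(self):
        if not 0 <= self.appraisal <= 5:
            raise ValueError("appraisal score must be 0-5")
        if self.appraisal == 0 and self.performance is not None:
            # a score of 0 means minimum data requirements were not met,
            # so no performance assessment is possible for this area
            raise ValueError("no performance score without adequate evidence")
        if self.performance is not None and not 0 <= self.performance <= 5:
            raise ValueError("performance score must be 0-5")

# Example: adequate evidence (score 4) supports a performance score of 3
area5 = InputAreaScores("Priorities based on needs assessment", 4, 3)
```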
The scores can be presented in radar charts (see Fig. 3a and b). Visual representation of this kind provides an accessible overview that highlights strengths (‘peaks’) as well as limitations and areas requiring further attention (‘dips’). In the study of Liverpool’s AFC initiative, the evidence appraisal scores covered a relatively narrow range at the upper end of the scale (from 3 to 5) (see Fig. 3a). This meant that while there was some room for improvement, the available evidence allowed for a confident assessment of the city’s AFC approach. The picture for city performance was more diverse, with scores ranging from 1 to 4 (see Fig. 3b).
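The radar chart presentation can be sketched as follows: each of the ten input areas is assigned an evenly spaced angle on a circle, and its score determines the radius of the corresponding vertex, so high scores appear as ‘peaks’ and low scores as ‘dips’. The sketch below uses only the standard library and placeholder scores, not the actual Liverpool results.

```python
import math

def radar_vertices(scores, max_score=5):
    """Map a list of 0-5 scores to (x, y) polygon vertices of a radar
    chart: area i sits at angle 2*pi*i/n, and its normalised score
    (score/max_score) sets the distance from the centre."""
    n = len(scores)
    vertices = []
    for i, score in enumerate(scores):
        angle = 2 * math.pi * i / n   # evenly spaced axes
        r = score / max_score         # normalised radius, 0..1
        vertices.append((r * math.cos(angle), r * math.sin(angle)))
    return vertices

# Ten illustrative scores, one per input area (placeholders only)
verts = radar_vertices([3, 4, 5, 3, 4, 4, 5, 3, 4, 5])
```

Any plotting library can then draw the closed polygon through these vertices; the computation itself is independent of how the chart is rendered.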
We applied the evaluation tool to both Liverpool’s overall AFC approach and the city’s work specifically on falls. In both instances, all ten input areas were addressed. The adaptable nature of the tool allows users in other contexts to limit the focus to selected input areas where this is better suited to the nature of an intervention and/or the kind of assessment required, or where resource constraints dictate this.
The tool incorporates a focus on the extent to which existing frameworks for assessing age-friendliness have been used by a city to guide its work (input area 6). In addition, existing frameworks can be applied in combination with the tool. In other words, a city assessment can be complemented by recording the data available in relation to existing frameworks.
Where cities have worked with specific frameworks, these are an obvious choice for use together with the tool. In the case of Liverpool, the city had apparently not utilised existing AFC assessment frameworks in any formal or systematic way. We chose to apply the WHO set of core AFC indicators (WHO Centre for Health Development 2015; see Fig. 4) as part of our assessment of the city’s performance on age-friendliness. Rather than collecting new evidence for this, we recorded the data we had available that seemed broadly relevant to specific indicators (these findings are published in a separate paper on the Liverpool study).
The WHO indicators address issues important for AFCs that are often very specific. The evaluation tool, which incorporates a strong focus on the conditions for an AFC, is designed to capture evidence on a broader scale. While it can accommodate the kind of information relevant to the WHO indicators, this is subsumed within its (broader) input areas, particularly the areas of provision and involvement of older people. Applying the WHO indicators together with the tool can therefore draw attention to specific issues that would be less visible if the tool alone were relied upon.
Logic Models
In addition to an evaluation tool, the fieldwork in Liverpool resulted in logic models (see Fig. 5a and b). A critical function of the logic models was to support the communication of findings: they proved valuable in feeding back the results of the assessments of city performance to local stakeholders. They visualised areas identified as requiring further work as well as areas of strength. At the same time, they facilitated an understanding of the processes and interlinkages through which these areas were anchored in the wider AFC system, and which would need to be considered in future work.
Recommendations for Policy and Practice
The findings from the assessment of Liverpool’s overall focus on age-friendliness and its falls-related work highlight strengths in both areas. They thus draw attention to approaches and characteristics on which Liverpool can build, and from which other cities can learn. At the same time, they identify areas that require further attention. The challenges and gaps that became visible formed the basis for researcher recommendations across the tool’s input areas, rooted in the strengths of the city’s work (published separately). In other contexts in which the approach described is applied, the strengths and gaps identified can inform a city’s priorities for action.