HIV testing is increasingly central to wider approaches to HIV prevention for both those testing positive (Treatment as prevention, TasP) and those testing negative (Pre-exposure prophylaxis, PrEP). Public Health England estimated that 13% of HIV positive men who have sex with men (MSM) were undiagnosed in 2016 [1], and previous research suggests that delays in HIV diagnosis are associated with poorer health outcomes and treatment response, increased mortality and healthcare cost, and increased levels of onward transmission [2, 3]. Current UK guidelines recommend annual testing for all MSM, with more frequent testing (e.g. three monthly) recommended for men at higher risk of HIV infection [4]. However, recent research based on the findings of community-based surveys found that only half of UK gay, bisexual and other men who have sex with men (GBMSM) reported annual testing, with less than one quarter of GBMSM defined as ‘at risk’ of HIV testing more frequently [5]. HIV testing is associated with age, ethnicity and educational attainment, while multiple factors influence testing decisions, including fear of a positive test result and self-perceived risk [6, 7].

HIV testing is a complex behavioural domain and different types of intervention, which focus on increasing knowledge, awareness and access via varied settings, and on reducing barriers to testing, have been attempted to increase HIV testing among different population groups [8]. Mass media interventions are one approach that can be used to increase awareness of testing (others include social media, and one-to-one, opportunistic and group-based information provision) and have increasingly been recognised as a powerful tool for sharing public health messages [8,9,10]. Wei et al. conducted a systematic review exploring the impact of social marketing interventions on HIV/STI testing uptake among MSM and transgender women and found only three studies that met their inclusion criteria. Although these studies seemed to suggest that social marketing interventions could have an impact on HIV testing rates, the authors noted the low quality of evidence and high risk of bias within the included studies [9]. A recent evidence review for the NICE Guideline on HIV testing sought evidence on the cost-effectiveness of interventions that increase awareness, offer and uptake of HIV testing. The review identified just two recent RCTs examining the effectiveness of mass media and communication interventions in increasing HIV testing [10]. Both studies were conducted in the US, with women, and provide only moderate evidence of effectiveness in increasing HIV testing uptake [11, 12]. Our own evaluation of a mass media intervention for GBMSM suggested partial support for the role of such interventions in improving sexual health (men with mid or high intervention exposure were more likely to have tested for HIV in the previous 6 months) [22], but recognised the limitations of such mass media interventions when run without the nuanced targeting that social marketing approaches advocate.
This is an important point: while mass media interventions might not always draw on social marketing techniques, social marketing is recognised as a theoretical approach that can effectively change behaviour [13]. A previous review found that few studies have fully incorporated social marketing criteria, and that there is a need for more rigorous research designs and detailed process evaluation work to identify the social marketing intervention components that are most effective [9], as well as to account for changes in technology and media use in the interim. As a result, it can be challenging to identify the effective components of interventions; i.e., what works, why, for whom and in what circumstances. Addressing these questions is essential to developing effective interventions.

We conducted a systematic review of the effectiveness of social marketing and mass media interventions for HIV testing in MSM published, or included in systematic reviews, since 2010 (updating the last published systematic review [9], from which two of the three original papers were again included [14, 15]) to identify the best quality evidence to guide the development of an evidence-informed, theoretically-based, social marketing intervention to increase regular HIV testing among GBMSM. The review was commissioned by the health provider NHS Greater Glasgow & Clyde (NHS GGC) to inform the development of an intervention with a clear behavioural target, clear audience segmentation and appropriate behaviour change techniques. With regard to behavioural domain, NHS GGC recognised the need for a continued focus on regular and frequent HIV testing, stressing the benefits of knowledge of HIV status. They also wanted to support men (both population-wide and as individuals) to be more open in their conversations about testing, re-testing and HIV status in order to inform sexual decision-making. This project was the first step in working with NHS GGC to develop an evidence-informed social marketing intervention targeting MSM in relation to regular HIV testing.

Methods
This review was registered on the PROSPERO International Prospective register of systematic reviews (CRD42017053451) and is reported in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) statement. The protocol is available at

Search Strategy

CINAHL, Embase, Medline, PsycINFO and Web of Science were searched for articles published between 2009 and 15th November 2016, using standard MeSH search terms for HIV, MSM and social marketing/mass media interventions similar to those used previously [9]. An example of the search strategy applied to Medline is presented in Supplementary File 1. No restrictions were applied to the searches in terms of language or publication type at this stage. In addition to database searches, reference lists of included articles were searched manually and relevant abstracts were checked against the inclusion criteria.

Study Selection

Only studies written in English were included. Results were downloaded into, and de-duplicated in, a database in EndNote 7 (Thomson ResearchSoft). Inclusion and exclusion criteria (Table 1) were applied to screen titles and abstracts by one researcher (A2), with a 10% sub-set validated by another (A4). The inclusion criteria were informed by the rationale for the overall study and framed around the behaviour (HIV testing), the target population (MSM), and the type of intervention to be developed (i.e., social marketing/mass media). Where consensus could not be reached regarding inclusion, a third reviewer (A1) was consulted. Full reports of the selected studies were screened using the same process as before. The full inclusion/exclusion process is outlined in Fig. 1.

Table 1 Inclusion and exclusion criteria
Fig. 1 PRISMA flow chart for study selection

Data Extraction

Data extraction tools were piloted on 10% of the sample and reviewed by all authors. We extracted data on: study identifier (first author, location, year); study design (study type, method of recruitment, duration of follow-up); outcome measures (details of the specificity of the HIV testing domain were also recorded); participant details (number of participants, age); and results. Data extraction was completed by one researcher (A2) with a 10% sample validated by another (A4), and discrepancies and disagreements resolved through consensus or through discussion with other co-authors (A1/A3/A6).

Quality Appraisal

Included studies were quality appraised using standard checklists to identify potential bias [17]. Each paper was assigned a grading for internal and external validity using standard NICE appraisal checklists for risk of bias [17], from high internal validity (all or most of the quality checklist criteria fulfilled) to low internal validity (few or no quality checklist criteria fulfilled). Studies were appraised by one researcher (A2), with a 10% sample validated by another (A4). Four of the included studies did not present sufficient detail for assessment (three were conference proceedings and one presented only illustrative examples of social marketing interventions). Papers were not excluded on the basis of quality.

Data Synthesis
A meta-analysis was not conducted due to the heterogeneity of the study outcomes, designs, methods and samples of the included studies. Instead, a narrative approach was used to analyse the data. The narrative synthesis is supported by summaries of the data extraction in Table 2, which outlines the key characteristics of each study included in the review. Findings of each study are presented in Table 3.

Table 2 Summary of included studies
Table 3 Summary of included study results and intervention effectiveness


Given the heterogeneity of contributing studies, findings were assessed and categorized in terms of effectiveness as follows:

  1. Intervention had a negative effect (i.e. decrease in uptake of HIV testing).
  2. Intervention had no effect.
  3. Intervention had a positive effect on an antecedent of behaviour (e.g. intentions to test or knowledge).
  4. Indicative of some positive desired behaviour change (i.e. some increase in uptake of HIV testing, or an increase in one segment of the population but not all).
  5. Indicative of clear behaviour change in the desired direction (i.e. increase in uptake of HIV testing).

Results
A total of 2748 articles were identified, with an additional two found during manual reference searches. Of these, 242 studies met the criteria for full-text screening, from which 223 were subsequently excluded as not meeting the eligibility criteria, leaving 19 studies included in the review (Fig. 1, Table 2).

Study Characteristics

Of the 19 included studies, seven were conducted within the UK, the majority of these in England (n = 5) [15, 18,19,20,21] and the remainder in Scotland (n = 2) [22, 23]. Five studies were conducted in the United States of America [24,25,26,27,28], three in Australia [14, 29, 30] and the rest in Canada [31], South America [32], Italy [33] and China [34]. Twelve studies recruited participants, with a total sample size of 10,894 (range 50–3092); one study did not report sample size [14]. The average age of participants ranged from 22 to 47 years (nine studies did not provide an average age of participants [15, 18, 20, 21, 23, 24, 28, 31, 34]).

Nine studies were cross-sectional surveys [14, 15, 20, 22, 23, 25, 29,30,31], three were randomised controlled trials [26, 32, 34], two were non-comparative studies [18, 21], two were pre/post studies [24, 33], two were interrupted time series [19, 27] and one was a case study [28]. As a result, the majority did not contain a control group. The 19 studies evaluated 22 separate interventions. A variety of recruitment methods were adopted across the studies, with some combining more than one method (see Table 2). The majority of studies used online methods to recruit participants (e.g. banner adverts, use of online communities, direct emails to website users), eight used clinic visit data, six recruited participants in community venues (e.g. bars) and five recruited through other means (e.g. apps or peer referral). The average response rate within the studies was 48.5% (range 1.9–87.1%, median 62.6%).

The majority of studies included testing as a primary outcome (routinely-collected or self-reported testing data, with some reporting both). Just two studies explicitly measured frequency of HIV testing [19, 31], and four included a measure of recency of previous HIV test [14, 22, 23, 30], whilst the remaining studies measured single instances of self-reported HIV testing, intention to test, or clinic testing rates within a specific time period. Additionally, five studies reported on antecedents of testing (e.g., knowledge or intentions) [22, 23, 27, 29, 32] and six reported on other outcomes (e.g. risk behaviours such as condomless anal intercourse (CAI) and other HIV risk behaviours) [15, 23, 24, 26, 30, 33] (Table 3). Ten studies [14, 15, 18, 20, 21, 25, 27, 29,30,31] used routinely collected data (clinic samples), with a total of 73,704 tests (one study did not report actual numbers [27]). Twelve studies recruited participants [14, 19, 22,23,24, 26, 27, 29, 30, 32,33,34], with a total sample size of 10,894 (again, one study did not report sample size [14]). Four studies merged participant self-reports and routinely collected data [14, 27, 29, 30], with two evaluating the same intervention at different time points [29, 30]. Six of the included studies gathered data only during the intervention period [15, 18, 20, 25, 30], whilst information was unclear about post-intervention follow-up periods for four interventions [28, 29, 32, 33]. Of the nine studies reporting clear follow-up periods, there was a wide variety of time frames. Only three of these studies reported post-intervention follow-up periods over 6 months [14, 22, 31], whilst the remaining studies ranged from 3 weeks [34] up to 6 months [23].

Intervention Content

The purpose and nature of the interventions are reported in Table 2. Of the 19 studies, ten included specific reference to social marketing’s theoretical principles [14, 19, 22, 23, 26,27,28,29,30,31]. Very little could be gleaned from the studies about their behaviour change focus beyond a desire to increase HIV testing. This does not mean that actual mechanisms or techniques to change behaviour were not employed (indeed, the contrary was evident in the BCT and theory coding analyses reported elsewhere), but rather that these were often implicit in the materials employed rather than explicit in the descriptions of them. However, studies generally included detailed descriptions of the nature of the intervention, the provider and the content. Most were delivered online or in gay venues and other community settings; none was delivered via a single medium alone, and most relied on a variety of delivery media, including posters, leaflets and adverts. Three studies reported that delivery was supported by outreach workers or peer educators [22, 25, 28]. Most reported use of an intervention name, brand or logo, and there was a considerable mix of tone (e.g., informative, positive, humorous). Interventions were delivered for up to 14 months, but the studies were less clear on intensity (i.e., the length of time potential users might engage with intervention materials). Overall, the interventions used an array of different imagery, but the majority used photographs as the central image (our visual analysis, reported elsewhere, interrogated audience readings of this and its implications for future intervention design). All but one of the interventions [21] featured actors who could be interpreted as representative of the target audience, implicitly or explicitly identifying actors as MSM.
The interventions were primarily informal and direct in tone and all also featured text of some kind, most frequently phrased as an instruction or statement to convey key messages.

Quality Appraisal by Study Design

Whilst both internal and external validity gradings are reported (Table 2), the current study focuses on internal quality (the robustness of the findings) rather than on how generalizable the findings are (external validity). Quality was appraised for 15 studies, with no studies fulfilling all or most of the checklist criteria for both internal and external validity (see Supplementary file 2). Four of the included studies were graded as showing high internal validity, fulfilling all or most of the checklist criteria of internal validity. Three of these studies used a cross-sectional design [22, 29, 30] whilst the remaining study used an interrupted time series design [19]. Only four studies were graded as low internal validity, fulfilling none or few of the checklist criteria for internal validity; three of these used a cross-sectional or retrospective cohort design [23, 25, 31] and one was a pre/post design [24]. Those that scored poorly on internal validity were largely judged to do so based on a general lack of information about the population (i.e. potential bias in sampling), lack of information around selection of participants and confounding variables (and comparison group), lack of detail regarding reliability of measures used (i.e. self-reported testing/unvalidated measures) and a lack of detail regarding other factors that may influence the effectiveness of an intervention (i.e. information relating to intensity of exposure to the intervention).

Effectiveness of Interventions by Study Design

An overview of results and the relative effectiveness of interventions across the categories of effectiveness defined above is provided in Table 3. Seven studies reported results that were indicative of behaviour change in the desired direction (i.e. an increase in testing) [15, 22, 23, 28, 29, 31, 34]. An additional five [18,19,20, 25, 32] reported results indicative of some positive desired behaviour change: an increase in one segment of the population targeted but not all [32]; an increase in the proportional representation of target populations in clinic samples [25]; or an increase in requests for self-sampling HIV tests [18, 20]. In one study, the increase in HIV testing was no longer statistically significant after adjusting for key demographics, sexual and testing history, and exposure to other health improvement interventions (rate ratio 1.11, 95% CI 0.85–1.45, p = 0.45) [19]. Two studies reported that the intervention had an effect on the antecedents of behaviour (e.g. knowledge of, or intentions for, testing) [21, 27] and the final five reported that the intervention had no effect [14, 24, 26, 30, 33]. None of the included studies reported a negative effect.

Looking at effectiveness in relation to study design, of the three RCTs included within the current study, one was indicative of clear behaviour change in the desired direction [34], one was indicative of some positive desired behaviour change [32] and the final study showed no effect [26]. All were graded as medium internal validity, fulfilling at least some but not all of the checklist criteria for internal validity.

Of the nine cross-sectional or retrospective cohort studies, one study which was indicative of clear behaviour change in the desired direction was graded as high internal validity [22]. The other two studies graded as high internal validity used different analytical techniques and timescales to assess the impact of the same intervention (Drama Down Under), with different results [29, 30]. The first suggested that there was some initial evidence of an increase in testing across the duration of the intervention [29], but the latter, when incorporating insights from more recent data sets, concluded that the increase in HIV testing suggested a continuation of temporal trends rather than more frequent testing among men [30].

Two of the cross-sectional studies were graded as medium internal validity [14, 15], although the results for these were mixed, with McOwan et al. suggesting results indicative of clear behaviour change and Guy et al. suggesting no effect. Three cross-sectional or retrospective cohort studies were graded as low internal validity and yet reported results indicative of clear behaviour change [23, 31] or some positive desired behaviour change [25]. The final cross-sectional study could not be assessed for internal validity due to insufficient study detail, although its results did indicate some positive desired behaviour change [20].

The two pre/post studies included showed no intervention effect, although internal validity varied, with the first graded as medium internal validity [33] and the second graded as low internal validity [24]. Both studies using the interrupted time series design had results indicative of some positive desired behaviour change [19] or an effect on the antecedents of behaviour [27]. These studies were both graded positively in terms of internal validity, with Hickson et al. graded as high internal validity and Solario et al. graded as medium internal validity.

Studies using the non-comparative design could not be assessed for internal quality due to insufficient reported methodological detail, although their results did indicate some positive desired behaviour change [18] or an effect on the antecedents of behaviour [21]. Finally, the current study was unable to assess the internal validity of the Thackeray et al. (2011) case study, although its results were indicative of clear behaviour change in the desired direction.

Discussion
This systematic review has examined the effectiveness of contemporary social marketing and mass media interventions for HIV testing with GBMSM. Our review has demonstrated that there is now a growing body of evidence for the effectiveness of social marketing and mass media interventions to increase HIV testing among GBMSM. However, there was heterogeneity of interventions, study quality was mixed and few studies adopted the most rigorous designs. Of the seven studies reporting an increase in HIV testing, five were cross-sectional studies (two graded as high internal validity, one medium and two low), one was an RCT (medium internal validity) and one a case study (unable to be assessed for validity). This speaks to the challenge of evaluating this particular type of intervention. Within the context of the limitations of general effectiveness reviews, we need to know what works, for whom, when and how. Further details relating to the specific content of the interventions can be found in forthcoming papers relating to an analysis of mechanisms of change [35] and the social marketing and visual design components of the interventions [36]. By reviewing the key processes involved in mass media consumption, and examining the role of theory and behaviour change techniques employed in message delivery, we have achieved a high-quality integration of multi-source data from different theoretical perspectives. In this way we have optimized the potential content of social marketing interventions to increase HIV testing in evidence-based and theoretically-informed ways. A detailed logic model setting out the key components of social marketing, visual design and theoretical mechanisms of behaviour change that the overall review suggests are required as inputs for an intervention is shown in Fig. 2.

Fig. 2 Logic model for an evidence-informed, theoretically-based, social marketing intervention to increase regular HIV testing among GBMSM

Our review is the first to explore patterns between study type (RCT, cross-sectional, pre/post or cohort designs), internal validity and intervention effectiveness. Seven of the 19 studies reported results indicative of an increase in HIV testing and another five reported results indicative of some positive desired behaviour change. Previous reviews demonstrated a lack of evidence on the effectiveness of social marketing and mass media interventions to increase HIV testing among GBMSM. The 2011 Cochrane review included just two studies in its final analysis [9], while the more recent evidence review for the NICE Guideline on HIV testing identified a further two recent RCTs, neither of which was conducted with GBMSM [10].

The 2011 Cochrane review called for more rigorous research designs and detailed process evaluation work to identify the social marketing intervention components that are most effective [9]. However, the studies included in our review were of relatively poor quality, with most designs being cross-sectional and only three RCTs included. While two RCTs had results that were either indicative of behaviour change [34] or some positive desired behaviour change [32], the latter was judged to be of poor study quality. Our findings, unsurprisingly, suggest that the study designs, analytical techniques and timescales used to assess the impact of interventions can influence interpretations of effectiveness. Changes in testing rates across a population or in cross-sectional studies might not be the result of the intervention, but instead indicative of temporal trends, and may be affected by a variety of other factors. This is particularly evident in the competing conclusions on the effectiveness of one intervention reached by two studies using different analytical techniques [29, 30]. Our review speaks to the challenge of evaluating this particular type of intervention, which has been discussed previously [22]. The lack of RCTs identified may be indicative of the difficulty of using this research design in the evaluation of population-level social marketing interventions. There is a need to consider and explore the potential for the development and use of alternatives, such as natural experiment designs, which are appropriate when evaluating population-level and policy interventions [37], in order to overcome barriers associated with wide population reach and exposure.

We found no qualitative studies or process evaluations, despite the importance of these to inform the design and implementation of future interventions. This demonstrates the need to further evaluate the social marketing and theoretical behaviour change content of interventions simultaneously. We had limited access to information regarding other factors that may have influenced effectiveness, e.g. context of delivering the intervention; knowledge of existing HIV interventions; changes to services and political/cultural setting in the locations in which interventions were delivered. Whilst we acknowledge that some of these factors may be impossible to control for within a real world setting, it is crucial that we consider this within intervention evaluations [38]. Detailed intervention development studies and accompanying process evaluations are needed in the future to enable consideration of this context in understanding intervention delivery.

Strengths and Limitations

We have conducted a rigorous search and systematic review accompanied by a narrative synthesis, updating and adding to previous work, providing a valuable contribution to the field. Whilst one of the included interventions [22] was conducted by authors involved in this review, we are confident that the data extraction process limited the potential for bias when reviewing its effectiveness, as those authors were not involved in the data extraction for that study. The overall quality of evidence was relatively low and thus our findings should be interpreted with caution. As noted earlier, this is a consequence of the difficulty of evaluating population-level social marketing interventions. Despite this, we are confident that the methods adopted in the current study have contributed to a robust synthesis of existing evidence. We also did not conduct an analysis of the cost-effectiveness of the reviewed interventions, which is an important component for future evaluations. Our review focused on mass media, social marketing, multimedia, major poster and leaflet and radio interventions, and combinations of these. Review of alternative media, particularly social media or social networking sites, was beyond the scope of the work we were commissioned to undertake, but this is worthy of further research [39, 40]. It is also important to note that all of the included studies were conducted prior to the introduction or availability of PrEP for HIV prevention. This has dramatically altered the context of HIV testing, presents issues for the transferability of previous interventions to future contexts, and will need to be considered within any future intervention development. Whilst few studies reported measures relating to the frequency and/or recency of HIV testing, this may reflect a more recent emphasis on increasing and measuring HIV testing frequency.
Future evaluations need to factor in appropriate timeframes to allow accurate measurement of changes in HIV testing frequency post intervention.

Conclusions
HIV testing is not a simple behavioural domain and there are important differences in the ways people think about testing and the antecedents to testing. Testing decisions, for example, are very different across testing scenarios such as testing for the first time, testing in relation to a high-risk event [14], testing as part of regular check-ups, or testing to access PrEP [41]. The public health gain of each is equally distinct. Interventions should be designed to accommodate the diverse antecedents of decisions to test. Significant knowledge gaps remain in relation to such segmentation, the means of increasing the frequency of HIV testing, and the maintenance of appropriate testing patterns over time.

To consolidate the individual and public health benefits presented by HIV testing interventions, these interventions should be considered in relation to the full range of technological, psychosocial and sociocultural contexts of HIV testing [41]. The increasing diversification and technological variation of the tests available (point of care, self-sampling or self-testing) demands systematic consideration of the right test for particular circumstances and sub-populations (i.e., permutations of audience segmentation regarding, for example, previous testing history and perceived likelihood of positive results). Intervention development should consider presenting the range of HIV tests according to individual circumstances. Despite the growing body of evidence for the effectiveness of social marketing/mass media interventions to increase HIV testing that we have demonstrated here, there remains a need for well-designed, high-quality, robust and innovative evaluations, with accompanying process evaluations, to allow for better clarity in identifying the social marketing and mass media interventions best able to increase appropriate HIV testing among GBMSM.