Acceptability of online exercise-based interventions after breast cancer surgery: systematic review and narrative synthesis

Purpose eHealth and mHealth approaches are increasingly used to support cancer survivors. This review aimed to examine adherence, acceptability and satisfaction with Internet-based self-management programmes for post-surgical cancer rehabilitation and to identify common components of such interventions.
Methods Nine electronic databases were searched from inception up to February 15, 2020, for relevant quantitative and qualitative studies evaluating Internet-based cancer rehabilitation interventions. Studies were required to include an exercise or physical activity-based self-management intervention and a measure of adherence, acceptability or user satisfaction with the programme. Two independent reviewers performed all data extraction and quality assessment procedures. Data were synthesized using a narrative approach.
Results Six hundred ninety-six potential papers were identified and screened. Eleven met the inclusion criteria. Interventions showed wide variations in levels of adherence, but the majority were reported as acceptable to users. Increased acceptability and user satisfaction were associated with interventions that were seen as time- and cost-efficient, required the acquisition of minimal or no new skills, used coherent language, or provided tailored information. The majority contained behaviour change components such as goal setting.
Conclusions Despite high levels of heterogeneity between studies, Internet-based approaches may be an acceptable method for the delivery of self-management interventions in post-surgical cancer rehabilitation.
Implications for Cancer Survivors There is a need for further studies exploring factors associated with increased user engagement and usage of digital interventions in cancer rehabilitation settings. These findings should be used to help develop interventions prior to testing their effectiveness in adequately powered randomized controlled trials.


Introduction
Despite increasing incidence, cancer survival rates have doubled in the past 40 years [1]. More than 40% of people undergo surgical interventions as part of their primary cancer treatment [2] and, while many are able to return to pre-diagnosis occupations and lifestyles [3][4][5], treatment-associated side effects are common [6]. These include functional and musculoskeletal issues such as loss of muscular and cardiac fitness, fatigue, impaired motor and sensory function and lymphoedema [6]. The literature indicates that multidisciplinary cancer rehabilitation is a key element of the care that cancer patients receive and can minimize these effects, aiming to reduce long-term complications and hospital re-admissions and to improve quality of life [7][8][9]. Cancer rehabilitation assists individuals to achieve the best possible physical, psychological, social and vocational outcomes [10]. A multidisciplinary team approach which anticipates the needs of cancer survivors in a timely, coordinated and continuous manner from the time of diagnosis is recommended [10]. Worldwide policy drivers for patient empowerment during cancer treatments emphasize the need for self-management and person-centred interventions to address unmet care needs [11]. Studies suggest that approximately 40% of patients report at least one unmet need for rehabilitation services, both in the immediate recovery period and in the longer term [12][13][14]. A large-scale cross-sectional survey also showed that 63% of cancer survivors had a need for at least one type of rehabilitative service, with physiotherapy and physical training being the most often required (43% and 34%, respectively) [15]. The Internet is a powerful medium for providing accessible and low-cost resources to address unmet support needs in cancer survivorship.
Although increasing, the number of these resources is relatively small and there is minimal evidence describing users' experience of accessing them [16,17]. Engagement with interventions, facilitators and barriers to their use, and users' views on their acceptability therefore need further examination [16][17][18]. Yardley and colleagues [18] suggest there is a clear distinction between effective engagement with an online intervention, which leads to desired outcomes and behavioural change, and a minimal level of engagement, which might not necessarily effect change. Further evidence suggests a number of factors are associated with poor user engagement, including the provision of standard information rather than more specialist support and personalized information [19,20]. Engagement can also be limited by barriers such as lack of experience with using online resources and by usability issues [21,22]. To inform future research in this area, the aim of this review was to comprehensively examine adherence, acceptability and satisfaction with exercise-based online self-management programmes for post-surgical cancer rehabilitation and to identify common components of such interventions.

Study design and search strategy
This systematic review (PROSPERO registration number: CRD42018107411) was conducted using a predefined protocol developed according to the recommendations of the Cochrane Collaboration Handbook [23] and the Preferred Reporting Items for Systematic Reviews and Meta-analysis (PRISMA) guidelines [24]. Nine electronic databases (The Allied and Complementary Medicine Database (AMED), The Cochrane Library, The Cumulative Index to Nursing and Allied Health Literature (CINAHL), Excerpta Medica Database (EMBASE), Medical Literature Analysis and Retrieval System Online (MEDLINE), Physiotherapy Evidence Database (PEDro), ProQuest Medical Library, Pubmed and Scopus) were searched from inception to February 15, 2020. Figure 1 presents a copy of the search syntax for the Ovid-EMBASE database to facilitate replication of the search. To identify articles not indexed in the searched databases, grey literature was searched using Google Scholar, and the reference lists of relevant articles in the field were searched manually. The search strategy included a list of concepts (Internet, self-management, exercise, cancer, surgery, response to intervention), with an extensive list of associated keywords and MeSH terms (Table 1). The "explode" command, the truncation symbol (*) and Boolean terms (AND, OR) were applied in order to combine the different search concepts. Two independent reviewers (MS and IW) screened identified titles and abstracts before screening full-text copies of potentially relevant articles based on the inclusion and exclusion criteria outlined below. Study inclusion was agreed by both reviewers, with a third reviewer (LR) consulted to resolve any disagreements (see Fig. 2 for PRISMA flow diagram).

Inclusion and exclusion criteria
To be included, studies were required to meet the following criteria:
1. Quantitative or qualitative study design
2. Included adult participants (aged 18 or over), with at least 2/3 of the study sample having received surgical intervention for any type of cancer
3. Included an Internet-based, self-management intervention involving any form of exercise or physical activity (e.g., walking, cycling)
4. Included at least one measure related to adherence, acceptability and/or satisfaction with the intervention
As the majority of the authors were not multilingual and it was beyond the resources of the team to involve a translator, it was decided to restrict the studies examined to those published in English.

Definitions
For the purposes of this article, the following definitions were used in order to avoid ambiguity when defining Internet-based interventions, adherence, acceptability and satisfaction.

Internet interventions
Bennett and Glasgow [25] define these as "Systematic treatment/prevention programs, usually addressing one or more determinants of health (frequent health behaviours), delivered largely via the Internet (although not necessarily exclusively Web-based), and interfacing with an end user. These interventions are typically highly structured, mostly self-guided, interactive, and visually rich, and they may provide tailored messaging based on end-user data".

Adherence to a treatment modality
Adherence has been defined by Kelders and colleagues [26] as "The extent to which the patient's behaviour matches the recommendations that have been agreed upon with the prescriber". In the context provided by the authors of this definition, "the patient's behaviour" refers to usage (or non-usage) of the intervention and whether this matches the usage intended and recommended by the intervention creators [26].

Treatment acceptability
Sekhon and colleagues [27] define acceptability as "A multi-faceted construct that reflects the extent to which people delivering or receiving a healthcare intervention consider it to be appropriate, based on anticipated or experienced cognitive and emotional responses to the intervention". For the purposes of this article, this definition describes emotional or cognitive responses to an intervention that may or may not involve usage of the intervention. In the context provided by its authors, it includes users' perceptions of treatment acceptability both before and as a result of participating in or using a treatment intervention [27].

User satisfaction with web-based health interventions
Bob and colleagues [28] define user satisfaction with web-based health interventions as "Satisfaction is a user's evaluation of the received Web-based intervention". For the purposes of this article, this definition of satisfaction does not describe, and is therefore distinguished from, the emotional or cognitive reaction to an intervention. It describes only the evaluation processes that intervention users might undergo during or after intervention usage in order to approve or disapprove of a given intervention [28].

[Table 1 (search concepts - Internet, self-management, exercise, cancer, surgery, response to intervention - with their associated keywords and MeSH terms) appears here.]

Self-management interventions
"Interventions that aim to equip patients with skills to actively participate and take responsibility in the management of their chronic condition. This includes knowledge acquisition, and a combination of at least two of the following: (1) stimulation of independent sign and/or symptom monitoring; (2) medication management; (3) enhancing problem-solving and decisionmaking skills for treatment or disease management; (4) or changing physical activity, dietary and/or smoking behaviour" [29].

Data extraction and methodological quality assessment
Predefined data extraction tables were used to summarize study designs and main characteristics (Table 2), participant characteristics (Table 3), types and features of interventions (Table 4) and the main study findings (Table 5). Methodological quality was assessed using the standard quality assessment criteria for evaluating primary research papers from a variety of fields by Kmet and colleagues [30], which consist of two separate quality assessment scales for qualitative and quantitative studies (Tables 6 and 7). The quality of the studies was rated according to the scoring that Lee et al. [31] and Maharaj and Harding [32] used in their similarly designed reviews. Study quality was rated according to accepted scoring methods and cut-offs, with summary scores > 80% defined as "strong", 71-79% as "good", 50-70% as "adequate", and scores < 50% indicating "poor" or limited quality. Data extraction and quality assessment were conducted by at least two independent reviewers (MS and IW or LR), and the inter-rater level of agreement between the reviewers was evaluated using Cohen's kappa [33] and Cohen's weighted kappa values [34]. Studies were not excluded from the synthesis based on quality scores, which were instead used to interpret the findings of the review.
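The quality-appraisal arithmetic described above (Kmet-style summary scores, where "not applicable" items are dropped from the possible total, and unweighted Cohen's kappa for inter-rater agreement) can be sketched as follows. This is an illustrative sketch only, not the review's code; the item scores and ratings are hypothetical.

```python
def kmet_summary_score(item_scores):
    """Kmet et al.-style summary score: each checklist item is scored
    2 (yes), 1 (partial) or 0 (no); None marks a not-applicable item,
    which is excluded from both the sum and the possible total."""
    applicable = [s for s in item_scores if s is not None]
    return sum(applicable) / (2 * len(applicable))

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two raters' categorical ratings:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 14-item quantitative checklist with one N/A item:
score = kmet_summary_score([2, 2, 1, 2, None, 2, 0, 2, 1, 2, 2, 2, 2, 2])
# score is about 0.85, i.e. "strong" under the > 80% cut-off.

# Hypothetical per-article quality labels from two raters:
kappa = cohens_kappa(["s", "s", "g", "p", "s", "s"],
                     ["s", "g", "g", "p", "s", "s"])
```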

Analysis and synthesis of the results
A narrative approach [35] was used to synthesize the characteristics and key findings of the included studies. Two of the reviewers categorized and agreed each included study as quantitative, qualitative or mixed-methods, based on the study design definitions presented by the study authors and on whether the findings were quantitative, qualitative or mixed [35].

Results
A total of 696 records were identified and 41 underwent full-text review. Eleven studies, published between 2013 and 2018 and with a total sample size of n = 965, met the inclusion criteria and were included in the synthesis. Five studies were conducted in the Netherlands, three in the United Kingdom, two in South Korea and one in the United States. There were three randomized controlled trials (RCTs) presented across five different articles [36][37][38][39][40] (predominantly quantitative study designs [36,37,39,40] and one mixed-methods RCT and process evaluation study [38]), three feasibility studies, all with quantitative designs [41][42][43], one qualitative early user-testing study [44] and two evaluation studies [45,46] with qualitative [45] and mixed-methods [46] designs. The studies with quantitative designs, which constituted the majority in this review (n = 7), used a single-group feasibility design [43], an RCT [36,37,39], a pilot RCT [40], a pre- and post-test feasibility study [41] and a randomized parallel-group feasibility study [42]. The two studies with entirely qualitative designs and findings [44,45] used focus groups [44] and in-depth interviews [45] to conduct qualitative testing [44] and evaluation [45] of their interventions. The two studies that adopted mixed quantitative and qualitative methodologies [38,46] conducted process [38] and formative [46] evaluations and hence provided both types of data, with quantitative data predominating. The most commonly used tools for quantitative data collection across the studies were study-specific surveys or questionnaires, validated outcome-specific tools (occasionally adapted and/or translated into the participants' language), semi-structured telephone interviews, usage data and self-reported questionnaires. Qualitative data were mainly collected using telephone interviews, open-ended questions and an evaluation survey. Lee et al. [46] used qualitative semi-structured interviews to obtain qualitative data during intervention development and questionnaires with 7-point scales to obtain quantitative data for process evaluation.

[Table content spilled here: focus-group themes ("Awareness that exercises are ongoing", "Lacking or inconsistent advice", "Gaps in care pathway and follow-up", "Need for more directions or physiotherapy") and the caption of Table 7 (breakdown of quality appraisal scorings and inter-rater agreement kappa and weighted kappa values for the quantitative and mixed-methods studies [30]).]

Demographic characteristics of included studies
Sample sizes (total n = 965) varied greatly: from 13 participants in one qualitative study [44] to 462 participants in an RCT described in three articles [36,37,39]. The sample sizes of the two qualitative studies [44,45] (N = 13 and N = 19, respectively) were much smaller than those of the studies with quantitative designs. However, even within the quantitative studies, variation by study type was noted. The three quantitative feasibility studies [41][42][43] had relatively small sample sizes of N = 68, N = 71 and N = 38, compared with the significantly larger samples in the RCTs of N = 462 [36,37,39] and N = 159 [38]. Noticeably, the pilot RCT by Lee and colleagues [40] also had a relatively small sample size (N = 59) compared with the other RCTs included in this review.
Participants across seven out of the 11 articles (described in detail in Table 3) were predominantly females (Median: 80%, IQR: 20%), with two studies [40,44] having entirely female populations. The only study with a male majority of participants was by Cnossen et al. [43], where 76% were men. Three articles [41,42,46] did not explicitly report the gender of their participants. Participants across all featured studies had a mean age of 53.2 years with the youngest participants with a mean age of 41.5 years [46] and with the oldest participants' mean age of 65 years [43].
The most prevalent cancer diagnosis among participants was breast cancer (BC). In five of the 11 studies [40-42, 44, 46], all participants had BC and had received various types of breast surgery (see Table 3). Only one study [43] included participants who all had a cancer other than BC (laryngeal cancer) and had received head and neck (HAN) surgery. The studies by Foster et al. [38] and Myall et al. [45] included participants with seven and five differing types of cancer (with the relevant surgeries), respectively; BC was again the most prevalent. Kanera et al. [36], Kanera et al. [39] and Willems et al. [37] reported on the same RCT participant sample, the majority (70.5%) of whom had BC. Four studies [36,37,39,41] imposed a minimum of 4 weeks since surgery or other treatment as an inclusion criterion. Three studies [38,40,45] had no minimal time threshold since surgery or treatment. Paxton et al. [42] and Lee et al. [46] had no upper time limit since initial cancer diagnosis or treatment, whereas Foster et al. [38] and Myall et al. [45] set a 5-year maximum period since diagnosis for inclusion. Except in the studies by Harder et al. [44] and Lee et al. [46], all participants had completed, or were not receiving, radiotherapy and/or chemotherapy. Cnossen et al. [43] did not exclude concurrent radiotherapy and/or chemotherapy but did not report participants undergoing such treatment.
All interventions but one [44] were Internet web-based and participants accessed them via a web browser. Only the intervention by Harder et al. [44] was a downloadable mobile application. The intervention periods varied: the shortest was 1 week of intervention usage [41] and the longest 6 months, described by Kanera et al. [36,39] and by Willems et al. [37]. The most common intervention duration was 12 weeks, noted across the two interventions by Lee et al. [40,46] and by Paxton et al. [42].
Topic-wise, one intervention aimed to raise participants' general awareness and knowledge about cancer, its treatment and supportive services [41], while another [43] provided specific advice about laryngeal cancer and its aftermath. Harder et al. [44] designed their intervention specifically for upper limb exercising after BC surgery. The single intervention "RESTORE", written up in the two studies by Myall et al. [45] and Foster et al. [38], was specifically about coping with fatigue. The most common combination of topic modules concerned a healthier diet and increased levels of physical activity (PA), included in the two interventions by Kanera et al. [36,37,39] and by Lee et al. [40,46]. Table 4 presents a breakdown of the features of the interventions across the studies and their durations. Many of the articles reported common intervention features: for instance, all interventions included password-restricted login access, specific or non-specific exercise programmes or advice, and images and visual graphics. All but one [43] offered automated and individually tailored progress feedback, notifying users of achieved goals and supporting self-regulation while using the interventions; for instance, the intervention by Kanera et al. [36,39]/Willems et al. [37] provided personalized feedback on dietary behaviours against pre-set goals. Unlike the other interventions in this review, the two described by Foster et al. [38]/Myall et al. [45] and by Cnossen et al. [43] did not offer tailored educational information or online self-evaluation of progress.
Tailored educational information was usually provided through automated personalization of advice and educational content, based on user information supplied before or during intervention use and aiming to correspond to users' needs; for instance, tumour-specific BC educational information for intervention users who had had BC [41]. Features for self-evaluation of progress were usually tools with self-ticking options for self-monitoring within the intervention [42], or web-based surveys for self-reporting progress outcomes to the research team [40,46]. Other features were printable results [41], automated phone calls with a coaching session and achievement rewards [42], automated telephone text messages [40,46], a "frequently asked questions" section [44], video animations [43] and videos with healthcare professionals and/or educational information and advice [36,37,39]. The interventions by Foster et al. [38], Paxton et al. [42] and Myall et al. [45] released their contents weekly. Foster et al. [38], Harder et al. [44], Lee et al. [40,46] and Myall et al. [45] involved the use of a diary. Additional signposting information was provided in most interventions: Foster et al. [38], Harder et al. [44], Kanera et al. [36,39], Melissant et al. [41], Myall et al. [45] and Willems et al. [37].

Quality assessment and inter-rater reliability
All included studies were rated as having "good" or "strong" methodological quality. The combined qualitative and quantitative quality scores ranged from 75 to 100% (median score: 92%, IQR: 17.5%). Tables 6 and 7 show the quality scorings for each criterion for all qualitative and quantitative design studies, respectively, and the inter-rater levels of agreement. A substantial level of agreement between the assessors was achieved on seven of the 11 included articles. When the calculations were adjusted with weighted kappa values, the raters achieved "almost perfect" agreement on seven of the 11 articles, substantial agreement on one article and slightly lower, but still moderate, agreement on two articles (Tables 6 and 7).
Two articles [44,45] were assessed with the qualitative checklist and achieved scores of 75% (implying good methodological quality) and 80% (implying strong methodological quality), respectively. Both articles fully satisfied six of the ten quality criteria (Table 6). However, neither article presented evidence of verification procedures to support the credibility of its qualitative results.
Nine articles [36][37][38][39][40][41][42][43]46] were assessed with the quantitative checklist and achieved quality scores ranging between 75% and 100% (median score: 92%, IQR: 12%). Table 7 shows that all articles achieved scores indicating "strong" methodological quality (> 80%), apart from the two articles by Lee and colleagues [40,46], which were categorized as having "good" methodological quality. All articles achieved maximum scores on four of the criteria. In all but one RCT [40], the nature of the study designs precluded subject blinding and that criterion was marked as non-applicable. Although participant blinding was deemed possible and was attempted in Lee et al. [40], by not informing participants whether they were allocated to the intervention or control group, there was no evidence that it was achieved, since the authors acknowledged that some participants might have guessed that the WSEDI intervention was the one being tested [40]. The study design allowed possible blinding of the investigators in five of the articles [36][37][38][39][40]; however, only Foster et al. [38] presented evidence of appropriate investigator blinding procedures. One article [46] failed to report the recruitment process and the gender of its participants.

Main outcomes of interest
The three main outcomes of interest (adherence and usage, acceptability and satisfaction) were analysed across all 11 included studies wherever they were reported, irrespective of whether the methodology and findings were quantitative, qualitative or mixed. Findings concerning adherence and usage were analysed in all studies except the qualitative-only study by Myall et al. [45], in which this outcome was not described. Outcomes for acceptability were described in one of the two studies with mixed qualitative and quantitative methodologies and findings [38], in both qualitative studies [44,45] and in one of the seven quantitative studies [40].

Adherence and usage
Adherence to the interventions was measured and described in all articles except Myall et al. [45], predominantly by tracking login and usage data or through self-reported measures (Table 5). Adherence levels across the included articles were generally high, but the longer the intervention period and follow-up lasted, the lower the adherence levels were. Follow-up periods varied between 1 week [41] and 12 months in Kanera et al. [36]. Adherence was mainly reported as percentages and varied from 10.1% at 6 months [39] to 100% at 8 weeks in Harder et al. [44]. Most studies had predefined cut-off levels of adherence [36][37][38][39][40][41][42]: Foster et al. [38] considered participants adherent if at least two of five modules were accessed, while Kanera et al. [36] required at least three pages to be accessed within each module.
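The usage-tracking approach described above can be sketched as a simple classification against a predefined cut-off. The usage log below is hypothetical; the rule mirrors the Foster et al.-style criterion of accessing at least two of the intervention's modules, but individual studies operationalized adherence differently.

```python
# Hypothetical usage log: participant id -> set of module ids accessed.
# Real studies derived this from login and page-view data.
usage_log = {
    "p01": {1, 2, 3},
    "p02": {1},
    "p03": {1, 2, 4, 5},
    "p04": set(),
}

def is_adherent(modules_accessed, minimum_modules=2):
    """Adherent if at least `minimum_modules` modules were accessed
    (cf. Foster et al.'s two-of-five-modules cut-off)."""
    return len(modules_accessed) >= minimum_modules

# Proportion of participants meeting the cut-off:
adherence_rate = sum(is_adherent(m) for m in usage_log.values()) / len(usage_log)
```

With this toy log, two of the four participants meet the cut-off, giving an adherence rate of 50%.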

Acceptability
Acceptability was measured in four studies [38,40,44,45] describing three interventions. Across these studies, the majority of participants gave positive feedback and opinions of the interventions they were using, finding them acceptable and beneficial and reporting positive behaviour and lifestyle changes (Table 5). Foster et al. [38] assessed acceptability by exploring participants' perceptions of the intervention timing, the attrition rate (36%), identified benefits from participation, adherence levels (71%) and preferred mode of access (50% preferred using the RESTORE intervention along with a leaflet). Harder et al. [44] assessed the acceptability of their intervention by exploring its usability and attractiveness during a focus group discussion, whereas Lee et al. [40] measured participation in the programme during the intervention period (89%). Myall et al. [45] determined acceptability through telephone interviews, finding that participants benefited from using the intervention and that this resulted in positive lifestyle behaviour change for the majority of participants.

Satisfaction
Satisfaction with the intervention was reported in three articles [41][42][43] using different outcome measures (Table 5) and was predominantly evaluated positively by intervention users. Only Melissant et al. [41] reported a negative satisfaction finding: the net promoter score (NPS) was − 36 (range: − 100 to + 100; the NPS reflects how many intervention users would promote it to others, how many would not, and how many would take a passive stance and do neither, and is considered positive when promoters outnumber detractors). Their other satisfaction outcomes were positive: the mean score for satisfaction with the intervention was 6.9 out of 10, and 7.6 out of 10 for the specific BC module. The "Learn", "Self-care advice" and "Act" modules were each viewed by more than 50% of the participants. Cnossen et al. [43] measured satisfaction with the overall intervention (84%), user-friendliness (74%), overall satisfaction (66%) and a net promoter score (NPS = +5). Paxton et al. [42] used a 5-point Likert scale to measure overall satisfaction and satisfaction with the intervention components. They also found that 97% of their respondents would recommend the intervention, with the most popular component being "Educational Information" and the least popular "Functionality". However, no significant between-group differences in overall satisfaction were found.
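The NPS arithmetic can be made concrete with a short sketch. This uses the conventional NPS cut-offs (promoters rate 9-10, detractors 0-6, passives 7-8 on a 0-10 "would you recommend?" scale); the ratings are hypothetical, and the individual studies may have computed the score with slight variants.

```python
def net_promoter_score(ratings):
    """NPS from 0-10 recommendation ratings, using the conventional
    cut-offs: % promoters (9-10) minus % detractors (0-6). Passives
    (7-8) only dilute the score. Result ranges from -100 to +100."""
    n = len(ratings)
    promoters = sum(r >= 9 for r in ratings) / n
    detractors = sum(r <= 6 for r in ratings) / n
    return round(100 * (promoters - detractors))

# Hypothetical samples: equal promoters and detractors give 0;
# a detractor-heavy sample gives a negative score, as in Melissant et al.
balanced = net_promoter_score([10, 9, 8, 7, 6, 3])   # -> 0
negative = net_promoter_score([9, 7, 5, 4, 3])       # -> -40
```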

Secondary outcomes of interest
Moderating factors and associations affecting adherence, acceptability or satisfaction Moderating factors and associations affecting intervention adherence, acceptability or satisfaction were reported in two studies [36,43]. Kanera et al. [36] found that younger participants (age < 57 years) used the intervention significantly more, indicating that younger age, unlike gender, education level and use of the physical activity (PA) module, positively affected intervention use at 6 months (p = 0.040) and at 12 months (p < 0.001). This moderation effect was also confirmed by secondary analyses. Cnossen et al. [43] did not conduct moderation analyses; however, they found statistically significant positive associations between satisfaction with their intervention and both education level (p = 0.004) and health literacy skills (p = 0.038), i.e. the higher the education level and health literacy skills, the higher the satisfaction with the intervention.

Barriers and facilitators to intervention usage
Barriers and facilitators to intervention usage and adherence were explored by Lee et al. [46], Melissant et al. [41] and Myall et al. [45] through (telephone) semi-structured interviews and surveys. Identified barriers were the intervention being too extensive [41], lack of time, the need to acquire new skills and negative impacts from cancer memories [45]. Common facilitators of usage were the intervention-generated well-being score being similar (41%) to participants' own perceptions [41], and the use of accessible, easy-to-understand language within the intervention [45]. Lee et al. [46] also measured perceived ease of use and reported that their intervention was perceived as easy to use and understand, with a mean usability score of 81.3/100 (SD = 20). Paxton et al. [42] found no significant between-group difference for another self-reported outcome, perceived effectiveness of the intervention, meaning that the PA and Diet groups who used the online intervention perceived it to be similarly effective (Table 5).

Suggestions for improvement
Suggestions for improving the interventions included additional demonstration videos or sections with frequently asked questions [44]; more specific information and precautionary advice [46]; quicker postoperative access to the intervention; an improved intervention interface; and equal opportunities to access the intervention regardless of social, economic and geographical factors [45].

Discussion
The aim of this review was to evaluate the current literature and explore adherence, acceptability and satisfaction with Internet self-management interventions for cancer rehabilitation after surgery, and whether intervention features or other factors affected these outcomes. The studies included in this review were classified as having "good" to "strong" methodological quality. The evidence suggested that participants were more likely to be satisfied with, accept and adhere to interventions that were time and cost-efficient, required the acquisition of minimal or no new skills, were presented in coherent language, were offered as soon as possible after cancer treatment, and contained essential precautionary and educational information relevant to and tailored for the individual user. These findings are supported by another systematic review of web-based interventions for symptom management in cancer patients by Fridriksdottir et al. [47]. These authors reported that web-based interventions can have a positive effect on cancer symptom management, provided that the interventions are timely and include evidence-based information, tailored feedback and self-management components.
Adherence to the interventions varied widely across studies. Adherence was significantly better for interventions whose content had been personally chosen by the users [42], and for interventions providing personalized [40,43,45] or tailored [41,44] information. A similarly wide range of adherence to healthcare web-based interventions, related to intervention duration, was also noted by Kelders et al. [26] in their systematic review. Their results showed that all interventions with adherence above 80% lasted between 6 and 21 weeks, with an average adherence level of 55% for interventions of this duration, whereas interventions lasting between 52 and 130 weeks had an average adherence level of 39%.
Kelders et al. [26] also relate adherence to a web-based intervention to its "intended usage", i.e. the extent of use recommended by the intervention's creators for gaining maximum benefit from the intervention. However, only a few studies mention intended use of the intervention in some form. Lee et al. [40] mention "intended usage" in their study and state that they provided this information to participants in the form of a manual containing recommended dietary and exercise parameters for the user. Kanera et al. [36] mention "intended action" as a feature of their action planning component, referring to a specified action to be performed by participants in order to achieve a given behaviour change. Melissant et al. [41] predefined the feasibility of their intervention as adoption and usage "as intended, based on login data" by 50% or more of participants, without, however, providing a clear description of what intended usage according to login data entailed. Another study in this review provided recommended usage cut-off rates based on physical activity and dietary guidelines relevant to the intervention [42]. As per the definition provided by Kelders et al. [26], one could argue that the intended usage of

Strengths and implications for research
This review was based on thorough and systematic searches, with independent reviewers screening the selected articles and assessing the quality of the final selection. Identification of common positive intervention features and components will help developers build future Internet interventions that improve the provision of rehabilitation services for cancer survivors, the majority of whom undergo surgery after diagnosis and must deal with its consequences afterwards.

Conclusions and recommendations
Based on studies with good to strong methodological quality, this review provides evidence suggesting that Internet self-management interventions for postoperative cancer rehabilitation can be satisfactory, acceptable and usable, provided they are time and cost-efficient, require minimal or no new skills, use coherent language and offer tailored information. Due to the scarcity of RCTs, the findings from this review should be treated with caution. Although no restriction was placed on publication year, the short publication span of 5 years indicates a lack of accumulated empirical evidence regarding these novel interventions. This implies the need for more rigorous, large-scale clinical trials to be conducted in this area.