Computerized Decision Support Improves Medication Review Effectiveness: An Experiment Evaluating the STRIP Assistant’s Usability
Polypharmacy poses threats to patients’ health. The Systematic Tool to Reduce Inappropriate Prescribing (STRIP) is a drug optimization process for conducting medication reviews in primary care. To effectively and efficiently incorporate this method into daily practice, the STRIP Assistant—a decision support system that aims to assist physicians with the pharmacotherapeutic analysis of patients’ medical records—has been developed. It generates context-specific advice based on clinical guidelines.
The aim of this study was to validate the STRIP Assistant’s usability as a tool for physicians to optimize medical records for polypharmacy patients.
In an online experiment, 42 physicians were asked to optimize medical records for two comparable polypharmacy patients, one in their usual manner and one using the STRIP Assistant. Changes in effectiveness were measured by comparing respondents’ optimized prescriptions with the medication lists prepared by an expert panel of two geriatrician–pharmacologists. Efficiency was operationalized by recording the time the respondents took to optimize the two cases. User satisfaction was measured with the System Usability Scale (SUS). Independent and paired t tests were used for analysis.
Medication optimization significantly improved with the STRIP Assistant. Appropriate decisions increased from 58 % without the STRIP Assistant to 76 % with it (p < 0.0001). Inappropriate decisions decreased from 42 % without the STRIP Assistant to 24 % with it (p < 0.0001). Participants spent significantly more time optimizing medication with the STRIP Assistant (24 min) than without it (13 min; p < 0.0001). They assigned it a below-average SUS score of 63.25.
The STRIP Assistant improves the effectiveness of medication reviews for polypharmacy patients.
Clinical decision support systems significantly improve the number of appropriate decisions made in medication reviews, and decrease the number of inappropriate choices.
Users spend significantly more time optimizing prescribing with (unfamiliar) clinical decision support systems than without any digital assistance.
This study confirms the results of previous studies reporting that structured methods for medication review significantly improve the medication appropriateness of prescriptions.
1.1 Polypharmacy and Inappropriate Prescribing
Polypharmacy, or the chronic use of multiple medicines, poses significant threats to patients’ health. A consensual definition of polypharmacy is lacking, but it is often described as the concurrent use of five or more different chronically used drugs. Polypharmacy has been associated with negative health consequences. Drugs may cause clinical interactions or adverse effects that aggravate patients’ symptoms instead of relieving them. Medication issues, including underprescribing, overtreatment and decreased drug adherence, have been associated with polypharmacy [2, 3, 4, 5, 6, 7, 8, 9]. A 2008 study showed that in the Netherlands, 5.6 % of all acute hospital admissions had medication-related causes. For elderly patients, who constitute half of all chronically ill polypharmacy patients, this figure was twice as high.
The concurrent use of multiple medications is not entirely undesirable; in many patient cases, polypharmacy is indicated or even unavoidable. However, inappropriate prescribing of medications is prevalent among elderly patients. An incidence-focused study found that inappropriate medication use increased elderly persons’ risks of hospitalization and mortality. Geriatric assessment and medication review have been shown to be effective methods of aiding prescribers with optimizing polypharmacy [14, 15].
A multitude of initiatives has been developed to assess the appropriateness of drugs prescribed for individual patients. These approaches can be divided into implicit and explicit methods. Implicit methods use patient-specific information, combined with medical knowledge, to determine medication appropriateness, while explicit methods provide screening tools containing lists of clinical interactions or contraindications. Among the explicit methods are the Beers Criteria and the Screening Tool to Alert to Right Treatment (START) and Screening Tool of Older People’s Prescriptions (STOPP) criteria, while the implicit methods include the Medication Appropriateness Index and the pharmacotherapy review focused on drugs’ use, indication, safety and effectiveness (Gebruik Indicatie Veiligheid Effectiviteit; GIVE) [16, 17, 18, 19]. The effectiveness of these interventions varies; generally, they appear beneficial in reducing inappropriate prescribing and medication-related problems, but they have not been proven to lead to clinically significant improvement.
In order to improve medication prescribing in primary care, several implicit and explicit methods have been combined into an all-encompassing systematic medication review approach—the Polypharmacy Optimization Method (POM). It has been shown to significantly improve general practitioners’ (GPs’) prescriptions for polypharmacy patients in an experimental setting.
A variety of barriers impede the widespread adoption of structured medication reviews in daily practice. Recently, Anderson et al. conducted a systematic literature review on enablers of and barriers to the minimization of potentially inappropriate medications by GPs. Most factors related to the physicians themselves, including inertia (attitudes towards discontinuation, such as fearing negative consequences), self-efficacy (knowledge of and available information on the topic) and awareness (poor insight or discrepant beliefs). Barriers that were not physician related included a lack of resources, patients resisting changes to their medication, and practical and cultural factors. A separate study focusing on barriers to pharmacist-led medication reviews reported lack of time and lack of self-confidence as the most commonly perceived barriers.
The STRIP analysis is more extensive than its predecessors [14, 17, 19]. It combines both the implicit approaches of the POM and the GIVE, and the explicit lists of the first version of the START and STOPP criteria. The pharmacotherapeutic analysis in the STRIP includes checks on underprescribing, overtreatment, recommended dosage adjustments, drug effectiveness, potential adverse effects, dose frequency, clinical interactions and medication adherence, including practical problems with medication use. The START and STOPP criteria are implemented in the pharmacotherapeutic analysis. This extensive medication review results in a patient-specific treatment plan in which new drugs are gradually added and superfluous ones are discontinued. This approach to conducting structured medication reviews is based on consensus rather than evidence, synthesizing the results of the earlier optimization methods mentioned above. Currently, solid evidence for choosing specific strategies for the optimization of pharmacotherapy in the elderly over others is lacking.
Involvement of patients in the medication review is emphasized to ensure their therapy adherence; patients’ preferences are taken into account as much as possible. The pursuit of the treatment plan is monitored through regular communication between the practitioner, pharmacist and patient. The involvement of pharmacists in medication reviews, as part of multidisciplinary teams, has been shown to lead to improved pharmacotherapy for older patients. Educating patients on their medication use and treatment goals, simplifying their drugs regimens and preventing adverse drug reactions have all been identified as factors influencing patients’ adherence to their treatments.
1.2 Clinical Decision Support Systems
In recent years, computerized physician order entry (CPOE) systems have gradually changed in terms of functionality. From systems that were traditionally organizational in nature, they have been enhanced to facilitate management of electronic medical records and clinical decision support. There is consensus in the literature that clinical decision support has the potential to improve GPs’ and pharmacists’ decision-making: “Both commercially and locally developed CDSSs [clinical decision support systems] are effective at improving health care process measures across diverse settings”. The evidence for concurrent improvement in efficiency, cost effectiveness or clinical effectiveness is inadequate or ambiguous. A study investigating the attitudes of Dutch GPs to the introduction of a decision support system specifically aiding them with conducting medication reviews revealed that the majority were positively inclined towards using such a system.
1.2.1 STRIP Assistant
In order to enable GPs and pharmacists to effectively and efficiently incorporate the STRIP method into their daily practice, the STRIP Assistant has been developed. The STRIP Assistant has been designed as a stand-alone web application, which aims to assist GPs and pharmacists with pharmacotherapeutic analysis of patients’ medical records. On the basis of patients’ records and the decisions that GPs and pharmacists make during the medication review, the application generates context-specific advice. The STRIP Assistant’s design decisions adhere to best practice in information science research; the user interface conception and decision rule implementation have been designed to balance efficiency and information completeness, aiming to minimize previously mentioned barriers such as users’ lack of confidence and lack of time.
The knowledge used to generate the STRIP Assistant’s advice consists of well-established guidelines on clinical interactions, double-medication, contraindications, dosage strength and frequency, and specific implementations of version 1 of the START and STOPP criteria [30, 31]. The rules incorporate not only patients’ diseases and drugs but also their contraindications, complaints and relevant physical properties (such as renal function and weight). This results in items of advice that recommend users to add new drugs or to remove superfluous ones, or to change dosages of existing medicines.
In the future, the STRIP Assistant is planned to integrate with existing CPOE systems, thereby increasing the efficiency with which the method can be performed. Additionally, applying data-mining techniques to historical data should reveal patterns in users’ behaviour towards the generated advice, which could be used to improve recommendations.
Usability has long been regarded as an essential factor for the success of software applications. In the widely used definition issued by the International Organization for Standardization (ISO), usability is defined as “the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use”. In this context, effectiveness is understood as the “accuracy and completeness with which users achieve specified goals”, while efficiency consists of “resources expended in relation to the accuracy and completeness with which users achieve goals”. Finally, user satisfaction is the subjective “degree to which user needs are satisfied when a product or system is used in a specified context of use”.
A recent systematic literature review on clinical decision support systems showed that there is ample evidence that these systems can improve effectiveness; not enough research on efficiency and user satisfaction is available to make generalizations regarding these aspects. In the technology adoption literature, it has been shown that systems’ perceived usefulness and ease of use, aspects closely related to usability, are the major determinants of people’s attitudes towards using technology [35, 36, 37].
Hornbæk described the current practices in evaluating usability. A multitude of metrics and instruments have been used to measure the three main factors of usability identified in the ISO definition. Measurements of effectiveness usually involve the degree to which a task has been successfully completed, leading to metrics such as accuracy, recall and completeness. Efficiency metrics mostly revolve around the time spent completing a task but can also involve mental effort. The subjective user satisfaction criterion is often measured through standardized questionnaires or interface ranking.
Do GPs and pharmacists make significantly more appropriate decisions when optimizing the medical records of polypharmacy patients with the STRIP Assistant than without it?
Do GPs and pharmacists make significantly fewer inappropriate decisions when optimizing the medical records of polypharmacy patients with the STRIP Assistant than without it?
Do GPs and pharmacists take significantly less time to optimize prescribing for polypharmacy patients with the STRIP Assistant than without it?
Do GPs and pharmacists perceive use of the STRIP Assistant for optimizing the medical records of polypharmacy patients as satisfactory?
In this context, the term ‘appropriate decisions’ means decisions that correspond to those agreed upon by an expert panel.
In order to explore to what degree the STRIP Assistant is usable for aiding GPs and pharmacists with performing medication reviews, an experiment was conducted.
The experiment was aimed at GPs and pharmacists. Fifty-two respondents were selected through opportunity sampling, as the researchers lacked the resources to guarantee participants’ cooperation through reimbursement. All participants were required to be either GPs or pharmacists in Dutch primary care and had to fully complete both parts of the experiment to warrant inclusion. Of the 52 responses, nine had to be discarded because of corruptions in the data: three participants did not fill out the unassisted first part of the experiment, five did not assign drugs to diseases or did not respond to advice during the assisted part, and one record was a duplicate. Finally, 43 participants’ results were eligible for inclusion in the data analysis.
Respondents were recruited through the researchers’ personal networks (i.e. symposia, conferences and [training] conventions). They were briefly informed about the experiment’s goal and assured that their anonymity would be guaranteed. As an incentive, respondents were offered 3 months’ use of the software application for their own patients, free of charge.
2.2 Study Design
The experiment took the form of a pre-experiment with a one-group pre-test post-test design, as described by ‘t Hart et al. Respondents were placed in a single research group; an initial test was performed, after which a stimulus was applied and the test was repeated.
In the test, the medical records of two polypharmacy patients, which had been selected from the geriatric ward of an academic medical centre for the study by Drenth-van Maanen et al., were used; they were actualized (i.e. drugs that were no longer available were replaced by their contemporary counterparts) and confirmed to be of comparable difficulty by an expert panel of geriatricians specializing in clinical pharmacology (PJ and WK). During the experiment, respondents were asked to optimize the first case in their usual manner and the second one using the STRIP Assistant.
The three usability aspects of effectiveness, efficiency and user satisfaction were operationalized in the experiment as follows. Effectiveness was measured by recording the respondents’ optimized medicine prescriptions; their decisions were then compared with the medication list prepared by the aforementioned expert panel of two geriatrician–pharmacologists, who reached consensus on the pharmacotherapeutic changes that should be made in the medical records and classified the decisions as correct, neutral or potentially harmful. Efficiency was operationalized by recording the time that respondents took to optimize the two cases. Finally, user satisfaction was measured through a standardized questionnaire, the System Usability Scale (SUS), consisting of ten statements with which respondents had to indicate their agreement or disagreement on a Likert scale.
All data were gathered between November 2013 and June 2014. During that time, no changes of any kind were made to the software.
2.3 Outcome Measures
The main outcome measure was the difference in the percentage of appropriate decisions made by the participants without and with use of the STRIP Assistant. Secondary outcome measures were the difference in the number of inappropriate decisions taken by participants without and with use of the STRIP Assistant, the difference in the time needed to perform the medication review without and with use of the STRIP Assistant, and the extent to which participants experienced their use of the STRIP Assistant as satisfactory.
The STRIP Assistant is a stand-alone web application that assists GPs and pharmacists with the pharmacotherapeutic analysis of patients’ medical records. The user interface accommodates the six phases of the STRIP medication review (i.e. drugs–disease assignment, undertreatment, overtreatment, side effects–drugs assignment, clinical interactions and dosage frequency). In most phases, users are shown advice on missing, superfluous or incompatible drugs. The items of advice are patient specific, incorporating their diseases, drugs, side effects and users’ actions up to that point. The STRIP Assistant’s rule base consists of a combination of well-established clinical rule databases and specific implementations of the START and STOPP criteria.
For the experiment, the user interface was enhanced to first display one of the patient cases in a bulleted list, summing up his/her diseases, drugs, side effects, complaints, measurements and laboratory test results.
Respondents were asked to optimize the first case in their usual manner, specifying in an adjacent text field which drugs should be added or removed for optimal treatment. They were then shown a 1.5-min video explaining the use of the STRIP Assistant, after which they were presented with the second patient case in the STRIP Assistant user interface. Respondents were asked to optimize this case through the STRIP process, reacting to the advice generated by the application. Each screen contained a help button explaining what was expected of the respondents.
After optimizing the second case, respondents were presented with the SUS, consisting of ten statements with which they had to indicate their agreement or disagreement on a Likert scale. Finally, information on the respondents’ demographic characteristics (age and sex) was collected, alongside their experience with medication reviews and CPOE systems. In a text field, respondents could optionally leave their comments.
2.6 Statistical Analysis
In all cases, an expert panel determined the correctness of the decisions made by the participants. Slight corrections to the data had to be made to account for the differences in the potential number of appropriate decisions that respondents could make in each case: 17 in the unassisted case and 20 in the assisted one. Similar corrections were applied to account for differences in the possible number of inappropriate decisions: 30 in the unassisted case and 40 in the assisted one. Paired t tests were used to analyse the data pertaining to appropriateness and inappropriateness of decisions, and the differences in time spent.
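The correction and the paired test can be sketched as follows. This is a minimal illustration with hypothetical counts, not the study data; each participant's count is divided by the number of possible appropriate decisions in that case (17 unassisted, 20 assisted) so the two cases become comparable:

```python
import math

# Possible numbers of appropriate decisions differed between the two cases,
# so raw counts are normalized to proportions before testing.
MAX_UNASSISTED = 17  # possible appropriate decisions in the unassisted case
MAX_ASSISTED = 20    # possible appropriate decisions in the assisted case

def paired_t(before, after):
    """Return the paired t statistic and degrees of freedom (n - 1)."""
    n = len(before)
    diffs = [a - b for a, b in zip(after, before)]
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    return mean_d / math.sqrt(var_d / n), n - 1

# Hypothetical per-participant counts of appropriate decisions.
unassisted = [10, 12, 9, 11, 13, 10]
assisted = [15, 16, 14, 15, 17, 16]

t_stat, df = paired_t(
    [x / MAX_UNASSISTED for x in unassisted],
    [x / MAX_ASSISTED for x in assisted],
)
print(f"t({df}) = {t_stat:.2f}")
```

In practice one would use a statistics package (e.g. `scipy.stats.ttest_rel`) to obtain the p value as well; the hand-rolled function above only makes the normalization step explicit.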
The results of the SUS were formatted in the manner described by Brooke: for the odd-numbered questions, 1 was subtracted from the response values; for the even-numbered questions, the values were subtracted from 5 to obtain the corrected scores. The sum over all questions was multiplied by 2.5 to calculate the final score, ranging from 0 to 100.
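Brooke's scoring procedure can be sketched in a few lines (the function name is ours, chosen for illustration):

```python
def sus_score(responses):
    """Compute the System Usability Scale score from ten Likert responses (1-5).

    Odd-numbered items (1st, 3rd, ...): contribution = response - 1.
    Even-numbered items: contribution = 5 - response.
    The summed contributions are scaled by 2.5 to a 0-100 range.
    """
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses):
        if i % 2 == 0:   # odd-numbered item in 1-based counting
            total += r - 1
        else:            # even-numbered item
            total += 5 - r
    return total * 2.5
```

For example, a respondent answering 5 on every (positively worded) odd item and 1 on every (negatively worded) even item scores 100, while all-neutral responses (3 throughout) score 50.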
3.1 Descriptive Statistics
[Table: overview of participants’ characteristics, including profession (e.g. GP in training), experience with medication reviews and use of other medication review methods]
3.2 Usability Hypotheses
Overview of the tested hypotheses and their statistical outcomes:
- Hypothesis 1 (the STRIP Assistant positively influences the number of appropriate decisions made in a medication review): accepted. Appropriate decisions without the STRIP Assistant: 418 (58 %; mean 11.44; SD 2.63); with it: 656 (76 %; mean 15.26; SD 2.05); paired t test: t(42) = 8.80; p < 0.0001.
- Hypothesis 2 (the STRIP Assistant negatively influences the number of inappropriate decisions made in a medication review): accepted. Inappropriate decisions without the STRIP Assistant: 302 (42 %; mean 9.36; SD 2.53); with it: 210 (24 %; mean 4.88; SD 2.23); paired t test: t(42) = 8.93; p < 0.0001.
- Hypothesis 3 (the STRIP Assistant negatively influences the time taken to perform a medication review): rejected. Time without the STRIP Assistant: 13 min (mean log10 0.94; SD 0.40); with it: 24 min (mean log10 1.34; SD 0.20); paired t test: t(42) = 7.07; p < 0.0001.
- Hypothesis 4 (users perceive using the STRIP Assistant as satisfactory): rejected. SUS score: 63.25, below the quality threshold of 70.
A paired t test showed a statistically significant difference between the appropriateness of the decisions made without the STRIP Assistant [mean 11.44; standard deviation (SD) 2.63] and with the STRIP Assistant [mean 15.26; SD 2.05; t(42) = 8.80; p < 0.0001]. A Wilcoxon signed-rank test showed similar results (Z = −5.40; p < 0.0001). With totals of 418 correct decisions unassisted and 656 assisted, out of 720 and 866 decisions, respectively, the proportion of appropriate decisions increased from 58 % without help to 76 % with the STRIP Assistant.
A paired t test showed a statistically significant difference between the inappropriateness of the decisions made without the STRIP Assistant (mean 9.36; SD 2.53) and with the STRIP Assistant [mean 4.88; SD 2.23; t(42) = 8.93; p < 0.0001]. The percentage of inappropriate decisions decreased from 42 % in the unassisted case to 24 % in the assisted one.
On average, participants took 13 min to complete the unassisted part of the experiment and 24 min to complete the assisted medication review. A paired t test of the base-10 logarithms of these values showed a statistically significant difference between the time taken without the STRIP Assistant (mean 0.94; SD 0.40) and with the STRIP Assistant [mean 1.34; SD 0.20; t(42) = 7.07; p < 0.0001]. This indicates that participants spent significantly more time optimizing medication with the STRIP Assistant.
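The log-transformation explains why the reported means (0.94 and 1.34) are on a log10 scale: completion times are typically right-skewed, so the test is run on log10(minutes), and the mean of the logs back-transforms to the geometric mean of the raw times. A minimal sketch with hypothetical times, not the study data:

```python
import math

# Hypothetical completion times in minutes (not the study data).
times = [20, 22, 25, 30, 24]

# Transform to log10 before applying a paired t test; the mean of the
# logs back-transforms to the geometric mean of the raw times.
logs = [math.log10(t) for t in times]
mean_log = sum(logs) / len(logs)
geometric_mean = 10 ** mean_log
print(f"mean log10 = {mean_log:.2f}, geometric mean = {geometric_mean:.1f} min")
```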
On average, the respondents assigned the STRIP Assistant an SUS score of 63.25 out of a possible maximum of 100. This value is lower than the quality threshold of 70 arrived at by Bangor et al. and corresponds to a marginal acceptance rate in a later paper by the same authors.
This study has shown that a decision support system can help GPs and pharmacists perform better medication reviews, albeit in an experimental setting with preselected patient cases. This is in line with the consensus in the literature on the effectiveness of health recommendation systems. More specifically, the results indicate that a recommender based on a predetermined explicit knowledge base yields viable results in a complex domain with potentially far-reaching implications. Rather than relying solely on collaborative or content-based filtering, a knowledge base guarantees a minimal quality level when recommendations are generated.
Even though the medication reviews performed with the STRIP Assistant were significantly better than those performed without assistance, a non-negligible number of the mistakes respondents made (15 %) could be attributed to software suggestions. In this experiment, each START advice was presented as an alphabetically ordered list of medicines that users could prescribe. In practice, many users picked the first item in the list, resulting in an overabundance of suboptimal choices; when adding a vitamin D supplement, for example, many users picked alfacalcidol instead of cholecalciferol, even though the former has fewer and more specific indications. Few publications have touched upon decision support systems generating incorrect recommendations; consequently, strategies to prevent them are lacking [44, 45, 46]. Hybrid recommendation systems, combining an explicit knowledge base with content-based or collaborative filtering, have been shown to outperform their simpler counterparts. As long as the risk associated with automatic learning systems in a precarious domain such as health care is accounted for, a hybrid approach may prove beneficial in improving recommenders’ effectiveness.
Contrary to our assumption, performing medication reviews with the STRIP Assistant was less efficient (i.e. it took more time) than optimizing drugs manually. Traditionally, the three aspects of usability are assumed to be positively correlated. However, a different perspective, viewing effectiveness and efficiency as conflicting requirements in a project, has been proposed by Nilsson and Følstad. In an experiment such as the one in this study, where respondents either use their habitual approach or have to learn a new structured method, a drop in efficiency can reasonably be attributed to effectiveness and efficiency conflicting. Because of the experimental setting, unfamiliarity with the method and the user interface is likely to play a role as well.
Conducting experiments in which more gradual changes in the method are applied may result in improvements in both effectiveness and efficiency. In a study related to this one, a paper version of the earlier POM was tested in an experiment; it, too, proved to be less efficient than performing a medication review manually. However, the software-aided reviews performed in this study took less time than the paper-based ones in the previous study. This lends credibility to the assumption that gradual changes may improve all aspects of usability simultaneously.
4.3 User Satisfaction
Respondents perceived using the STRIP Assistant as only marginally acceptable. The average SUS score of 63.25 was lower than the commonly accepted quality indicator of 70 [41, 42]. This aspect, too, can be understood by viewing the usability aspects as conflicting requirements. The suboptimal prototypical design of the software’s user interface, and the respondents’ unfamiliarity with the application, may explain this inconsistency with the consensus in the literature.
4.4 Clinical Relevance
Methods for medication review have proven to be valuable in improving prescriptions for polypharmacy patients. The POM, which served as a foundation for the STRIP method, has led to improvements in appropriate decisions in medication reviews. The START and STOPP criteria, which constitute a major part of the STRIP Assistant’s knowledge base, have been shown to be associated with improvements in medication appropriateness, reductions in adverse drug reactions and decreases in drug use and costs [17, 49].
The two patient cases used in this experiment were comparable in complexity and number of medicines but, for reasons of validity, could not completely overlap in diseases and drugs. This makes it difficult to determine the clinical relevance of the intervention. Nevertheless, the most noticeable improvements in adequate prescribing were the treatment of osteoporosis with bisphosphonates, calcium and vitamin D, and the treatment of systolic heart failure with angiotensin-converting enzyme (ACE) inhibitors. The most important improvement relating to stopping medicine use was discontinuation of digoxin when atrial fibrillation was adequately treated with beta blockers. These interventions correspond to the guidelines of the START and STOPP criteria.
Thus, the results in this study confirm the results of previous studies—namely, that structured methods for medication review significantly improve the medication appropriateness of prescriptions.
When these results are interpreted, the experimental nature of the method should be taken into account. The STRIP Assistant’s usability has been tested and validated with real patient cases in a controlled environment, but it has not been validated in practice with users reviewing their own patients. While the results lend credibility to the STRIP method being useful in practice, this study does not prove its clinical relevance.
When the results of this study are generalized, the limited number of participants should be considered, as well as the sampling method. Forty-three GPs and pharmacists participated voluntarily, raising the possibility that they were positively biased towards use of a clinical decision support system to aid them with medication reviews.
4.6 Further Research
A randomized controlled trial incorporating a large representative sample should be conducted to conclude the STRIP Assistant’s effectiveness, efficiency and user satisfaction. Further research should focus on its usability through evaluation in a real-life setting over a longer period of time, exploring to what extent experience influences users’ effectiveness and efficiency in working with the software. Furthermore, longitudinal research could show if the STRIP Assistant is clinically relevant in practice and could evaluate its impact on adverse effects and medicine costs.
In this study, a clinical decision support system (the STRIP Assistant) designed to aid GPs and pharmacists with conducting medication reviews was validated in an experimental setting. The results showed that use of the STRIP Assistant positively influenced the number of appropriate decisions made in a medication review of elderly polypharmacy patients and decreased the number of inappropriate choices. Contrary to our assumptions, users spent more time optimizing prescribing with the STRIP Assistant than without it. The users perceived the experience of using the software as only marginally acceptable. Further research is needed to determine whether optimization of polypharmacy with the help of the STRIP Assistant is clinically beneficial.
We would like to thank the participating GPs and pharmacists for their contributions to this study.
Compliance with ethical standards
This manuscript does not contain clinical studies or patient data. No sources of funding were used in the conduct of this study or the preparation of the manuscript. The authors have no conflicts of interest that are directly relevant to the content of the study. All procedures were performed in accordance with the ethical standards of the 1964 Helsinki Declaration and its later amendments. Since the Medical Research Involving Human Subjects Act (WMO) did not apply to this study, approval of the Medical Ethical Committee of University Medical Center Utrecht was not required.
- 9. Wright R, Sloane R, Pieper C, et al. Underuse of indicated medications among physically frail older US veterans at the time of hospital discharge: results of a cross-sectional analysis of data from the Geriatric Evaluation and Management Drug Study. Am J Geriatr Pharmacother. 2009;7(5):271–80.
- 11. Stichting Farmaceutische Kengetallen. Polyfarmacie. Pharmaceutisch Weekblad. 2005;140(32).
- 23. Dutch College of General Practitioners. Multidisciplinaire Richtlijn Polyfarmacie bij ouderen 2012. Utrecht: Nederlands Huisartsen Genootschap; 2012. https://www.nhg.org/sites/default/files/content/nhg_org/uploads/polyfarmacie_bij_ouderen.pdf. Accessed 7 May 2015.
- 26. Van der Lugt J, Klapwijk S. Overzicht in beeld. SynthesHIS. 2008;(4). http://www.syntheshis.nl/archief.php?nummer=2008-04&artikel=overzicht_in_beeld. Accessed 7 May 2015.
- 29. Meulendijk MC, Spruit MR, Jansen PAF, et al. STRIPA: a rule-based decision support system for medication reviews in primary care. In: Proceedings of the European Conference on Information Systems (ECIS) 2015. Münster, Germany; 2015.
- 30. Z-Index. G-Standaard—Z-Index [online]. 2014. https://www.z-index.nl/g-standaard. Accessed 7 May 2015.
- 33. Meulendijk MC. STRIP Assistant [online]. 2014. http://videodemo.stripa.eu/english/. Accessed 7 May 2015.
- 34. International Organization for Standardization (ISO). ISO/IEC 25010:2011: systems and software engineering—systems and software quality requirements and evaluation (SQuaRE)—system and software quality models. 2011. http://www.iso.org/iso/catalogue_detail.htm?csnumber=35733. Accessed 7 May 2015.
- 37. Venkatesh V, Morris MG, Davis GB, et al. User acceptance of information technology: toward a unified view. MIS Quarterly. 2003;27(3):425–78.
- 39. ‘t Hart H, Van Dijk J, De Goede M, et al. Onderzoeksmethoden. 6th ed. Amsterdam: Boom; 2003.
- 40. Brooke J. SUS: a quick and dirty usability scale. In: Jordan PW, Thomas B, McClelland IL, et al., editors. Usability evaluation in industry. Boca Raton: CRC Press; 1996. p. 189–94.
- 42. Bangor A, Kortum PT, Miller JT. Determining what individual SUS scores mean: adding an adjective rating scale. J Usability Stud. 2009;4(3):114–23.
- 47. Hornbæk K, Law ELC. Meta-analysis of correlations among usability measures. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems; 2007. p. 617–26.
- 48. Nilsson EG, Følstad A. Effectiveness and efficiency as conflicting requirements in designing emergency mission reporting. I-UxSED; 2012. p. 20–5.
Open Access This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.