Introduction

Collecting clinical outcome assessment data electronically in clinical studies requires clinical study sponsors and electronic clinical outcome assessment (eCOA) providers to work closely together to implement study-specific requirements, incorporate best practices, and ensure successful data collection to generate evidence for regulators and other stakeholders, including payers and health technology assessment bodies. There are multiple steps in the system development process (Fig. 1), most of which have been discussed in the literature [1, 2] and regulatory guidance [3,4,5]. However, one of the most important steps in this process, user acceptance testing (UAT), which aims to ensure that an electronic system functions according to agreed-upon requirements (e.g., a business requirements document based on the study protocol), deserves increased attention. Therefore, Critical Path Institute’s electronic patient-reported outcome (ePRO) Consortium and patient-reported outcome (PRO) Consortium have developed UAT best practice recommendations to guide clinical study sponsors or their designees, with support from eCOA providers, in conducting UAT that ensures data quality and enhances the operational efficiency of the eCOA system. Utilizing these best practices should improve the reliability or precision of clinical outcome assessment (COA) data collected electronically in clinical studies to support product registration.

Fig. 1 Typical eCOA implementation process

The United States Food and Drug Administration’s (FDA’s) “General Principles of Software Validation; Final Guidance for Industry and FDA Staff” outlines regulatory expectations for software validation [3]. This guidance states that terms such as beta test, site validation, user acceptance test, installation verification, and installation testing have all been used to describe user site testing, which encompasses any testing that takes place outside of the developer’s controlled environment. For the purposes of this paper, the term “UAT” will be used, and “user” will refer to sponsor staff (or designee) who serve as substitutes for trial participants for the participant-facing components of the eCOA system. The FDA general principles go on to say that “User site testing should follow a pre-defined written plan with a formal summary of testing and a record of formal acceptance. Documented evidence of all testing procedures, test input data, and test results should be retained” [3, p. 27]. These statements indicate that both the user testing process itself and its documentation are software development best practices as well as regulatory expectations.

In 2013, the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) ePRO Systems Validation Task Force defined UAT as “the process by which the clinical trial team determines whether the system meets expectations and performs according to the system requirements documentation” [2, p. 486]. In this same report, the task force also indicated that UAT should not be “a complete revalidation effort conducted by the sponsoring clinical trial team” [2, p. 486] but, rather, a “focused, risk-based approach to testing that allows the clinical trial team to determine whether the system complies with the key system requirements (which ultimately reflect the protocol)” [2, p. 486]. Because differentiating between the specific activities recommended for UAT and those conducted during system validation can be confusing, these best practice recommendations were developed to clarify the activities and considerations that should be accounted for during UAT by the sponsor or designee. A separate process, usability testing, involves participants and evaluates their ability to use the system as intended for the purposes of the study; it is outside the scope of this paper. See Coons et al. [1] and Eremenco et al. [6] for more information on usability testing, and FDA’s Discussion Document for the Patient-Focused Drug Development Public Workshop on Guidance 3 [7], which discusses both usability testing and UAT.

The concept of UAT comes from the software development lifecycle (SDLC) and is intended to test how the system would perform in circumstances similar to those in which the system will eventually be used. In clinical studies where electronic systems are being used to collect COA data, UAT provides the clinical study team, including sponsor and/or contract research organization (CRO) representatives, an opportunity to evaluate actual system performance and ensure that the sponsor’s intended requirements were communicated clearly and accurately translated into the system design, and that the system conforms to a sponsor-approved requirements document.

System requirements should be thoroughly tested by the eCOA provider prior to UAT, in conformance with the SDLC process implemented by the eCOA provider. The eCOA provider’s project manager will notify the sponsor and/or designee when the vendor testing process is completed so that UAT may proceed. Completing this step first allows the focus of UAT to remain on a common understanding of the requirements with the actual system in hand, as well as on proactively identifying and correcting issues that study team, site, and study participant users might experience once the system is deployed. UAT takes place toward the end of the eCOA implementation process (Fig. 1), occurring after the study-specific system requirements have been documented by the eCOA provider and approved by the study sponsor, and after the system has been built and tested by the eCOA provider’s in-house testing team. UAT must be completed prior to launching the technology for the study.

Components of an eCOA System

eCOA systems are built differently by each eCOA provider but typically have the same core components. Table 1 provides suggested guidelines for testing these components, indicating when formal testing using UAT scripts is recommended as a best practice and when ad hoc testing may be sufficient. Details on the development of UAT scripts are provided in the UAT Documentation section of this paper. eCOA systems can be deployed on provisioned devices; if the study is utilizing a provisioned device model, the eCOA provider will distribute devices to each tester. eCOA systems can also contain application-based components, such as those developed for Bring Your Own Device (BYOD) studies, where a variety of devices (including different makes and models) should be included in the formal UAT to ensure consistency between device types. If a study is utilizing a BYOD setup, the eCOA provider is required to provide the testers with minimum operating system and device requirements (e.g., Android/iOS operating system versions, internet browser, screen size). If feasible at the time of the eCOA UAT, testing of any integrated devices (e.g., glucometers, sensor patches) or systems (e.g., IRT, EDC) should also be included within the component testing of UAT. For purposes of this paper, best practices for testing integrated devices or systems will not be covered.

Table 1 eCOA system components and testing guideline
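To make the BYOD coverage point above concrete, the minimal Python sketch below shows one way a UAT team might record the device/OS combinations to be covered and check a tester’s device against the provider’s stated minimums. All makes, models, versions, and field names here are hypothetical illustrations, not a provider-specific format.

```python
# Hypothetical BYOD coverage matrix for formal UAT. Each entry pairs a
# device make/model with the OS and browser under test so that every
# supported combination is exercised at least once.
BYOD_TEST_MATRIX = [
    {"make": "Samsung", "model": "Galaxy S21", "os": "Android 12", "browser": "Chrome"},
    {"make": "Google",  "model": "Pixel 6",    "os": "Android 13", "browser": "Chrome"},
    {"make": "Apple",   "model": "iPhone 12",  "os": "iOS 16",     "browser": "Safari"},
]

# Minimum requirements as the eCOA provider might state them (illustrative).
MIN_OS_VERSION = {"android": 12, "ios": 16}

def meets_minimum(device: dict) -> bool:
    """Check a device's OS version against the provider's stated minimum."""
    platform, _, version = device["os"].partition(" ")
    return float(version) >= MIN_OS_VERSION[platform.lower()]

assert all(meets_minimum(d) for d in BYOD_TEST_MATRIX)
```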

eCOA Hosting Environments

A hosting environment is the physical server environment in which the eCOA platform resides. eCOA providers should have multiple hosting environments to support a proper setup. Typically, all development of an eCOA system is done within a Development (or Dev) environment. In the Dev environment, the eCOA provider builds the system per the study requirements and can easily make changes as needed. The Dev environment is sometimes referred to as a sandbox, as the eCOA provider is able to modify the design without impacting test or live study data.

Once the development of the software application is completed, system/integration testing of the software application is performed by the eCOA provider in a Test environment. After this process is completed by the eCOA provider, UAT should be performed by the sponsor or designee who is provided access to the software application in a separate UAT environment hosted by the eCOA provider.

Once UAT has been completed successfully, with no outstanding issues, and all parties agree that the system is acceptable for study use, the study configuration is moved to the Production environment. The Production environment will collect only live study data. UAT should not be performed in a Production environment under any circumstances, as UAT data could end up in a live system database. In the event that the study requirements change (e.g., due to a protocol amendment) once the system is live, any post-production changes must be made in the Development environment and subsequently tested in the Test environment by the eCOA provider and in the UAT environment by the sponsor or designee before the modified study configuration is moved to the Production environment.
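As an illustration of the environment separation just described, here is a minimal Python sketch encoding the Dev → Test → UAT → Production promotion order and the rule that UAT is never executed against Production. The names and guard function are hypothetical, not any provider’s actual tooling.

```python
from enum import Enum

class Environment(Enum):
    DEV = "development"    # sandbox: provider builds and freely modifies
    TEST = "test"          # provider's own system/integration testing
    UAT = "uat"            # sponsor/designee acceptance testing
    PROD = "production"    # live study data only

# A study configuration is promoted through each stage in this order.
PROMOTION_ORDER = [Environment.DEV, Environment.TEST, Environment.UAT, Environment.PROD]

def start_uat_round(env: Environment) -> None:
    """Guard encoding the rule that UAT is never run against Production."""
    if env is Environment.PROD:
        raise RuntimeError("UAT must not be performed in the Production environment")
    if env is not Environment.UAT:
        raise RuntimeError(f"UAT belongs in the dedicated UAT environment, not {env.value}")
```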

Roles and Responsibilities

When planning and executing UAT for an eCOA system implemented for a clinical study, there are two main stakeholders, which can be categorized at a high level as:

  1. Sponsor or designee: the entity for whom the system is built, who funds both the build and the clinical study, and who has ultimate accountability for the study overall. Note that a CRO and/or UAT vendor may be engaged to act as a designee of the sponsor to perform UAT.

  2. eCOA Provider: the entity who is contracted by the sponsor or CRO to carry out the design, build, and support of the system.

These primary stakeholders can delegate or outsource roles and responsibilities to any degree necessary to a third party. It is recommended that the sponsor (or designee) performing UAT ensure that all testers are fully trained in the UAT process. In addition, it is recommended that a range of study team roles be involved in UAT execution, including, for example, clinical operations, site monitoring, data management, and biostatistics. It is not a best practice for the eCOA provider’s staff to conduct UAT; it should be conducted by a separate entity to ensure objectivity. It is important to note that study participants are not included in UAT as a standard practice because of its emphasis on formally testing requirements.

UAT Stages

Each UAT should go through the basic stages of Planning, Execution, and Follow-Up/Closeout, and all stakeholders should participate in each stage. Table 2 details the ideal level of involvement and responsibilities by stage.

Table 2 Stages and stakeholder responsibilities

Table 3 outlines primary responsibilities for the tasks necessary to conduct UAT.

Table 3 Task ownership matrix

UAT Conduct

A UAT timeline can vary; however, it is best to plan for at least a 2-week cycle that assumes multiple rounds of UAT, including testing as outlined in the test plan and scripts, changes, re-verification, and final approval. UAT timelines also depend on the complexity of the study design (including the number of treatment arms, assessments, and the visit schedule) and on the interval between when the system build will be fully validated by the eCOA provider and the planned launch date for the study, as UAT is often the rate-limiting step that must be completed before the system can be launched. The UAT timeline can be extended or shortened depending on these variables and the actual number of testing rounds needed. Regardless of the length of time for UAT, time for testing, changes, validation of changes by the eCOA provider’s test team, and re-testing by the UAT team needs to be accounted for prior to a system launch. If these steps are not carried out, the potential for issues and reduced system quality increases.

While UAT is being conducted, each tester should document findings within the test script(s) and provide all findings (issues/questions/changes) in a UAT findings log. This log can take several formats, such as a spreadsheet or an electronic UAT system. At the completion of each round of testing, findings should be collated into one log, with duplicate issues removed, for ease of review by the sponsor and/or designee team. Following each round of UAT, a debrief meeting should be held to examine and discuss all findings as a team. It is important for all testers to be represented at the meeting so that each finding can be discussed and clarified as necessary. The team may prioritize UAT findings and decide on a phased implementation based on which bugs/errors must be corrected ahead of Go-Live versus those that can be implemented in a “Post Go-Live release plan.” If this approach is taken, it is critical to obtain agreement between the sponsor and the eCOA provider, along with a communication plan for the study team members. The Data Management Review Plan and Study Site Monitoring Plans should also be evaluated for impact.
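A findings log can be modeled very simply. The Python sketch below, whose field names are illustrative assumptions rather than a standard schema, shows one way to merge each tester’s findings into a single log while removing duplicate reports of the same issue, as recommended above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    """One row of a UAT findings log (illustrative fields only)."""
    test_case_id: str
    step: int
    description: str
    tester: str

def collate(per_tester_logs: list) -> list:
    """Merge every tester's findings into one log, dropping duplicate
    reports of the same issue (same test case, step, and description)."""
    seen, merged = set(), []
    for log in per_tester_logs:
        for f in log:
            key = (f.test_case_id, f.step, f.description)
            if key not in seen:
                seen.add(key)
                merged.append(f)
    return merged
```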

Issues (bugs) or changes identified in the UAT findings log need to be categorized to determine their priority and relevance. Categories may include system issue, application or software bug, design change, enhancement, or script error; these categories may have different names depending on the eCOA provider, but ultimately they help determine the corrective plan of action (if necessary). A system issue is a problem in the software programming that causes the system to function incorrectly; it is a critical finding and should be prioritized over all other findings for correction and re-testing. An application or software bug is an error, flaw, or fault in a computer program or system that causes it to produce an incorrect or unexpected result, or to behave in unintended ways. An issue is an instance where the agreed-upon requirements were not met. Design changes are requests for changes to the system that are not error corrections, while enhancements are requests for improvements to the system that arise from the UAT. Design changes and/or enhancements should be evaluated by the full team to determine whether the change would improve the performance of the system and/or the user experience, as well as whether time permits the change to be made within the constraints of the system launch date. Enhancements or changes to the original system design need to be reviewed carefully between the sponsor and the eCOA provider, as system design changes create risk, and fees may be charged if a change request is deemed out of scope or an expansion of the previously agreed scope. Script errors are mistakes in the script that may lead to erroneous results even though the system actually performs correctly. Script errors should be documented and corrected in the script template to ensure that any future user of the script does not encounter the same problem(s).
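Since category names vary by eCOA provider, the following hedged sketch shows one hypothetical way to encode the categories described above so that system issues sort first for correction and re-testing.

```python
from enum import IntEnum

class FindingCategory(IntEnum):
    """Illustrative categories, ordered so lower values are fixed first;
    actual names differ between eCOA providers."""
    SYSTEM_ISSUE = 1   # software fault causing incorrect behavior: top priority
    SOFTWARE_BUG = 2   # error/flaw producing incorrect or unexpected results
    ISSUE = 3          # agreed-upon requirement not met
    DESIGN_CHANGE = 4  # requested change, not an error correction
    ENHANCEMENT = 5    # improvement suggested during UAT
    SCRIPT_ERROR = 6   # mistake in the script; the system behaves correctly

def triage(findings):
    """Order findings for the debrief meeting by correction priority;
    assumes each finding object carries a .category attribute."""
    return sorted(findings, key=lambda f: f.category)
```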

While discussing changes resulting from UAT, the original scope of work should always be reviewed and referenced when considering implementing the change. eCOA providers should correct any programming errors found in UAT at no additional cost. If necessary design features not included in the original requirements document are identified as a result of UAT, sponsors are advised to consider the timeline and cost implications of introducing those new features at this late stage. If the changes are deemed necessary prior to launch, the sponsor may need to accept additional costs or delays to launch, depending on the assumptions built into the original contract. Alternatively, the team may decide that although changes should be made, they are not needed for launch and can be made as a post-production change after the system is launched into production. The UAT testers and other sponsor representatives should discuss the cost and timeline implications of all options prior to making a final decision about design changes. Involvement of the key stakeholders during the bidding and design process is an ideal way to reduce or limit design changes and expedite processes between the sponsor/CRO and the eCOA provider.

UAT Documentation

Proper documentation is imperative to ensure execution of effective testing, as shown in Fig. 2, and to meet regulatory expectations. UAT documents should include a UAT test plan, test scripts, a findings log, a summary of issues and resolutions (e.g., a UAT Summary Report), and lastly, a UAT approval form. The eCOA provider may generate additional documentation, such as instructions related to “time travel” (the mechanism by which testers can move between different dates and times by adjusting the eCOA device clock), to assist UAT.
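Providers implement “time travel” differently, often by adjusting the device clock itself. Purely as an illustration of the concept, the sketch below models it as a hypothetical clock offset a tester could use to reach a future visit date; this is not any provider’s actual mechanism.

```python
from datetime import datetime, timedelta

class TestClock:
    """Hypothetical 'time travel' helper: lets a tester simulate a future
    date/time (real providers typically adjust the device clock itself)."""
    def __init__(self) -> None:
        self._offset = timedelta(0)

    def travel_to(self, target: datetime) -> None:
        self._offset = target - datetime.now()

    def now(self) -> datetime:
        return datetime.now() + self._offset

# Example: jump one week ahead to verify a weekly diary becomes available.
clock = TestClock()
clock.travel_to(datetime.now() + timedelta(days=7))
```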

Fig. 2 UAT documentation workflow

Standard Operating Procedures (SOPs), Working Instructions and/or guidance documents, and performance metrics for UAT should be developed by the sponsor or designee managing UAT to document the requirements for the process and all necessary documentation. UAT SOPs should outline how clinical study teams determine whether the system performs in accordance with the finalized system requirements document. SOPs should define the documents required to complete the UAT, those responsible for performing testing, and when and how UAT is to be performed. The frequency of UAT is also defined in the SOP, depending upon initial production releases, updates to correct issues, and/or updates requested by the sponsor. UAT documentation should be made available for audits/inspections and should follow Good Clinical Practice as well as appropriate record retention requirements (Fig. 3).

Fig. 3 Example of manual and electronic test script

UAT Test Plan

The UAT test plan is developed by the sponsor or designee and may contain: a purpose, a scope, definitions, references, strategy and approach, assumptions and constraints, risk assessment, UAT team roles and responsibilities, information about the test environment(s), a description of all test cases, scripts, deliverables, the UAT test summary (results), and approvals/signatures. A UAT test plan ensures all parties are aware of the scope and strategy of how requirements will be tested. It will allow the sponsor or designee to review the system per the protocol and the signed system requirements document. As such, it should be considered the first document to be created within the UAT process. The following sections should be considered when creating the Test Plan (Table 4).

Table 4 UAT test plan template content

Table 5 provides several considerations for testing functionality that is common across eCOA providers. The screen interface may include different controls to navigate from one screen to the next and buttons or graphic controls to select responses to items; these elements are referred to as screen controls. In addition, the Test Plan should include the method for testing custom features for each study.

Table 5 Functionality and methodology for testing

Test Scripts

Test scripts outline each step that a tester will take to test the use cases in the system. Test scripts are designed to be followed step-by-step so that the tester does not have to remember how he or she arrived at a given screen. If the step occurs as expected in the script, the tester indicates “pass.” If something happens when the step is carried out that is not as expected, the tester indicates “fail” and provides a reason for failure, with applicable screenshots if necessary. UAT test scripts will be referenced in the UAT Test Plan. It is best practice for the sponsor or designee to write the test scripts rather than ask the eCOA provider to provide them. Test scripts should be approved by the appropriate individual within the sponsor or designee organization prior to UAT being conducted; the approver may vary depending on the sponsor’s UAT process and SOPs. Upon completion of the scripts, the tester should sign (or electronically sign) the scripts and record the date(s) of script execution.
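As a hedged illustration of the step-by-step pass/fail recording described above, a single script step might be modeled as follows, with the rule that a failed step must carry a documented reason. The field names are assumptions for illustration, not a published template.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ScriptStep:
    """One step of a UAT test script (illustrative fields only)."""
    number: int
    action: str                      # what the tester does
    expected_result: str             # what should happen per the requirements
    passed: Optional[bool] = None    # None until the step is executed
    failure_reason: str = ""         # required when a step fails
    screenshots: list = field(default_factory=list)

def record_result(step: ScriptStep, passed: bool, reason: str = "") -> None:
    """Record a pass/fail outcome; a failure must include a documented reason."""
    if not passed and not reason:
        raise ValueError("A failed step must include a reason for failure")
    step.passed = passed
    step.failure_reason = reason
```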

In some cases, a tester may informally test functionality that is not detailed in the test script, which is referred to as ad hoc testing; for example, this might occur when the actual results of a test step are not the expected results, and ad hoc testing might help identify the root cause of the issue. While such ad hoc testing can be useful in identifying issues, it is considered supplemental and should not be conducted in place of following test scripts. Any issue detected in ad hoc testing should be documented and formally tested in the next round of UAT to document resolution.

Table 6 outlines the aspects that should be documented in each test script section:

Table 6 Test script content

If any step in a script “fails” due to an issue with the system, device, or configuration, then the entire test case fails (see Fig. 4). If a test case fails during UAT, the test case should be completed again once the eCOA provider has confirmed that the issue has been resolved. If it is agreed between the sponsor and the eCOA provider that the issue will remain unresolved in the system, this should be noted in the UAT summary report (results). Otherwise, UAT should not be considered finished until all test cases have been passed by a tester and all issues from the findings log have been addressed.
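Continuing the hypothetical ScriptStep sketch above, the any-step-fails rule reduces to a single check:

```python
def test_case_passes(steps) -> bool:
    """A test case passes only if every step was executed and passed; any
    single failed (or unexecuted) step fails the whole case, which is then
    re-run once the fix is confirmed."""
    return all(step.passed is True for step in steps)
```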

Fig. 4 Example of test script execution

If a test case fails due to a script error, retesting of the test case may not be required; the UAT team should determine whether a retest is needed. For example, if a script contains a typographical error or is poorly written but the test case still demonstrates and supports the scope of the test, it is acceptable to amend the script and pass the test case.

Before UAT approval, a UAT summary or report should be created by the UAT testing team (sponsor or designee) summarizing the results of testing including any known issues that will not be corrected in the system before the system is launched into production.

A UAT approval form should be signed by the sponsor or a representative from any other testing party (i.e., sponsor’s designee). UAT should not be considered completed until this form is signed.

Once UAT has been completed, all UAT documentation (e.g., UAT Test Plan, completed test cases, UAT Summary Report, and UAT approval form) should be archived for maintenance as essential study documents. As a final step for closure of UAT, the sponsor and/or designee should review the agreed-upon UAT performance metrics. The Metrics Champion Consortium (MCC) has a standard set of UAT metrics designed to assess the performance of the UAT [8]. It is recommended that the sponsor (or designee) utilize the MCC metrics to support the evaluation of the UAT.
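The MCC metric definitions themselves are not reproduced here. Purely as a sketch of the kind of closeout metrics a team might compute, under the assumption that test cases expose a `.passed` flag and findings a `.resolved` flag, consider:

```python
def uat_closeout_metrics(test_cases, findings, rounds: int) -> dict:
    """Illustrative closeout metrics (not the MCC definitions): overall pass
    rate, findings raised per testing round, and count of open findings."""
    total = max(len(test_cases), 1)
    return {
        "pass_rate": sum(1 for c in test_cases if c.passed) / total,
        "findings_per_round": len(findings) / max(rounds, 1),
        "open_findings": sum(1 for f in findings if not f.resolved),
    }
```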

Conclusion

In summary, although UAT may be performed differently among eCOA providers and sponsors, the end goal is the same: proper conduct and documentation of UAT activities. Various techniques may be used depending on the nature of the eCOA system and the study. Rigorous and complete testing will facilitate successful system deployment, while thorough documentation of UAT will meet requirements for regulatory inspection. Completing the full UAT process using these best practices will help reduce the risk that a system does not meet the expectations of the stakeholders within a study. A thorough UAT process will also minimize the risk of inaccurate or missing data due to undetected flaws in the system that could jeopardize the results of the study and product approval. Following these best practice recommendations and completing UAT in its entirety will help support a high-quality eCOA system and ensure that more reliable and complete data are collected, which are essential to the success of the study.