
1 Introduction

Testing is an important phase in the software lifecycle where the systems developed are tested to uncover errors and gaps in program function, behavior, and performance [1]. The software developed is tested to determine whether stakeholder requirements are met and to detect defects. The testing process involves the execution of the software components using manual or automated tools in order to evaluate one or more dimensions of interest.

Testing includes a set of activities that can be planned in advance and can be conducted systematically. For this reason, a testing methodology and testing tools need to be defined in the testing process.

The TOOP project ran pilots in three different domains: General Business Mobility (GBM), eProcurement, and Maritime, across fifteen Member States (MS) [2].

For the purposes of the GBM pilot, it is considered that someone (a Legal or Natural Person) requires data about their company to use in a service (for instance, to issue a certificate for their company). Instead of filling in this information manually, the service that they are using (called Data Consumer – DC) can get the information for them through the TOOP service. To do this, a Concept Request is sent which contains information about who is participating in this data exchange and what data is required. This request is sent to a service which can provide this kind of data (Data Provider – DP). The DP then sends back to the DC through TOOP a Concept Response which contains the data that has been requested, along with some information about it.

For the purposes of the Maritime pilot, it is considered that someone (a Legal or Natural Person) requires a certificate for their or their company's ship and crew. Instead of filling in and submitting this information manually, the service that they are using (DC) is able to get the information for them through the TOOP service. To do this, a Document Request is sent which contains information about who is participating in this data exchange and what certificates are required. This request is sent to a service which can provide this kind of data (DP). The DP then sends back to the DC a Dataset Response which contains either the certificates that have been requested or a list of all the available certificates that the user of the DC can choose from. This request for data is done in two steps: in the first step, the DC receives a list of certificate IDs, and in the second step the DC uses those IDs to request the actual certificates from the DP, as requested by the user of the service.

For the purposes of the eProcurement pilot, the objective is to get qualification evidences from DPs for Economic Operators (EOs) that are submitting a tender and need to satisfy specific criteria, using the existing national European Single Procurement Document (ESPD) or eTendering Service. The retrieval may take place at any phase of the process (pre-award, award or post-award). An EO and a Contracting Authority (CA) send Concept Requests generated from the ESPD Response to a DP, which sends back a Concept Response that, in the final step, contains the qualification evidences.

MS piloting in the same domain need to test against each other, and this adds complexity to the testing process, hence the need for a well-defined and structured approach. Therefore, TOOP defined a testing methodology and developed a set of testing tools to facilitate the process of testing between the different MS in the three domains piloting in TOOP. The testing methodology was first defined generically and then adapted to the context of each pilot. It was also updated throughout the project in line with the technical updates. The testing methodology starts at a lower, more technical level and finishes at a higher level by testing the connections between the different MS in the same piloting domain. This is done by organizing connectathons. Connectathons are used in Integrating the Healthcare Enterprise (IHE), a world-wide initiative that enables healthcare IT system users and suppliers to work together to achieve interoperability of Information Technology (IT) systems [3]. The connectathon gives vendors an opportunity to test the interoperability of their products in a structured and rigorous environment with peer vendors. It also enables the IHE Technical Framework itself to be tested in the form of trial implementation/deployment settings. Participating companies test their implementation of IHE Integration Profile specifications against those of other vendors using real-world clinical scenarios [4]. The connectathons organized in TOOP have been adapted to the TOOP environment, goals and requirements. The aim of using connectathons in TOOP is to demonstrate that the deployed MS systems of the DPs and the DCs are interoperable and have fully implemented the TOOP technical specifications.

The next section describes the infrastructure and testing tools developed in TOOP to help MS developers to monitor their progress in terms of components deployment. An overview of the testing methodology from test preparations to connectathons is presented in the third section. The testing process along with the tools used in each step is analyzed in the fourth section, whereas test monitoring is presented in the fifth section. The results documentation is presented in detail in the sixth section. Finally, main conclusions are presented in the last section of the chapter.

2 Infrastructure and Testing Tools

2.1 Overview of TOOP Architecture

From the TOOP reference architecture described in the previous chapter, the TOOP solution architecture was developed [5] as a set of fully implementable technical specifications, along with a suite of common software components (common components) that physically implement the solution architecture and can be used in the pilot environments by the participating MS, as well as a set of testing tools needed for the onboarding of pilots and the verification of the end-to-end transaction capabilities achieved by each MS system connected to TOOP.

The TOOP solution architecture, depicted in Fig. 1 below, includes MS systems that act as DC or DP and components that are deployed either nationally or centrally.

Fig. 1.

TOOP solution architecture overview

The MS DC system is the system that requests and consumes data from the DPs. The DC system authenticates the user via the eIDAS node [6]. It then consults the Criterion & Evidence Type Rule Base (CERB), a central authoritative system that maps specific sets of data, as evidence, to the requirements/criteria they prove, in order to identify the proper evidence type that can be requested as evidence for a specific Data Subject. The DC discovers the DPs that can provide the evidence it requires by querying the Data Services Directory (DSD), a core service that acts as a catalogue of the datasets that the DPs can provide upon request.

The Registry of Authorities (RoA), which is a core service that acts as a catalogue of procedures that the DCs can execute, is used in order to show whether a DC is authorized to request evidence for a specific procedure.

The Service Metadata Publisher (SMP) services provide the metadata about the eDelivery access point(s) (AS4 gateways) used by DCs and DPs in the evidence exchange. The SMP provides the access point metadata, and BDXL (Business Document Metadata Service Location) is used to find the location of the SMP.

Both DC and DP model their messages according to the TOOP Exchange Data Model (EDM). The TOOP EDM uses the functional capabilities provided by the RegRep V4 Query Protocol to model the data request and response as queries.
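To make the interaction between these core services more concrete, the following minimal sketch outlines the DC-side flow. It is only an illustration under stated assumptions: the client objects (cerb, dsd, bdxl, smp, as4_gateway) and their method names are hypothetical placeholders, not the actual TOOP component interfaces.

```python
# Hypothetical sketch of the DC-side flow; the client objects and their
# methods are illustrative placeholders, not the actual TOOP interfaces.

def request_evidence(cerb, dsd, bdxl, smp, as4_gateway,
                     procedure_id: str, data_subject_id: str, country: str):
    # 1. Consult the CERB to map the procedure's requirements/criteria to an
    #    evidence type that can be requested for the Data Subject.
    evidence_type = cerb.lookup_evidence_type(procedure_id)

    # 2. Query the DSD for DPs in the target country that can provide
    #    datasets of that evidence type.
    providers = dsd.find_data_providers(evidence_type, country)
    dp = providers[0]

    # (On the DP side, the RoA is consulted to verify that this DC is
    # authorized to request evidence for the given procedure.)

    # 3. Resolve the DP's SMP via BDXL, then read the AS4 access point
    #    metadata from the SMP.
    smp_location = bdxl.resolve_smp(dp.participant_id)
    endpoint = smp.get_endpoint(smp_location, evidence_type)

    # 4. Model the data request as a RegRep V4 query (TOOP EDM) and hand it
    #    to the local eDelivery access point (AS4 gateway) for delivery.
    edm_request = {"queryDefinition": "ConceptQuery",   # request modelled as a query
                   "dataSubject": data_subject_id,
                   "concepts": evidence_type}
    return as4_gateway.send(endpoint, edm_request)
```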

The different infrastructure and testing tools that TOOP developed to facilitate the deployment of the TOOP artefacts in the MS are summarized in the following Table 1.

Table 1. TOOP infrastructure and testing tools

The subsections below describe in detail the TOOP infrastructure and testing tools developed in order to monitor successful implementation of TOOP technical specifications and interoperable data exchange between the different Member States.

2.2 Connector

The TOOP connector is a software artefact developed by TOOP that includes different functionalities of the DC or the DP system. It was developed to facilitate the onboarding process of the MS systems in the TOOP infrastructure along the different releases. The TOOP connector is designed as a simplification for the piloting countries, acting as glue between the national DC/DP software and the shared standard components (SMP, AS4, DSD). The connector needs to be installed by each MS when a new release is provided. The TOOP connector offers different interfaces for different architectural use cases. The following Fig. 2 shows the TOOP connector in orange.

Fig. 2.

TOOP connector (Color figure online)

More details are provided in the following Fig. 3. The specific interfaces with the DP or the DC system are depicted with the orange arrows.

Fig. 3.

TOOP connector detailed (Color figure online)

The different connector APIs with their name, relative URI and description are presented in the following Table 2.

Table 2. Connector APIs
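As an illustration of how an MS system might invoke one of these HTTP interfaces, the sketch below posts a payload to a connector endpoint. The base URL, the relative URI and the JSON payload format are assumptions made for illustration only; the actual relative URIs are the ones listed in Table 2.

```python
# Illustrative only: the base URL, relative URI and payload below are
# placeholders, not the actual connector APIs listed in Table 2.
import json
import urllib.request

CONNECTOR_BASE = "http://localhost:8080/tc-webapp"  # assumed local deployment


def call_connector(relative_uri: str, payload: dict) -> dict:
    """POST a JSON payload to one of the connector's HTTP interfaces."""
    req = urllib.request.Request(
        CONNECTOR_BASE + relative_uri,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))


# Hypothetical usage: ask the connector to perform a DSD lookup for a dataset
# type in a given country (endpoint name and parameters are assumptions).
# result = call_connector("/api/dsd/by-country",
#                         {"datasetType": "REGISTERED_ORGANIZATION", "country": "SE"})
```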

2.3 Playground

The TOOP playground provides the infrastructure used for testing the pilot implementations. It simulates the behavior of a DC (fictitious MS Freedonia) and a DP (fictitious MS Elonia). Therefore, each MS can connect and test the data exchange with the fictitious MS. The aim of the TOOP playground is to emulate a virtual Europe for a more realistic deployment environment, for testing the developed TOOP artefacts and improving the reliability of each TOOP component.

The playground consists of:

  • A reference Data Consumer (Freedonia),

  • A reference Data Provider (Elonia),

  • Core services (DSD, RoA, SMP, CERB),

  • A distributed logging service (Tracker).

The following Fig. 4 presents an overview of the playground. In the left part of the figure, Freedonia includes the DC system, the eIDAS node of the fictitious MS and the AS4 gateway of the DC. In the middle part, there are the core services of the playground and, in the right part, Elonia includes the DP system, the eIDAS node of the fictitious MS and the AS4 gateway of the DP.

Fig. 4.

Playground overview

2.4 Simulator

The TOOP simulator is the main local testing tool. It aims at facilitating the development of DC and DP services by simulating the whole infrastructure. It also makes it possible for an MS to test its DC or DP using only its own environment, mocking up the behavior of the respective DP or DC. The MS can thus execute transactions using only its own infrastructure.

Three different simulation modes are possible:

  • DC mode (simulating the infrastructure and the DP),

  • DP mode (simulating the infrastructure and the DC), and

  • Sole mode (simulating the infrastructure).

The following Fig. 5 on the left and Fig. 6 on the right show the TOOP simulator working in DC mode and in DP mode respectively. In DC mode, the TOOP simulator allows an MS to test its deployed DC system using only its own environment, mocking up the behavior of the respective DP. Similarly, in DP mode, the TOOP simulator allows an MS to test its deployed DP system using only its own environment, mocking up the behavior of the respective DC.

Fig. 5.

TOOP simulator – DC mode

Fig. 6.

TOOP simulator – DP mode

2.5 Reference DC and DP Systems

The reference DC system is called Freedonia and the reference DP system is called Elonia.

Freedonia DC is a test DC implementation which supports all types of queries and document types as defined by the pilots. It provides a UI for initiating TOOP data requests (https://dc-freedonia.acc.exchange.toop.eu/). It is used mainly for the connectivity testing step, as will be discussed in the next section. It is also available as a WAR file, a standalone application and a Docker image, to facilitate local testing.

Elonia DP is a test DP implementation which supports all types of queries and document types as defined by the pilots. It is discoverable through the discovery process of TOOP and is also used mainly for the connectivity testing step. Like Freedonia DC, it is available as a WAR file, a standalone application and a Docker image, to facilitate local testing.

2.6 Playground Tracker

The playground package tracker supports connectivity testing: when executing each step of the test scenario of a specific pilot, an MS uses it to check the transaction log on both the DC and the DP. It is a distributed logging service used for testing purposes, providing the ability to see log messages from both ends of a test transaction. The tracker was created to enable a user to see the actual message exchange of TOOP, which is not visible from the frontend.

The following Fig. 7 shows an example of the package tracker and the logs it displays. The primary objective of the tracker is to be used as a demonstration/presentation tool and as a tool to examine the exchange of messages in TOOP during the development of the common components.

Fig. 7.

Playground package tracker

The principle, as presented in the diagram below (Fig. 8), is that the TOOP common components hosted in the TOOP playground will notify the package tracker at various points during the TOOP data provisioning process. The package tracker collects and presents the received messages in a sequential manner to the user.

Fig. 8.

Package tracker diagram
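The notification principle can be summarized with the minimal sketch below. It assumes a hypothetical HTTP interface for the tracker; the real playground tracker's interface may differ, and the URL and field names are placeholders.

```python
# Minimal sketch of the notification principle, assuming a hypothetical
# HTTP-based tracker; the real tracker's interface may differ.
import json
import time
import urllib.request

TRACKER_URL = "http://tracker.playground.example/log"  # placeholder address


def notify_tracker(component: str, step: str, message_id: str) -> None:
    """Called by a common component at a given point of the data provisioning
    process, so the tracker can show both ends of the transaction."""
    event = {
        "timestamp": time.time(),
        "component": component,   # e.g. "DC connector", "DP AS4 gateway"
        "step": step,              # e.g. "request sent", "response received"
        "messageId": message_id,   # correlates DC-side and DP-side log entries
    }
    req = urllib.request.Request(
        TRACKER_URL,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)

# The tracker collects such events and presents them to the user ordered by
# timestamp and grouped by message, giving the sequential view of Fig. 7.
```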

3 Testing Methodology: From Test Preparation to Connectathons

TOOP has defined a 4-step process to conduct tests, checking the readiness and maturity of the pilot MS implementations. Each testing step must be properly executed by every MS that implements a pilot, and the results are gathered and documented for completeness and monitoring. The process includes the test preparation, which verifies a checklist, the technical tests to be done at MS level (onboard testing), the tests with the fictional MS Elonia and Freedonia (connectivity testing), and the tests between the MS (connectathons). More specifically, the TOOP testing methodology includes the following steps, as presented in Fig. 9:

  1. Preparation for testing, where a technical checklist verifies whether the national environment is ready for testing.

  2. Local testing, where the MS use the TOOP simulator to verify their own TOOP environment by doing automatic testing.

  3. Connectivity testing, where the DCs test against the fictional MS Elonia using its dataset, and the DPs test against the fictional MS Freedonia using their own datasets.

  4. Connectathons, where the DC MS connect with valid data using the dataset of another DP MS.

Fig. 9.

The TOOP testing methodology

The methodology has been applied several times during the project lifetime. For the GBM pilot, a first session took place from December 2018 to February 2019 with the participation of fewer MS, a second session took place from May 2019 to April 2020, and a third session ran from May 2020 until the end of the project. For the Maritime pilot, the methodology was applied as the MS were getting ready to pilot, and for the eProcurement pilot, testing took place during the last period of the project, from autumn 2020 until March 2021.

The four steps of the TOOP testing methodology are detailed in the following subsections.

3.1 Preparation for Testing: Checklist Before Testing

As part of the preparation for testing, the very first thing that the different MS have to do before starting to test is to verify that they have completed the checklist provided to them. The checklist consists of eight elements to verify at the MS technical level. The elements are presented in the following Table 3.

Table 3. Testing preparation checklist

3.2 Local (Onboard) Testing

The second step of the end-to-end testing process includes automatic tests at the MS level using the TOOP simulator described earlier (see Sect. 2.4). At this stage, the MS tests its own environment; a detailed user guide is provided on the wiki to help them use the simulator.
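The sketch below illustrates, under stated assumptions, what such an automatic onboard test could look like. The simulator is assumed to be running locally in DP mode, and send_toop_request is a placeholder for whatever function the national DC system exposes for submitting a TOOP request; neither is part of the actual simulator or connector API.

```python
# Hedged sketch of an automatic onboard test: `send_toop_request` is a
# placeholder for the MS's own DC code, and the simulator is assumed to be
# running locally in DP mode, mocking the DP and the infrastructure.

def run_onboard_test(send_toop_request) -> bool:
    """Send one request through the locally running simulator and check that
    a well-formed response comes back; the whole round trip stays inside the
    MS's own environment."""
    response = send_toop_request(
        data_subject_id="9999:test-company",         # placeholder identifier
        evidence_type="REGISTERED_ORGANIZATION",     # placeholder evidence type
    )
    return response is not None and response.get("status") == "success"
```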

3.3 Connectivity Testing

Before starting the connectathons, the different MS have the possibility to test their own DC and DP with fictional countries: Freedonia and Elonia. This is connectivity testing.

Fig. 10.

Screenshot of Freedonia

Connectivity testing makes it possible to prove that the DC/DP system implemented by an MS is able to communicate properly in a sandbox environment which has been implemented by the TOOP team.

Elonia is a DP system that simulates a DP in the fictional country Elonia. A DC can do connectivity testing by requesting data from the Elonia DP, using Elonia's dataset.

Freedonia is a DC system simulating a DC in the fictional country of Freedonia. A DP can do connectivity testing by triggering TOOP Requests from the Freedonia DC towards its DP implementation. A screenshot of Freedonia can be seen above in Fig. 10.

3.4 Connectathon

Connectathon test sessions are organized via conference calls with shared screens. The environment where a TOOP connectathon takes place is a controlled and neutral environment where ready DC MS can test with ready DP MS (ready means the MS has successfully passed the three steps preceding the connectathons). The connectathon is an opportunity for all the MS to identify errors in their implementations and to improve them. There is no negative effect in the case of an error in the implementation; on the contrary, it serves as an incentive for improvement [7]. The improvement can be a refinement of the TOOP specifications or of the specific implementation in the MS deployed system.

A typical connectathon session in the GBM pilot is described below, where Greece participates as a DC, Slovakia participates as a DP, and Sweden participates as both a DC and a DP. Greece, Slovakia and Sweden have successfully completed each of the three steps preceding the connectathon (checklist before testing, onboard testing, and connectivity testing with Elonia and Freedonia). The two DPs (Slovakia and Sweden) have provided their datasets.

The connectathon session can start:

  1. Greece DC shares its screen and starts testing with Slovakia, using the Slovakia DP dataset. The tests are done first with a valid identifier and then with an invalid identifier, to make sure the correct global error message is displayed.

  2. Greece continues testing with Sweden using the Sweden DP dataset, with a valid and an invalid identifier.

  3. Then Sweden shares its screen as DC and tests using the Slovakia DP dataset with a valid and an invalid identifier.
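The pass/fail logic of the two test cases run for each DC-DP connection can be summarized in the following sketch. The send_request callable and the response fields are placeholders for the DC MS's own piloting system, and the identifiers come from the dataset provided by the DP MS.

```python
# Sketch of the two connectathon test cases per DC-DP connection; the
# `send_request` callable and the response fields are illustrative placeholders.

def test_connection(send_request, dp_ms: str, valid_id: str, invalid_id: str) -> dict:
    results = {}

    # Test case 1: a valid identifier from the DP's dataset must return data.
    response = send_request(dp_ms, valid_id)
    results["valid identifier"] = "passed" if response.get("data") else "failed"

    # Test case 2: an invalid identifier must yield the correct global error
    # message instead of data.
    response = send_request(dp_ms, invalid_id)
    results["invalid identifier"] = "passed" if response.get("error") else "failed"

    return results
```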

The results of the connectathons are then reported on the TOOP pilot wiki, and a report is sent by mail to all the piloting MS.

An example with the respective screenshots and playground tracker details is shown below for the connection between Germany DC and Austria DP. First, Germany DC shares its screen, copies Austria’s DP identifier and selects Austria in the list (see Fig. 11).

Fig. 11.

Connectathon between Germany DC and Austria DP - selection of Austria DP

Then, Germany requests the corresponding DP information through TOOP (see Fig. 12).

Fig. 12.

Connectathon between Germany DC and Austria DP - request information

In the following Fig. 13, one can see that the data request is in progress between Germany and Austria.

Fig. 13.

Connectathon between Germany DC and Austria DP - request data in progress

The data requested is then received from Austria DP, and Germany needs to agree to receive it (see Fig. 14).

Fig. 14.

Connectathon between Germany DC and Austria DP - agree on data requested to be used

Finally, the information is visible on Germany DC, and the test is successful (see Fig. 15).

Fig. 15.

Connectathon between Germany DC and Austria DP - successful result

When the connectathon is finished, the current results are reported as presented in Sect. 5.

4 Testing Process

This section presents an overview of the steps of the testing methodology along with the tools that are being used in each step. Local MS system testing (Subsect. 4.1) and local infrastructure testing (Subsect. 4.2) form part of the local testing step.

4.1 Local MS System Testing

In the following Fig. 16, the MS DC system checks whether it is able to create messages to be sent using the TOOP simulator and receive messages from the TOOP simulator.

Fig. 16.

Local MS system testing with the use of the TOOP simulator

4.2 Local Infrastructure Testing

In the following Fig. 17, the DC using a local DC instance checks whether it is able to send a request to the playground (Elonia), using its own deployed infrastructure (SMP, AS4 gateway).

Fig. 17.

Local infrastructure testing using a local DC instance

4.3 Connectivity Testing

In the following Fig. 18, the DC using its deployed system (and not a local DC instance) checks whether it is able to send a request to the playground (Elonia), using its own deployed infrastructure (connector, SMP, AS4 gateway).

Fig. 18.

Connectivity testing

4.4 MS to MS Connectathon

In the following Fig. 19, the DC using its MS deployed system checks whether it is able to send a request to all the other DPs using its own deployed infrastructure and accept back the response provided (both TOOP error and successful TOOP responses).

Fig. 19.

Connectathon

5 Test Monitoring

As many MS deploy their systems in the role of DC, DP, or both, and get ready to connect using the different releases available in the project, it is necessary to supervise the process, monitor the status of each MS participating in the testing, and organize the respective testing sessions in a structured way.

The testing manager is responsible for this role. She supervises the testing process, monitors the status of each MS participating in it, and plans each connectathon. The testing manager interacts with each MS in order to keep track of the MS that are ready to participate in a connectathon, and plans the agenda of each connectathon. The agenda includes the connections along with the specific test cases that need to be tested. A DC-ready MS will test with all the DP-ready MS; this is repeated for every DC-ready MS and the results are registered. Different reporting views of the results are adopted, as described in detail in the next section.
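The pairing rule can be expressed compactly. The sketch below, with illustrative MS names, derives a connectathon agenda from the lists of DC-ready and DP-ready MS; it assumes an MS acting in both roles is not paired with itself, consistent with the GBM session described in Sect. 3.4, although this is not stated as a formal rule in the methodology.

```python
# Illustrative derivation of a connectathon agenda: every DC-ready MS is
# paired with every DP-ready MS (an MS is assumed not to test with itself).

def build_agenda(ready_dcs: list, ready_dps: list) -> list:
    """Return the list of (DC MS, DP MS) connections to test."""
    return [(dc, dp) for dc in ready_dcs for dp in ready_dps if dc != dp]


# Example based on the session described in Sect. 3.4:
# build_agenda(["Greece", "Sweden"], ["Slovakia", "Sweden"])
# -> [("Greece", "Slovakia"), ("Greece", "Sweden"), ("Sweden", "Slovakia")]
```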

For the pilot test status tracking, a pilot test monitoring factsheet was initially developed; it is described in Subsect. 5.1. The test monitoring factsheet was applied in consecutive testing sessions; it was adapted and improved taking into account the feedback from the MS and the developers' teams, and it was also aligned with the new releases of the TOOP components, the evolution of the solution architecture and the adoption of the new EDM. This led to the identification of specific pilot milestones for each testing step and of ways to verify them. The pilot milestones are described in Subsect. 5.2.

5.1 Pilot Test Monitoring Factsheet

To track the pilot test status, monitoring information is requested from the MS, based on the four steps defined in the testing methodology section. Each MS has to provide the following information for each pilot they are participating in:

  1. Readiness to participate in the connectathon. Information is captured (and updated) on whether the MS is a DC and/or a DP, whether they are using eIDAS, and whether they are ready to participate in a connectathon. The MS that are ready to participate in a connectathon proceed with filling in the rest of the monitoring document.

  2. Checklist. The MS indicates information relevant to the checklist part of the testing method. The following information is filled in:

    • TOOP Connector installation: whether it is done, the current version and the date it was installed, and the next version and the date it is planned to be installed.

    • TOOP SMP installation: whether it is done, the current version and the date it was installed, and the next version and the date it is planned to be installed.

    • Document Type Identifier installation: whether it is done, the current version and the date it was installed, and the next version and the date it is planned to be installed.

    • AS4 gateway installation: whether it is done, which compatible AS4 gateway is installed (e.g. HolodeckB2B), which version and when it was installed.

    • Provision of the MS's own dataset: whether it is done and, if yes, when.

    • Ordering and installation of the received PKI certificates used by the keystore(s) of the TOOP services/components: whether it is done and when.

    • Registration of the DC/DP supported document type capabilities and gateway endpoints in the SMP: whether it is done and when.

    • Registration of the SMP in the SML: whether it is done and when.

  3. Onboard testing. The MS indicates whether they have successfully performed onboard testing as a DC and/or as a DP, what the result was (classified as (1) passed, (2) partly passed or (3) failed), and, if it is not already done, when it is planned. There is also space available for comments.

  4. Connectivity testing. The MS indicates whether they have performed connectivity testing. If they act as a DC, they indicate the connectivity testing result of their DC system against the fictional MS Elonia and, if it is not yet done, the planned date. If they act as a DP, they indicate the connectivity testing result from the fictional MS Freedonia to their DP system and, if it is not yet done, the planned date. There is also space available for comments.

  5. Connectathon. This part of the monitoring document includes information on the participation of the MS as a DC, as a DP, or as both in the last connectathon, along with the respective results. There is also space available for comments.
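As a compact summary of the information captured per MS and per pilot, the following data-model sketch mirrors the factsheet structure. All field names are illustrative; the real factsheet is a document maintained by the testing manager and filled in by each MS.

```python
# Illustrative data model of the pilot test monitoring factsheet; field names
# are assumptions, not the actual document layout.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class ComponentInstallation:
    done: bool = False
    current_version: Optional[str] = None
    installed_on: Optional[str] = None     # date of installation
    next_version: Optional[str] = None
    planned_for: Optional[str] = None      # planned installation date


@dataclass
class FactsheetEntry:
    member_state: str
    pilot: str                             # e.g. "GBM", "eProcurement"
    is_dc: bool
    is_dp: bool
    uses_eidas: bool
    ready_for_connectathon: bool
    # Checklist part
    connector: ComponentInstallation = field(default_factory=ComponentInstallation)
    smp: ComponentInstallation = field(default_factory=ComponentInstallation)
    document_type_identifier: ComponentInstallation = field(default_factory=ComponentInstallation)
    as4_gateway: ComponentInstallation = field(default_factory=ComponentInstallation)
    dataset_provided: bool = False
    pki_certificates_installed: bool = False
    capabilities_registered_in_smp: bool = False
    smp_registered_in_sml: bool = False
    # Testing steps ("passed", "partly passed", "failed", or None if not done)
    onboard_testing_result: Optional[str] = None
    connectivity_testing_result: Optional[str] = None
    last_connectathon_result: Optional[str] = None
```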

5.2 Pilot Milestones Check

Five pilot milestones were identified along the four steps of testing for better monitoring of the progress in the deployment of the TOOP components in each MS system. Milestone 1 aligns with testing step 1 (preparation for testing), milestones 2 and 3 align with testing step 2 (local testing), milestone 4 aligns with testing step 3 (connectivity testing) and milestone 5 aligns with testing step 4 (connectathon). More specifically:

  • Milestone 1: the MS must integrate the new EDM in their piloting system.

  • Milestone 2: it concerns the transaction implementation; the MS system should be able to create messages to be sent using the TOOP simulator and to receive messages from the TOOP simulator.

  • Milestone 3: the system’s infrastructure (connector, SMP and AS4 gateway) must be properly deployed and correctly configured locally.

  • Milestone 4: it is about playground connectivity: DCs and DPs test that their deployed system can connect to the playground (the fictional countries Elonia and Freedonia).

  • Milestone 5: it is about the connectathon, where DCs and DPs test with other MS. The MS system must be able to communicate and correctly execute a transaction using its own system and infrastructure.

Fig. 20.

Pilot milestones

The pilot milestones together with the way to verify them (as a DP, or as a DC) are presented in Fig. 20 above. The figure also presents what input is necessary to be provided by the team that develops the common components (Common Components Task Force – CCTF).

Milestones 1, 2 and 3 are checklist items. The answers of the DCs and DPs can be one of the following:

  1. Yes: the milestone is achieved.

  2. Partly: the development to achieve the milestone is in progress but not yet finished.

  3. No: the milestone is not achieved. A milestone can also be skipped, as in the case of Austria, which skipped milestone 2 and went directly to milestone 3.

  4. Planned: the milestone has not started yet but is planned.

  5. Not decided: the MS has not yet taken a decision whether to proceed with the update of their system to the next release of the components. This can be due to resource reasons or a change of policy.

For milestone 4, the connectivity test with the playground needs to be completed. For milestone 5, testing during a connectathon needs to be done. For both milestones 4 and 5, the results can be classified as:

  1. Passed: the MS successfully passed the test.

  2. Partly passed: the connection has been tested but only partly passed and needs to be retested.

  3. Failed: the MS did not pass the test.

  4. Planned: the test has not yet been done.

  5. Not decided: the MS has not taken the decision to proceed with the test.
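The two value sets can be summarized as simple enumerations. The sketch below is purely illustrative and is not taken from the project's monitoring tooling; it only restates the classifications above side by side.

```python
# Illustrative enumerations of the milestone answers (milestones 1-3) and the
# test result classifications (milestones 4-5); names are placeholders.
from enum import Enum


class MilestoneStatus(Enum):        # checklist answers for milestones 1-3
    YES = "yes"
    PARTLY = "partly"
    NO = "no"                       # a milestone may also be skipped
    PLANNED = "planned"
    NOT_DECIDED = "not decided"


class TestResult(Enum):             # results for milestones 4 and 5
    PASSED = "passed"
    PARTLY_PASSED = "partly passed"
    FAILED = "failed"
    PLANNED = "planned"
    NOT_DECIDED = "not decided"
```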

Fig. 21.

Pilot milestones MS status

For direct interaction with and support of the MS technical teams, recurrent technical and testing calls take place every week. The calls consist of two parts. During the first part of the call, the MS technical members are presented with the latest technical updates of the TOOP components, and they can ask questions. The second part of these calls is dedicated to testing milestones 4 and 5 with the MS that are ready. The participants appreciate these calls, which are dedicated to them and help them complete their work.

During these calls, the status of the MS with respect to the milestones is presented in a dashboard visualisation (see Fig. 21 above). The first three columns show the MS and the pilots it is participating in (General Business Mobility and/or eProcurement). If the MS is marked green, it has participated in at least one connectathon and is ready for milestone 5. Orange means that the MS is about to be ready. No color indicates that the MS has not started any milestone yet. The other columns are grouped by milestone. For each milestone, the status of the MS for that milestone, according to the classification presented above, is shown as a DC and as a DP. If a role (DC or DP) is not applicable, the cell is marked in grey.

6 Results Documentation

Results are reported through different views in a consistent way, so that they are easily traceable back to what was done in each connectathon and easily comparable in terms of progress from one connectathon to the next. Different reporting tables and graphs present the results, and a summary of the successful connections is presented in a map of results. The following subsections present the different reporting methods.

6.1 Reporting to MS

After each connectathon, an email is sent to all piloting MS with the results of the current connectathon. The email summarizes the results of the connectathon textually, together with a table presenting the results of the connections tested during the connectathon. A link is also provided to the pilot wiki, where more information can be found. The results are classified as (1) passed, (2) partly passed, and (3) failed.

6.2 Global Results Table

After each connectathon, a global results table is updated. The columns of this table include the following information:

  • The DC MS.

  • The DP MS that are going to provide data to the specific DC MS.

  • The status of each DC-DP connection. This is classified as (1) to be tested, (2) retested, and (3) already passed.

  • The connectathon results of the current connectathon using two test cases: (1) valid identifier, (2) invalid identifier, and comments in case of failure.

  • The connectathon results of the last connectathon (same information is provided as for the current connectathon).

  • The general results updated with current connectathon results: results using test case (1) valid identifier, results using test case (2) invalid identifier and the date of the last connectathon.

The following Table 4 presents a snapshot of the global results table after a connectathon that took place on September 23rd, 2020.

Table 4. Connectathon global results table

6.3 Reporting Tables

Two types of tables are updated after each connectathon, summarising the status of the pilots: the MS implementation status and the connectathon status. These two tables are used to communicate results in a tabular way within the project but also to external stakeholders. The MS implementation status table presents the status of each milestone per MS and per pilot. The connectathon status table presents the status of the connectathon per MS and per pilot. The status in both tables can be (1) completed (coloured dark green), (2) in progress (coloured light green), (3) planned (coloured yellow), or (4) not started (coloured orange). The examples of these tables below reflect the results of a connectathon that took place on September 23rd, 2020 (see Tables 5 and 6).

Table 5. MS implementation status
Table 6. MS connectathon status

6.4 Reporting Graphs

The connectathon results are visible through different graphs presenting different kinds of information. The following graphs present a result summary for the current connectathon with a valid identifier (on the left, see Fig. 22) and with an invalid identifier (on the right, see Fig. 23). In both figures, the percentage of successful connections is shown in green, the percentage of partly passed connections in orange, and the percentage of failed connections in red.

Fig. 22.

Reporting graph - result summary with valid identifier (Color figure online)

Fig. 23.

Reporting graph - result summary with invalid identifier (Color figure online)

The following graph (Fig. 24) presents the results for each MS (DC and/or DP) with a valid identifier. For each MS DC and each MS DP, the number of connections is shown (in green the number of connections that were successful, in orange the number that were partly passed, and in red the number that failed).

Fig. 24.

Reporting graph - results by MS with valid identifier (Color figure online)

Other graphs are also updated after each connectathon, presenting the progression of results, such as the one below (Fig. 25) showing the total number of connections at each connectathon, with successful connections in green, partly passed connections in orange and failed connections in red. In Fig. 25, one can see that the number of connections increases from one connectathon to another, and that the number of green connections also increases. The objective is ideally to have all connections successful, which marks the end of a connectathon session. This might not be possible, for instance if an MS pilot becomes inactive at some point. Fig. 26 shows the same kind of graph per MS; only a part of this graph is presented.

Fig. 25.

Reporting graph - progression of results with valid id over time (Color figure online)

Fig. 26.

Progression of results by MS: MS DC and MS DP with a valid identifier
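As a rough illustration (not the project's actual reporting tooling) of how the summary percentages of Figs. 22-23 and the per-MS counts of Fig. 24 can be derived from raw connectathon results, consider the following sketch with sample values only.

```python
# Illustrative derivation of the reporting-graph figures from raw results;
# the result tuples below are sample values, not actual connectathon data.
from collections import Counter

# Each result: (dc_ms, dp_ms, test_case, outcome), outcome in
# {"passed", "partly passed", "failed"}.
results = [
    ("Greece", "Slovakia", "valid identifier", "passed"),
    ("Greece", "Sweden", "valid identifier", "partly passed"),
    ("Sweden", "Slovakia", "valid identifier", "passed"),
]


def summary_percentages(results, test_case):
    """Share of passed / partly passed / failed connections for one test case."""
    outcomes = Counter(r[3] for r in results if r[2] == test_case)
    total = sum(outcomes.values()) or 1
    return {k: round(100 * v / total, 1) for k, v in outcomes.items()}


def counts_by_ms(results, test_case, role="dc"):
    """Number of connections per MS (as DC or DP), broken down by outcome."""
    idx = 0 if role == "dc" else 1
    counts = {}
    for r in results:
        if r[2] == test_case:
            counts.setdefault(r[idx], Counter())[r[3]] += 1
    return counts


print(summary_percentages(results, "valid identifier"))
# e.g. {'passed': 66.7, 'partly passed': 33.3}
print(counts_by_ms(results, "valid identifier", role="dc"))
# e.g. {'Greece': Counter({'passed': 1, 'partly passed': 1}), 'Sweden': Counter({'passed': 1})}
```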

6.5 Map of Results

A map of connections is visible on the TOOP website [8] and can be seen below (Fig. 27). This interactive map, realized by the TOOP communication team, enables the user to view the MS participating in the GBM pilot. A table on the right shows which MS are connected as DC and which MS are connected as DP. The table offers the possibility to select the countries for which the user wants to see the connections, which are then shown as green arrows (successful connections) on the map.

Fig. 27.

Map of connections (Color figure online)

7 Conclusions

As presented in the current chapter, the TOOP testing methodology, process, monitoring, and reporting have constituted a structured effort to track the progress of the different pilots, which deploy different releases of the TOOP components and participate in different domains, and have been a very useful instrument to monitor, support, and improve the quality of the pilots. The methodology was first put in place for the General Business Mobility pilot, starting with a few participating MS; it was then adapted for the Maritime pilot participants and further adapted and used by the eProcurement pilot during the last testing session of the project.

The methodology has also been updated to respond to the needs of the common components development team, in parallel with the new releases of the TOOP components, and to the needs of the MS.

Every two weeks, the piloting MS have met via conference call to carry out the connectathons, which also provide the opportunity to maintain good contact between the MS and to identify possible issues that can then be corrected by the common components development team.

As a generic approach, this structured testing methodology can be applied in different contexts after a small adaptation.