1 Introduction

Keeping older adults independent and healthy, while improving their quality of life and strengthening their autonomy so that entry into long-term residential care is delayed for as long as possible, will help reduce the financial burden on the health and social care systems in Europe. In addition, older adults typically prefer living in their own homes for as long as possible (Centers for Disease Control and Prevention 2014; Mulliner et al. 2020). Unfortunately, they often need to be institutionalized due to age-related problems, such as cognitive impairment (Sanford 2017; Morley 2018), neurodegenerative disorders (Hou et al. 2019), and functional disability (Verbrugge et al. 2017; Partridge et al. 2018).

One of the most promising solutions to face the aforementioned challenges is providing older adults with specially tailored housing (Daniel et al. 2009). When equipped with relevant Information and Communication Technology (ICT), senior houses/apartments can turn into smart living environments, decreasing the need for support or care and potentially increasing independent living and quality of life (Haux et al. 2014; Siegel and Dorner 2017).

Several initiatives have been pursued worldwide, using Information and Communication Technologies (ICTs) to develop advanced and disruptive products, solutions and services to improve the quality of life of the population in general and of older adults in particular (Konstantinidis et al. 2010; Giokas et al. 2014; Haux et al. 2014; Kouris et al. 2020). The European Commission (EC) is also very active in addressing the above-mentioned challenges. Many technologically advanced projects have been funded by its Framework Programmes FP7 and Horizon 2020, with objectives specifically designed for the ‘Active and Assisted Living’ and the ‘European Innovation Partnership for Active and Healthy Ageing’ initiatives (European Commission 2016).

Unfortunately, radically new technologies often mismatch user needs and preferences, and the way these needs are addressed by products sometimes fails to adapt to the users (Peek et al. 2016; Wang et al. 2019). Co-creation approaches, which foster user involvement during the whole development process, could respond to user needs and reach high acceptability, usability and satisfaction. Co-creation is a concept that offers new opportunities for innovation processes (Frow et al. 2015); it has emerged as having significant potential for business product design, enriching research processes (Battersby et al. 2017) and having significant impact on sustainable innovation (Greenhalgh et al. 2016). Despite the growing interest, researchers indicate the need for a systematic approach to tools and processes for co-creation and effective collaboration among partners (Aarikka-Stenroos and Jaakkola 2012). However, performance evaluations of such development approaches are sparse (Cowdell et al. 2020).

The lack of well-reported methodology assessments prevents design teams and EU project consortia from adopting these methodologies. Most of the design studies conducted in EU-funded projects exploited two main approaches: (i) user involvement early in the design process, where pilot partners define the use cases from their perspective (Borghese et al. 2019; Henwood et al. 2019; Zacharaki et al. 2020), and/or (ii) web-based interviews by technical partners to help focus the scope of the pilot (Ferrari et al. 2020). These approaches often fail to create value and are not suitable for extension to long-term activities.

On the other hand, Agile development is widely adopted in business to improve development performance through frequent, efficient and effective adaptation to user requirements and corresponding changes (Lee and Xia 2010), but it is not yet widely applied in research projects. While collaboration and communication between the development team and the end-users is a way to achieve a better fit of ICT products with real user and market needs (Nakki et al. 2011), this is often difficult to achieve in distributed consortia. Such communication can be enhanced by complementing Agile technology development with user-centred design activities, such as co-creation methodologies, throughout the research and development phase.

A currently ongoing initiative, funded by the European Union, is the CAPTAIN project (Coach Assistant via Projected and TAngible INterface) (captain-eu.org), a research and innovation action aimed at developing an advanced technology to help older adults overcome some of their frailties and limitations (Konstantinidis et al. 2019). The project is currently developing a new technology designed to turn the home of older adults into a ubiquitous, gentle assistant, providing intuitive interaction, guidance and help for independent living, whenever and wherever it is needed, leading to physical, cognitive, mental and social well-being. The system makes use of projected augmented reality and real-time 3D sensing technologies to monitor and “comprehend” the user and the indoor space in order to provide contextualized and personalized coaching and instructions. Solutions are designed for non-invasive user and environmental sensing, including emotional and behavioural recognition, indoor localization and gait analysis, physical and cognitive training, and progress monitoring. Exploiting this information, CAPTAIN is developing behavioural and artificial intelligence (AI) algorithms to provide personalized advice, guidance and follow-up for key age-related issues in daily life that impact a person’s ability to remain active and independent in their own home (Beristain Iraola et al. 2020).

To achieve its objectives, while tackling the challenges of Agile and co-creation in research projects, CAPTAIN introduces a truly user-centred co-design philosophy (Petsani et al. 2019) based on the constant involvement of older adults and other stakeholders in the design, development, and testing stages (Fig. 1). The co-creation principles and development methodologies are combined within an Agile framework, adapted to fit the needs of distributed consortia like CAPTAIN (Tessarolo et al. 2019). The Agile methodology provides opportunities to assess the direction of the individual components of CAPTAIN throughout shorter development lifecycles (referred to as “Sprints”). These short-term development cycles enable the consortium to adapt to changes in users’ needs and wants and to understand the radically new concept of using projected and tangible interfaces for assisting older adults. An approach based on incremental, iterative delivery and empirical feedback was considered better suited to the innovative character of CAPTAIN, which is based on radically new ICT concepts, in order to effectively deploy a technology that responds to real user needs and reaches high user acceptability, usability, and satisfaction. The proposed framework was developed empirically and remained open to changes when considered necessary.

Fig. 1
figure 1

Schematic drawing of the new CAPTAIN development framework based on Co-creation and Agile methodologies. The participant profiles are indicated in the legend on the right side of the image

This paper aims at presenting the Agile approach adopted by the CAPTAIN project for developing advanced assistive technologies in the health and well-being domain, and at evaluating the perceived effectiveness of the development process by both the development team and the stakeholders involved in the co-creation and testing. This can provide new evaluation tools and promote a rational introduction of such development methodologies in other EU-funded projects in the health and well-being domain.

2 Methods

2.1 The Captain Project

The CAPTAIN project is a currently ongoing 3-year research and innovation action within the H2020 framework. The project consortium is composed of 15 partners, including five pilot sites making their Living Laboratories (LLs) available for pursuing the co-creation and testing activities of the CAPTAIN technologies. The LLs are located in different geographical locations within Europe (Greece, Italy, Spain, Ireland, Cyprus) and involve primary and secondary end-users and other stakeholders, forming a highly motivated community (the CAPTAIN stakeholder network) with strong bonds to the local territories. This allows the consortium to engage stakeholders in the processes of iterative technology design, implementation and evaluation. The development team is also spread across different geographic locations within Europe (Greece, Spain, France, Estonia), which poses challenges for Agile development and communication among partners.

The CAPTAIN Stakeholder Community consists of people who can potentially use CAPTAIN (older adults and their caregivers), and of those who can provide suggestions and feedback for refining the overall CAPTAIN framework (e.g., service providers, nursing home management staff and patient associations). Stakeholders were considered as the only sources of requirements throughout the project’s lifecycle, playing a substantial role as co-designers and co-creators.

Three main project actions were planned involving the stakeholder network: ACTION 1, “Production of experimental datasets for training algorithms”; ACTION 2, “Pilot trials in LLs (co-creation iterations)”; ACTION 3, “Pilot trials in real homes” (Tessarolo et al. 2019). The planned timeline of each action within the timeframe of the CAPTAIN project is shown in Fig. 2a. ACTION 1 accommodates the production of experimental datasets for feeding artificial intelligence algorithms and speeding up algorithm development. The existence of non-artificial data is mandatory for the effective development and the subsequent fine-tuning of software-related components. These activities are indeed preparatory to the software development, the integration of the different system components, and the development of AI algorithms for the coaching functionality. Therefore, the LLs’ stakeholder network was exploited for the production of the required experimental datasets, driven by the needs of the technical partners and the coaching objectives. The CAPTAIN consortium agreed to use self-reported diaries as tools in a prospective cohort observational study and collected an inventory of case studies describing older adults’ habits across a five-month period.

Fig. 2
figure 2

a) Planned timeline of the three scheduled actions within the timeframe of the CAPTAIN project (project start at M0 and planned project end at M36). All three “actions” are based on stakeholder involvement, and ACTION 2 (in red) is devoted to the design, development, and testing of technology applying a user-centred co-design philosophy, using an iterative Agile approach and a blend of Lean StartUp and SCRUM development methodologies. ACTION 2 is broken down to present the six scheduled “Sprints” (in yellow). Questionnaires administered to the CAPTAIN team are indicated by the diamonds. Q1 is the “Partners perspective pre-assessment questionnaire”, realized by administering the items listed in Tables 1, 2, and 3. Q2 is the “Partners perspective post-assessment questionnaire”, realized by administering again the items listed in Tables 1, 2, and 3 at the end of the first three Sprints. Q3 is the questionnaire used for monitoring the CAPTAIN team morale across the Sprints. b) Timeline schedule for a typical CAPTAIN Sprint exploded into subtasks. The typical planned duration of a full Sprint was 12 weeks (yellow bar). The duration of the Sprint subtasks ranged from one to three weeks (green bars). Main events are reported as coloured diamonds. The questionnaire Q4 on the participants’ satisfaction was administered at the end of the review subtask

Table 1 Questionnaire for the evaluation of the partners’ perspective about project involvement. Created to measure three dimensions (teamwork, requirements, planning) of the Comparative Agility Assessment tool (Williams et al. 2010)
Table 2 Questionnaire items for the evaluation of the partners’ perspective about project requirement elicitation, design and development process. Created to measure three (quality, culture, knowledge sharing) of the dimensions of Comparative Agility Assessment tool (Williams et al. 2010)
Table 3 Questionnaire items for the overall perception of the development process realized according to past project methodologies and CAPTAIN methodology

ACTION 2 is active during the whole participatory design phase, in which the stakeholders, with an emphasis on the older adults, visited the LLs at the pilot sites, helped to identify the true user needs, and acted as co-creators, interacting with and giving feedback on all the intermediate released versions of the CAPTAIN technology. A total of six iterations were planned.

ACTION 3 is aimed at realizing a pilot study for testing the CAPTAIN technology in real settings (older adults’ homes and protected apartments in nursing homes) with target users. A multidimensional evaluation is planned, including user acceptability, ease of use, user satisfaction, and perceived usefulness of the technology.

At the time of this study report, ACTION 1 was completed and the first three planned iterations (out of six) in ACTION 2 were performed, while ACTION 3 had not yet been activated.

2.2 The Envisaged Captain Technology

The CAPTAIN technology includes the use of projection and speech generation as a means to provide information to the user, while it employs speech recognition and interaction through movement for gathering inputs from the user. In order to empower the users, the system is based on the I-Change model (de Vries 2017), which guides users to set their goals, achieve the selected goals, and finally get feedback on the achievement process. This functionality is based on the creation of SMART goals by experts in the domain of health and wellbeing (Beristain Iraola et al. 2020). Following a plugin-based architecture, any specialized organization can create an additional SMART goal that can be delivered by the system. The plugin describes the SMART goal to be achieved and the schedule of interventions that the CAPTAIN coach invites the user to perform with the aim of achieving the goal.
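The plugin structure described above can be sketched as follows. This is a minimal illustration only: the class names, fields, and the example goal are hypothetical and do not reflect the actual CAPTAIN implementation.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of a SMART-goal plugin as described in the text:
# the plugin bundles the goal definition and the schedule of interventions
# the coach proposes to the user. All names here are invented for illustration.

@dataclass
class Intervention:
    name: str          # e.g. a reminder or exercise the coach proposes
    weekday: str       # when the coach proposes it
    time_of_day: str

@dataclass
class SmartGoalPlugin:
    goal: str                     # the SMART goal to be achieved
    provider: str                 # the specialized organization authoring the plugin
    interventions: List[Intervention] = field(default_factory=list)

    def schedule_for(self, weekday: str) -> List[Intervention]:
        """Interventions the coach should propose on a given weekday."""
        return [i for i in self.interventions if i.weekday == weekday]

plugin = SmartGoalPlugin(
    goal="Walk 3000 steps per day for 4 weeks",
    provider="Example physiotherapy association",
    interventions=[
        Intervention("Morning walk reminder", "Monday", "09:00"),
        Intervention("Progress feedback", "Monday", "19:00"),
        Intervention("Morning walk reminder", "Thursday", "09:00"),
    ],
)
print(len(plugin.schedule_for("Monday")))  # 2
```

In such a design, the coaching core stays goal-agnostic: it only iterates over installed plugins and delivers whatever schedule each plugin declares.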

2.3 User Centered Co-Design And Agile Methodology Tested Within The Captain Project

The CAPTAIN project became the testbed for a new methodological framework based on Co-creation and Agile methodology in a large and distributed organization such as a fifteen-partner European consortium.

The active and continuous involvement of the CAPTAIN Stakeholder Community in the design, development and evaluation stages was intended to go beyond the usual waterfall approaches, which typically limit the user contribution to the initial requirement elicitation process and, eventually, to the assessment of the finalized technology (Royce 1970). In CAPTAIN, a hybrid approach leveraging the concepts of Design Thinking, the Lean StartUp approach and the SCRUM Agile framework (Schwaber and Beedle 2002; Schwaber and Sutherland 2017) was designed and implemented.

Design Thinking is a method consisting of five steps: “Empathize”, “Define”, “Ideate”, “Prototype”, and “Test” (Rowe 1987). It can be used by design teams to resolve real issues by creating practical, meaningful and creative ideas for a particular group of people (Brown 2008; Bjögvinsson et al. 2012). In the “Empathize” step the team aims to gain an empathetic understanding of the target group and its needs, while in the “Define” step the design team tries to come up with a concrete definition of the main problem it is trying to solve. The “Ideate” process provides useful tools to generate ideas to solve the defined problem and to build a first “Prototype” in the next step. At the end of the Design Thinking process, the design team and the targeted users have to “Test” the prototype solution. In CAPTAIN, the Design Thinking process has primarily explored and identified the target users’ real needs in order to come up with insights for creating real value.

The Lean StartUp methodology is an approach that was created to support start-ups in delivering products to their customers as fast as possible (Ries 2011). This method provides insights on when to change the direction of a product (pivot) and when to persevere, reducing the waste in effort and time (Ries 2011). The definition of “start-up” can be extended and fruitfully exploited within projects such as CAPTAIN, which aim to develop, within a limited timeframe and budget, radically new technology for “customers” (e.g. older adults) who are not familiar with it. This brought the consortium to consider the deployment of the Lean StartUp approach within the CAPTAIN methodology. In CAPTAIN, the Lean approach was indeed adopted in order to deliver a functional prototype to the stakeholder community frequently enough to collect feedback and readjust the technology (Kupiainen et al. 2015).

Finally, the use of SCRUM (Schwaber and Beedle 2002; Schwaber and Sutherland 2017) helped in organizing work across the technical partners so that they could collaborate towards delivering high value (Srivastava et al. 2017). The hybrid approach was adopted to help CAPTAIN solve, effectively and with high flexibility, the complex developments required to achieve the project’s goals.

As mentioned before, within ACTION 2, the CAPTAIN development process was organised in iterative cycles referred to as “Sprints” (Schwaber and Beedle 2002; Schwaber and Sutherland 2017). As sketched in Fig. 2b, each Sprint had a planned duration of about 12 weeks, consisted of interdependent steps that might overlap, and pursued clear goals and objectives (e.g., requirements elicitation, testing, evaluation, validation). Each Sprint was based on the coordinated implementation of a series of events: “Sprint planning”, “Design of technology”, “Development of technology”, “Pre-review”, “Laboratory technical assessment”, “Preparation of the LL sessions”, “Technical field testing”, “Review”, “Post-review”, and “Retrospective” (Tessarolo et al. 2019). The Sprint was initiated with the “Sprint planning”, in which the objective of that specific Sprint was defined. In this event, the increment of technology to be delivered to the CAPTAIN stakeholder community, the use case to be satisfied, and the data to be collected during testing were also defined, and the technological partners involved in the design and development of the technology to be tested were recruited. The event served as input for the “Design of technology” phase, provided additional information to be submitted for EC evaluation and to be added to the information materials for participants, and facilitated the preparation of the LL session. During the “Design of technology”, the technological specifications for the hardware/software components to be developed and tested in that specific Sprint were defined. This phase took into consideration the last available release of the technology and the feedback from previous Sprints. The next phase, “Development of technology”, was aimed at implementing the planned increment of technology according to the prioritized list of collected requirements.
This phase was accompanied by the weekly “SCRUM meeting”, a 15-minute time-boxed event held among the CAPTAIN team to optimize the team’s collaboration and performance.

The “Pre-review” identified the variables of interest and the indicators to be used during the co-creation/testing protocols. A dedicated “Pre-review meeting” allowed the technical partners to explain to the LL partners what the technology increment of the Sprint was and what feedback should be sought. The release of the Sprint testing protocol was an internal milestone. Once the previously reported phases were completed, the “Laboratory technical assessment” was performed to debug and test the latest CAPTAIN release under laboratory conditions. The collected measurements helped to optimize the technological modules and to minimize technical inconveniences during the subsequent testing sessions with stakeholders in the LL setting. In parallel to the previous phases, the “Preparation of the LL session” took place at the pilot sites in order to address all the possible issues related to contacting the stakeholders, to define the modalities of meeting with the stakeholders, and to set the final session agenda. In addition, the assessment tools were finalized. Immediately before exposing the technology to the whole stakeholder community, the “Technical field testing” was realized to verify the Sprint protocol with only 2–3 stakeholders at a single LL. Quick feedback was released to all LLs, alerting them to skip the testing of components with any identified residual technical issue. Once all the previous phases were satisfactorily completed, the “Review” phase was initiated to run the co-creation/testing sessions at all pilot sites. The execution of the co-creation/testing sessions engaged older adults and all relevant stakeholders in the LLs according to various working methodologies, including design thinking workshops, focus groups, and one-to-one interviews. The “Post-review” phase summarized the output of the sessions, synthesising the stakeholders’ feedback according to the endpoints defined in the Sprint planning and to the assessment plan indications.
The goal of the review meeting was to ensure that the technical partners clearly understood the feedback collected from the stakeholders. The result of the Sprint “Post-review” was a revised, prioritized user requirements list, which constituted an internal process milestone.

Eventually, the “Retrospective” was realized as an opportunity for the consortium to inspect the process and create an updated plan to improve the forthcoming Sprint. During the “Retrospective” discussion, the consortium identified possible improvements, such as: (i) optimization of the Sprint procedure (meeting the end-users); (ii) changes to the technical implementation plan; (iii) proposals from the pilot partners about the structure of the input information given by the technical partners (Sprint “Pre-review”); (iv) instructions from the technical partners about the feedback document provided by the pilot partners (Sprint “Post-review”).

The temporal arrangement of the activities in a typical CAPTAIN Sprint has been previously detailed (Tessarolo et al. 2019). The duration of each phase was planned in advance and is summarized in Fig. 2b, but it was susceptible to change and adaptation based on the “Retrospective” discussion and on the stakeholders’ availability (e.g. holiday seasons and local festivities).

2.4 Evaluation And Monitoring Of The Captain Co-Creation And Agile Methodology

Given the novelty of the proposed adaptation of the Agile framework to the needs of the CAPTAIN project, it was necessary to evaluate the whole process and to monitor the involvement and satisfaction of the partners (CAPTAIN team members) and the participants (stakeholders involved in the co-creation process) across the Sprints. The evaluation of the proposed framework was performed at two levels: (1) as a pre-post assessment, in order to compare the CAPTAIN development methodology with other existing methodologies experienced by the CAPTAIN team partners in previous EU projects; (2) as a longitudinal assessment throughout the Agile iterations (Sprints), using actionable metrics to track the team morale and the participants’ (stakeholders’) engagement and satisfaction.

Within the pre-post assessment, dedicated questionnaires were administered to compare the CAPTAIN Agile methodology with the more traditional “waterfall” methodology. This comparison was realized in terms of the partners’ approval and satisfaction with the overall development process.

2.4.1 Partners’ perspective on development process: CAPTAIN vs. previous Waterfall experiences

The partners’ perspective on the CAPTAIN development process was collected by administering an anonymous, structured, online questionnaire to team representatives of each project partner at two predefined time points (Q1 and Q2 in Fig. 2): at the kick-off meeting of the project (time M0) and after the conclusion of the first three co-creation iterations (Sprints) (time M18). The questionnaire included two sets of 12 items each, in order to collect the partners’ perception in terms of participant involvement (Table 1) and in terms of perceived efficiency, innovation, improvement in communication and overall acceptability of the development process (Table 2). The questionnaire items were constructed leveraging the indicators created to measure the seven dimensions of the Comparative Agility Assessment tool (Williams et al. 2010). Table 1 items focused on “teamwork”, “requirements”, and “planning”. Table 2 items were selected among those created to assess “quality”, “culture”, and “knowledge sharing”.

Respondents were asked to rate each questionnaire item according to a 5-point Likert scale ranging from “True” to “False” for Table 1 items and from “Strongly agree” to “Strongly disagree” for Table 2 items. At time M0 partners were asked to answer the questionnaires based on their past experience in EU projects adopting the traditional waterfall development process, while at time M18 they were asked to express their perception of the CAPTAIN development process. The collected answers were scored on a numerical scale ranging from − 2 to + 2, where negative values represented the “False” or “Strongly disagree” ratings and + 2 represented the “True” or “Strongly agree” ratings. The neutral ratings, such as “Neither false nor true” or “Neither agree nor disagree”, were assigned a score of 0. The average score for each item across all respondents was then computed at both time points in order to compare the waterfall development experience with the CAPTAIN process.
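The scoring procedure described above can be sketched as follows. This is a minimal illustration: the responses and the item identifiers are made up for demonstration and are not the study data.

```python
# Map the 5-point Likert labels to the -2..+2 scores described in the text
# (label wording follows the Table 1 scale; the Table 2 agree/disagree
# labels would map analogously).
LIKERT_SCORES = {
    "False": -2, "More false than true": -1,
    "Neither false nor true": 0,
    "More true than false": 1, "True": 2,
}

def item_averages(responses):
    """Average score per questionnaire item across all respondents.

    `responses` is a list of dicts mapping item id -> Likert label.
    """
    totals, counts = {}, {}
    for resp in responses:
        for item, label in resp.items():
            totals[item] = totals.get(item, 0) + LIKERT_SCORES[label]
            counts[item] = counts.get(item, 0) + 1
    return {item: totals[item] / counts[item] for item in totals}

# Made-up example: two respondents rating two items.
responses = [
    {"1a": "True", "1b": "More false than true"},
    {"1a": "More true than false", "1b": "Neither false nor true"},
]
print(item_averages(responses))  # {'1a': 1.5, '1b': -0.5}
```

Computed once on the M0 answers and once on the M18 answers, the per-item averages give the two profiles that are compared in the Results.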

2.4.2 Partners’ satisfaction questionnaire: CAPTAIN vs. previous overall experiences in other EU projects

A second set of four questions was defined in order to address the partners’ satisfaction with the development process, comparing their overall experience in previous projects with their recent experience within the CAPTAIN project. The four items of the questionnaire are reported in Table 3 and were rated on a 10-point Likert scale indicating the level of satisfaction from “Not at all” (1) to “A lot” (10). The same questionnaire was administered to the project partners at time M0, to evaluate their experience with any kind of development methodology used before CAPTAIN (including possible different Agile approaches), and at time M18, to collect their feedback about the CAPTAIN development process (Q1 and Q2 in Fig. 2). The average Likert scores were then calculated among all respondent partners for each questionnaire item at both time points.

Two additional open questions were also proposed to the partner representatives at M18 (Q2 in Fig. 2) in order to capture the main strengths and limitations they experienced with the CAPTAIN Agile methodology. To collect the partners’ feedback, the following two questions were posed: “What are the three main strengths, based on your experience, for user requirements process design and development in the CAPTAIN project?”; “Specify three major problems that you and your team faced during the user requirements elicitation, design and development process in the CAPTAIN project”. The narrative answers were analysed to extract key concepts and to identify the strengths and limitations most frequently reported by the respondents.

2.4.3 Captain Team Morale along Sprints

The satisfaction of the CAPTAIN team with the innovative Agile framework adopted throughout the project was evaluated longitudinally across the first three Sprints using a dedicated anonymized self-administered questionnaire. This questionnaire was focused on monitoring the team morale, the participants’ engagement and the satisfaction with the process pursued during each specific Sprint, making use of actionable metrics. The questionnaire was based on the items suggested in Christiaan Verwijs’s article on measuring SCRUM team morale (Verwijs 2012) and grounded on validated tools for morale assessment in the military field (Boxmeer et al. 2007). We considered the basic concepts of team morale in the Agile SCRUM process and adapted the questionnaire to better fit the CAPTAIN framework. The questionnaire (Q3 in Fig. 2) was administered to the representatives of the 15 project partners immediately after the completion of the first (M12), second (M15), and third (M18) Sprint. It consisted of 9 question items (Table 4), for each of which the partner representative was asked to rate his/her agreement on a 10-point Likert scale from “Not at all”/“Never” (scored 1) to “A lot”/“Always” (scored 10).

Table 4 Questionnaire items for the evaluation of the partners’ team morale across the first three co-creation and testing Sprints of the CAPTAIN project.

As recommended by Verwijs, the average score per respondent was calculated over the full set of questions (Verwijs 2012). The team morale was then computed as the average of the individual averages. In addition to this analysis, we also grouped the collected responses into three classes according to the following criteria: (i) ratings from 7 to 10 were labelled as “Positive”, (ii) ratings from 5 to 6 as “Neutral”, and (iii) ratings from 1 to 4 as “Negative”. The cumulative percentages of positive, neutral and negative answers were calculated for each questionnaire item across all respondents. Data were collected and processed in the same way for each of the three considered Sprints.
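The morale computation and the rating classification described above can be sketched as follows; the ratings shown are invented for illustration and are not the study data.

```python
def team_morale(ratings_per_respondent):
    """Average of the individual averages over the nine items
    (the aggregation recommended by Verwijs, as described in the text)."""
    individual = [sum(r) / len(r) for r in ratings_per_respondent]
    return sum(individual) / len(individual)

def label(rating):
    """Classify a 1-10 rating into the three groups defined in the text."""
    if rating >= 7:
        return "Positive"
    if rating >= 5:
        return "Neutral"
    return "Negative"

# Made-up ratings: each inner list holds one respondent's 9 item scores.
ratings = [
    [8, 7, 9, 8, 7, 8, 9, 7, 8],
    [5, 6, 5, 4, 6, 5, 5, 6, 5],
]
print(round(team_morale(ratings), 2))          # 6.56
print([label(r) for r in [8, 5, 3]])           # ['Positive', 'Neutral', 'Negative']
```

Averaging per respondent first, and only then across respondents, prevents respondents who skip items from being weighted differently than those who answer all nine.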

2.4.4 Participants engagement and satisfaction about the co-creation process

The participants’ engagement and motivation were also constantly studied in order to follow a holistic approach in the evaluation of the proposed framework. As the participants in the Sprints, i.e. the CAPTAIN stakeholder community, were the sole source of requirements, their voluntary engagement was crucial, and they played a substantial role as co-creators. Within the CAPTAIN stakeholder network, older adults (> 60 years old) in need of guidance were the primary end-users of the CAPTAIN technology and the ones most engaged in the LL activities. Informal caregivers and healthcare professionals represented the secondary end-users of the CAPTAIN technology and were also included in the CAPTAIN stakeholder community.

Two simple actionable metrics were implemented in order to monitor the overall user engagement and to evaluate the participants’ satisfaction with the co-creation process.

On the one hand, the number of participants in each Sprint was duly recorded at each LL site, guaranteeing their anonymity. To better characterize and monitor the network composition over time, the participant profiles were separated into three main categories: older adults, healthcare professionals and informal caregivers. The total number of participants in each category was computed by pooling the counts obtained from each of the five LLs at the pilot sites. Aggregated numbers were calculated for each Sprint iteration.

On the other hand, every participant was asked to report his/her satisfaction with the LL sessions by choosing the emoticon that best represented his/her evaluation among five possible feeling states (Q4 in Fig. 2). The emoticons were matched with a 5-point Likert scale, where 1 represented complete dislike and 5 indicated that the participant fully enjoyed the co-creation session.

3 Results

3.1 The Captain Agile Methodology And The User Involvement In Co-Creation Activities

The CAPTAIN methodology implemented a fully participatory design and user-centric philosophy, reaching the active involvement and collaboration of all the relevant intended users in all three Sprint development iterations. User participation in the CAPTAIN development process required that the users’ role change from informants to responsible participants, or co-designers, in the design process. The CAPTAIN stakeholders took an active part in the exploration of needs and possibilities, in the design and prototyping, and in the organizational implementation of the CAPTAIN technology. In this perspective, CAPTAIN went far beyond the usual participatory design and requirement elicitation techniques by carrying Agile requirements elicitation and development methodologies through participatory design throughout the whole development lifecycle. The CAPTAIN network of active stakeholders was effectively built by relying on a relevant number of participants who already trusted the five partners’ LLs involved in the project. Primary and secondary end-users effectively supported the participatory design, contributing in a substantial way to the requirements elicitation, design and development processes. The older adults were asked to follow protocols involving daily activities and interaction with smart devices, virtual tangible surfaces and the other CAPTAIN modules and sub-systems. The Agile methodology was applied during the whole development process, and a total of six Sprints were planned in order to properly optimize the CAPTAIN technology. The first three Sprints were successfully concluded within the first half of the project (M0–M18), and data referring to this project period were included in this study.

3.2 Partners’ perspective on development process and team morale across Sprints

3.2.1 Partners’ perspective on CAPTAIN development process

A total of 14 representatives from 11 project partners filled in the questionnaires about their experience of the development process at both M0 and M18. The pool of respondents had consolidated experience in EU project participation, having taken part in an average of 7.2 projects over an average of 8.5 years before joining CAPTAIN. In past projects, they had experienced an average of 2.5 different methodologies for the design and implementation of systems and solutions. Each respondent had followed the traditional waterfall development process in at least one past project.

The partners’ perspective on the development process is summarized in Figs. 3 and 4, reporting the average Likert score obtained for each questionnaire item of Tables 1 and 2, respectively. For all but one questionnaire item (1p), a higher score was associated with the development process adopted in the CAPTAIN project than with previous experiences using the waterfall approach. It is also worth noting that the CAPTAIN Agile process always received positive average ratings when participants were asked about their involvement in the development process (questionnaire items 1a–1l). A single negative average score was obtained for item 1x, related to the CAPTAIN experience of the project requirement elicitation, design and development process, specifically when respondents were asked about their agreement with the statement “I feel that the current methodology does not need to be improved”. Conversely, the majority (16/24, 66.7%) of the questionnaire items regarding partner involvement received a negative score when referring to previous experiences using the waterfall development process.

Fig. 3

Partners’ perspective on project involvement. The average Likert score among all respondents (N = 14) is presented for each item of Table 1. Data obtained at M1 for past experiences using the waterfall development process (blue) are compared to data obtained at M18 regarding the CAPTAIN Agile methodology (red). Partner representatives were asked to select among the following possible answers: “True” (scored +2); “More true than false” (scored +1); “Neither true nor false” (scored 0); “More false than true” (scored −1); “False” (scored −2)
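The scoring scheme in the caption above can be illustrated with a short sketch; the function name and example answers are hypothetical, not taken from the project’s actual analysis pipeline:

```python
# Verbal anchors of the 5-point agreement scale and their numeric scores,
# as used for the "Partners' perspective" items.
ANSWER_SCORES = {
    "True": 2,
    "More true than false": 1,
    "Neither true nor false": 0,
    "More false than true": -1,
    "False": -2,
}

def average_item_score(answers):
    """Average Likert score for one questionnaire item across respondents."""
    scores = [ANSWER_SCORES[a] for a in answers]
    return sum(scores) / len(scores)

avg = average_item_score(["True", "More true than false", "Neither true nor false"])
print(avg)  # 1.0
```

A negative average for an item therefore means that, on balance, respondents leaned toward the “false” end of the scale for that statement.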

Fig. 4

Partners’ perspective on project requirement elicitation, design and development process. The average Likert score among all respondents (N = 14) is presented for each item of Table 2. Data obtained at M1 for past experiences using the waterfall development process (blue) are compared to data obtained at M18 regarding the CAPTAIN Agile methodology (red). Partner representatives were asked to select among the following possible answers: “Strongly agree” (scored +2); “Somewhat agree” (scored +1); “Neither agree nor disagree” (scored 0); “Somewhat disagree” (scored −1); “Strongly disagree” (scored −2)

3.2.2 Partners’ satisfaction questionnaire: CAPTAIN vs. previous overall experiences

The “Partners’ satisfaction questionnaire” was administered to the same pool of respondents who participated in the “Partners’ perspective questionnaire”. The overall partners’ satisfaction with the development process used in the CAPTAIN project, and their satisfaction with previous experiences of development methodologies in other projects, is reported in Fig. 5. The average scores for the four questionnaire items (2a–2d) were between 5 and 8, indicating fair, but sub-optimal, satisfaction ratings. The comparison between the ratings obtained for the CAPTAIN methodology and those for any other development methodology experienced in previous EU projects showed higher satisfaction with the CAPTAIN Agile process across all four questionnaire items. The highest average score, and the largest differential between CAPTAIN and previous EU project experience, was obtained for questionnaire item 2a, indicating a higher probability that partners will propose the CAPTAIN development approach in upcoming projects.

Fig. 5

Partners’ overall perception of the development methodology. The average Likert score among all respondents (N = 14) is presented for each item of Table 3. Data obtained at M1 for past experiences using any development process (deep blue) are compared to data collected at M18 regarding the CAPTAIN Agile methodology (red). Partner representatives were asked to assign each item a numerical score from 1 (“Not at all”) to 10 (“A lot”)

The analysis of the two additional open questions devoted to collecting strengths and limitations of the CAPTAIN Agile methodology allowed us to identify a set of key points, both positive and negative. The key points most frequently cited as strengths by the project partners were the following: “a real implementation of a user involvement process realizing a user centric development” (reported by 11 respondents out of 14, 78.6%); “high flexibility on pivoting technological requirement and project needs” (6/14, 42.9%); “effective requirement elicitation process with good reflection of user needs” (4/14, 28.6%); “high user engagement and satisfaction with user perception of inclusion in the development process” (4/14, 28.6%); “high commitment of the project team, on both technical and pilot partners side” (2/14, 14.3%). In addition to these key aspects, other strengths were recognized, such as the “active participation of all partners to actively solve problems”, the “increased communication between partners”, and the “opportunity to have an early release of technology prototypes in the project”. As secondary positive effects, respondents also pointed out the “high potential for generating scientific publication from early user involvement” and the possibility to have “benefits in user socialization as a result of participating in co-creation sessions”.

On the other hand, the most frequently recognized limitations of the CAPTAIN Agile development process included: “complexity in the efficient management of users during co-creation sessions and in solving possible issues during the demonstrations of technology released at an early stage” (reported by 10 respondents out of 14, 71.4%); “need for frequently adapting the activity time schedule due to possible delays in technology development and user availability” (6/14, 42.9%); “need for extra effort in coordinating technical partners and maintaining effective communication between technical partners and pilots” (6/14, 42.9%); “very short time-frame for technology development” (3/14, 21.4%); “difficulties in effective transposition of user requirements into technology prototypes” (3/14, 21.4%); “difficulties in dealing with an evolving project scope that could substantially differ from the initially planned one” (3/14, 21.4%); “complexity of the requirement elicitation process due to several iterations also in the advanced stage of technology development” (2/14, 14.3%). A possible “increase in the complexity of the requirement prioritization process across the whole development process” was also noted.

3.2.3 CAPTAIN team morale across Sprints

The number of respondents to the questionnaire for the evaluation of the partners’ team morale (Table 4) varied across the three Sprints, being 19, 12, and 25 for the 1st, 2nd, and 3rd Sprint, respectively. Respondents included the following professional profiles: developer, project manager, researcher, pilot facilitator, and pilot coordinator. For all Sprints there was a balanced representation of both the technical and the pilot site partners. Respondents’ experience in EU projects was on average 4.8, 3.7 and 6.2 years for the 1st, 2nd, and 3rd Sprint, respectively. Team morale was good across the three Sprints, with an average rating across respondents of 7.70, 7.50, and 7.68 for the 1st, 2nd, and 3rd Sprint, respectively. The results for “Positive”, “Neutral”, and “Negative” scores are summarized in Fig. 6 using a 100% stacked column chart to allow comparison of the average level of partner satisfaction over the full set of question items (3a–3i). The answers collected for the same question over the three Sprints were grouped to facilitate the observation of possible changes as the development process evolved.

Fig. 6

CAPTAIN team morale across the three Sprints. Partner representatives were asked to assign each item a numerical score from 1 (“Not at all”) to 10 (“A lot”). Responses are grouped into “Positive” (ratings from 7 to 10, in green), “Neutral” (ratings from 5 to 6, in orange), and “Negative” (ratings from 1 to 4, in red). The cumulative percentages of positive, neutral and negative answers are shown for each item of the Table 4 questionnaire, broken down by the 1st (I), 2nd (II) and 3rd (III) Sprint. Data were collected from a total of 19, 12, and 25 respondents for the 1st, 2nd, and 3rd Sprint, respectively
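The Positive/Neutral/Negative banding used for the stacked chart reduces to a simple bucketing rule over the 1-10 ratings. A minimal sketch (hypothetical function name and data, not the authors’ actual code) is:

```python
def morale_breakdown(ratings):
    """Group 1-10 morale ratings into the chart's three bands and
    return each band's percentage share of all responses."""
    bands = {"Positive": 0, "Neutral": 0, "Negative": 0}
    for r in ratings:
        if 7 <= r <= 10:
            bands["Positive"] += 1
        elif 5 <= r <= 6:
            bands["Neutral"] += 1
        else:  # ratings 1-4
            bands["Negative"] += 1
    total = len(ratings)
    return {band: 100 * count / total for band, count in bands.items()}

print(morale_breakdown([8, 9, 6, 3]))
# {'Positive': 50.0, 'Neutral': 25.0, 'Negative': 25.0}
```

Because the three percentages always sum to 100, each questionnaire item can be drawn as one full-height column, which is what makes the 100% stacked format comparable across items and Sprints.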

Overall, the team was generally satisfied with the CAPTAIN Agile methodology. In particular, more than 80% of the team was proud of the work done (questionnaire items 3a and 3b), satisfied with the support received from the other members of the team (item 3g) and with their contribution to the development of the project (item 3h).

Slightly fewer positive scores were found for the question on efficiency and quickness (item 3c), which showed only 50% positive responses during the first Sprint. However, the percentage of satisfied respondents increased across the three Sprints, reaching 67% and 68% in the second and third Sprint, respectively. It is also worth noting that the percentage of dissatisfied respondents never exceeded 20% for any questionnaire item or Sprint.

3.3 Participants’ feedback: the user perspective on the co-creation process

As shown in Fig. 7, the number of participants increased continuously from the 1st to the 3rd Sprint, ranging from a minimum of 91 to a maximum of 132 stakeholders. This increasing number indicates participants’ engagement and satisfaction, as people became part of the CAPTAIN stakeholder community and also invited their friends to join. The majority of the stakeholders (60.5%) were older adults aged 60 or more (mean age 71.6 years, range 60–87 years), self-reporting the need for guidance in one or more of the following CAPTAIN intervention areas: physical activity, nutritional habits, social participation, and cognitive training. The other participating stakeholders were healthcare professionals involved in the care and assistance of older adults (25.4%), and informal caregivers (14.0%) with direct experience in the assistance and care of relatives or friends in the silver age.

Fig. 7

Number of participants (stakeholders) in each of the three different co-creation sessions (Sprints). Data are broken down according to the participant profile

Participants overall enjoyed all three LL sessions, as positive answers exceeded 90% in all three Sprints (Fig. 8). The percentage of very satisfied participants (5 on the Likert scale) kept rising from the 1st to the 3rd Sprint. Negative answers (the worst value on the emoticon scale) appeared only in the 2nd Sprint, and only in a small percentage (2%). The intermediate answer (Likert score 3) was not reported by any participant.

Fig. 8

Participants’ satisfaction with the procedure followed during the CAPTAIN co-creation sessions, reported for the three different Sprints. Answers were captured on a 5-point Likert scale using emoticons. The percentage of answers over the total number of participants is presented for each Sprint. Refer to Fig. 7 for the number and profile of respondents for each Sprint

4 Discussion

A wide range of technologies, including smartphones and sensors, are available today for homecare support. Bouma et al. discussed the improvements that technology could bring to the lives of older adults, recognizing that technology can augment older adults’ ability to perform their routine tasks more effectively, give them access to information they often require, and help them stay more connected to their family and caregivers (Bouma et al. 2004). However, modern information and communication technologies are not designed to be used by older people, as their user interfaces require prior knowledge of interaction metaphors that can be difficult for them to master. Lack of familiarity and technology anxiety can be major barriers to technology adoption in older age (Vaportzis et al. 2017) and often result in limited use of technological assistance within the home environment. Although older adults are increasingly using ICT in their everyday lives, research and empirical evidence show that the majority of them are not fully accustomed to its use. Recent research in the Ambient Assisted Living (AAL) domain has explored projecting guidance directly into the environment (Guerrero et al. 2019). However, only a few empirical studies have attempted to define the type of projection-based user interfaces (UIs) that would be most suitable for older people. When looking at how computer use has increased over recent years and how human-computer interaction has been studied, there is still a clear gap between age groups, which should be addressed (Charness and Boot 2009). Hawthorn pointed out that the acceptance and use of new technologies is often difficult due to convoluted guides and structures (Hawthorn 1998). Several studies have highlighted that designing technologies that explicitly consider older users should be seen as one of the most important tasks (Pisoni et al. 2016). In addition to designers’ ability to make products more desirable for any given market, dedicated design was recognized as potentially improving older people’s quality of life (Rogers and Czaja 2004).

To overcome these limitations, the CAPTAIN project implemented a specific Agile participatory design methodology, lasting for a relevant part (from M7 to M30) of the triennial project, during which the stakeholders, with emphasis on the older adults, visited the LLs at pilot sites, identified the true user needs, acted as co-creators, and interacted with and gave feedback on all the intermediate released versions of the CAPTAIN frameworks. Wilkinson et al. previously applied a participatory design approach to the initial stages of a European project aiming to create technology assistance for older people using wheelchairs (Wilkinson and De Angeli 2014). To progress further, the CAPTAIN methodology implemented a modified Agile framework to fully exploit the potential of participatory design, through an iterative process that was not included in Wilkinson et al.’s example. Agile was chosen for its simplicity and flexibility, emphasizing empirical feedback and team self-management while striving to build properly tested product increments within short iterations. This approach was considered essential to progressively check with users the delivery of new intuitive ways of human-computer interaction and, at the same time, to keep the stakeholder network engaged across the whole project duration. Agile and co-creation methodologies were also recently proposed in the ACCRA (Agile Co‑Creation for Robots and Aging) H2020 project, where an Agile co-creation, experimentation and evaluation methodology was applied to develop assistive robots for older adults, integrating the co-creation process with Agile programming (Fiorini et al. 2019). Other existing projects funded under the same EU topic have also tended to adopt user-centred design methodologies. More specifically, the EU project “Council of Coaches” (Akker et al. 2018) is using an iterative user-centred design methodology in order to identify the context-of-use for technology in the targeted audience (Broekhuis 2018). However, they do not address the internal flow of communication of context, requirements and needs. The “WellCo” (Göransson et al. 2017), “Empathic” (Brinkschulte et al. 2018), “NESTORE” (Angelini et al. 2018) and “Holobalance” (Kouris et al. 2018) EU projects follow a design methodology in which the first version of the system is co-designed and then implemented and tested, without specifically addressing the margin left for applying changes in every phase and how this will be considered. The EU project “vCare” (Kropf et al. 2020) has adopted a methodology of presenting consecutive prototypes to the end-users, beginning with contextual prototypes and moving to functional ones. Overall, the use of co-creation methodologies is prevalent in many very recent initiatives aiming to produce innovative technologies. Although the use of innovative methods in the design process is clear in all the above-mentioned projects, they do not particularly address the issues of communicating requirements to the development team or of measuring and adapting the methodology used based on the consortium’s needs.

The CAPTAIN development methodology required an intensive effort to properly coordinate activities among technical and pilot partners, carefully considering the logistic and time constraints of working with a number of stakeholders distributed over several pilot sites. Agile development in such physically and organizationally distributed environments has been recognized as posing a major challenge (Ramesh et al. 2012), but it is frequently a necessity for European consortia. To face this challenge, a study plan was defined early in the project to provide an operative framework defining activities, responsibilities and timelines for all the project partners involved in the different phases, in order to allow the structured and coherent involvement of participants, the development and testing of the technology, and effective data collection and evaluation (Tessarolo et al. 2019). The CAPTAIN study plan coordinated and synchronized activities among all the technological partners and the LLs at pilot sites, while being flexible enough to accommodate both the Agile methodology and the need to effectively implement project monitoring and a multidimensional technology assessment.

Despite this planning effort, CAPTAIN’s new methodology still required multiple adaptations and revisions, and a higher capability for swift problem solving. The Agile framework is, at its very core, an empirically developed framework that needs to go through multiple adaptation cycles. One of its basic advantages is that it enabled the consortium to come up with quicker and more effective answers to problems that arose, while keeping the focus on providing value to the end-users by making use of co-creation methodologies. However, several challenges were also identified and needed to be addressed as the methodology evolved. In addition, the distributed nature of the consortium made direct communication more difficult and sometimes time-consuming, as the team could not meet frequently in person.

In this context, monitoring the team status using stable and reliable actionable metrics was essential for the achievement of the project objectives. The evaluation of team morale was suggested as more appropriate than other metrics such as team happiness. Happiness was found to be too subjective and susceptible to fluctuation, possibly due to factors not related to the team and project tasks (Verwijs 2012). Team morale is more focussed on the team and the tasks, can provide a more appropriate picture of whether the team is working smoothly and feeling well, and was also proposed as an appropriate tool for measuring perceived group performance (Boxmeer et al. 2007). In the CAPTAIN experience, we found this metric quite robust against possible bias due, for example, to the initial enthusiastic reaction of team members to a new and innovative development approach, like the one adopted in CAPTAIN, which can physiologically decrease along the project. The data presented in Fig. 6 show an overall stability of team morale across the three Sprints.

However, considering only the team’s point of view was not sufficient to monitor the development process in a project based on real participatory design leveraging user involvement. The engagement and satisfaction of participants in the co-creation sessions was, indeed, of primary importance for the development cycles. The stakeholder community was the main source of requirements for the CAPTAIN project, and their participation played a substantial role in all the development phases. One of the participants stated at the 3rd Sprint that: “All the things that we were discussing are becoming real and that moves me”. This statement fully reflects the CAPTAIN philosophy, as the main goal of the whole procedure is to develop value for the end-user. Furthermore, many participants mentioned that they “learned new things throughout the process” and that “it helped them stay engaged and motivated”, which is critical for providing meaningful input and valuable output. These feelings were well captured by the measurement of both an engagement metric (i.e. the number of participants in the co-creation sessions) and the survey of participant satisfaction via the 5-point Likert scale using emoticons.

Implementing metrics to monitor participants’ perception across the whole length of a multiple-iteration development process was relevant for the CAPTAIN project to engage users for long periods, despite the difficulties of keeping users motivated and proactive for a long time.

5 Conclusions

The CAPTAIN project successfully went beyond the usual participatory design and requirement elicitation techniques by applying a structured methodological approach incorporating Agile requirements elicitation and development methodologies through participatory design throughout the development lifecycle. To this aim, CAPTAIN built a network of active stakeholders that supported the participatory design through multiple iterations, in which primary and secondary users interacted with the new versions of the CAPTAIN system, providing their feedback and suggesting changes or new features. This process, despite requiring a substantial investment in terms of technical coordination, activity planning and communication among technical and pilot partners, resulted in an effective implementation of user needs, with high satisfaction and engagement from both the partners’ and the stakeholders’ perspectives. The feedback obtained from both team members and stakeholders involved in co-creation activities showed that this mixed methodology was feasible within a representative distributed European consortium. Furthermore, the whole development process was well perceived by both team members and stakeholders, despite the fact that it is not frequently applied in EU projects, having been primarily developed for use in company development processes.

Provided that robust management tools are put in place to constantly monitor the development process and to properly incorporate Agile concepts into the project activity plan, the proposed methodological framework could guide development activities within distributed organizations (e.g. European project consortia), enabling them to comply more effectively with evolving user needs and expectations.