
1 Introduction

Population ageing, or what Kaplan et al. (2017, p. v) call the “longevity revolution”, bears testimony to the remarkable modern health and medical advancements that have been achieved globally (WHO, 2015, 2020). While longevity is to be celebrated, the reality is that the age-related care needs of the growing number of older individuals are also increasing exponentially (see Chap. 1). The World Health Organization (WHO) proposed a framework for including older individuals in the provision of affordable and accessible care and primary health services, and for doing so in age-integrated societies and communities (WHO, 2015, 2020) where people of all ages have access to infrastructure (e.g. housing, safe neighbourhoods, and physical spaces for recreation), services (basic and municipal), transport, the social environment (education, recreation, and physical and spiritual activities) (Kaplan et al., 2017; Walsh et al., 2017; Warth, 2016; WHO, 2007), and technology (Lui et al., 2009; Menec et al., 2011). In age-inclusive communities and societies, people at every stage of life attend to their commonalities or shared interests in a trusting and reciprocal, caring manner (Annan, 1998; Kaplan et al., 2017; United Nations, 2002). Intergenerationality is, therefore, implicit in the notion of age-inclusiveness and is promoted by social connectedness, engagement, and respect (Annan, 1998; Steels, 2015).

The use of information and communication technology (ICT) is globally considered a feasible approach for providing age-integrated services (WHO, 2015). In developed countries, for example, technology is used to link older individuals with their healthcare teams, communities, and social services, while also providing healthcare workers with useful information (Calvert Jr et al., 2009; Cerrito et al., 2015; WHO, 2015). Planning and implementing appropriate ICT interventions (eInterventions) to enhance older individuals’ inclusivity assume that relevant knowledge about their use of technology is available, but this is not always the case. In this chapter, we present the longitudinal iterative process we followed to develop a questionnaire on older persons’ cell phone use for a variety of research purposes. We provide a version (online and included at the end of this chapter) for further revisions and development—to the best of our knowledge, this is the first online questionnaire developed specifically to collect this kind of information for developing country conditions.

Specifically, the wide uptake of cell phones in sub-Saharan Africa presents unexplored opportunities to promote and improve access to service delivery for all, including older individuals. This is particularly relevant in an emerging country such as South Africa, where the increasing numbers of older individuals, the rise in non-communicable diseases, and wavering (instrumental as well as emotional) support from younger people present obstacles to the appropriate delivery of social and healthcare services to older individuals (see Chap. 1). Because little is known about how older individuals in South Africa use cell phones, and no relevant questionnaires accessible for the purpose of obtaining this information could be found, a dedicated questionnaire was developed, following a pragmatic approach. Pragmatism assumes that knowledge (in this instance, regarding older individuals’ cell phone use) is obtained through iterative processes to find solutions for problems in physical and social contexts (in our case, related to service delivery) (Campbell, 2011; Dixon, 2019; Rorty et al., 2004).

Several self-designed questionnaires for collecting information about the cell phone use of older individuals have been referenced in the literature, but the full questionnaires are for the most part not provided—only descriptions of the sections included in the questionnaires are reported (Lee, 2007; Rahim et al., 2020). To address this gap, we decided to develop our own questionnaire and make it publicly available. In addition, we believed that reporting on the developmental process would assist researchers with similar needs to draw up further relevant questionnaires. As Foxcroft (2004) advised, the research context, the target population, potential sociocultural influences, and the appropriate method of administration need to be considered throughout the process of developing a questionnaire.

Our first questionnaire (Version 1) was developed in 2014 to obtain baseline data of older individuals’ cell phone use in South Africa, as part of a small self-funded study entitled Older Individuals’ Cell Phone Use and Intra/Intergenerational Networks (iGNiTe). Version 1 was subsequently adapted to create Version 2 in 2017, when funding had been obtained for the project called we-DELIVER: Holistic Service Delivery to Older People by local government through ICT. Based on our findings, we then developed Version 3—the AGeConnect questionnaire—which we present in this chapter. The sequential development of the three versions of the questionnaire is shown in Fig. 5.1 and the process we followed is discussed in detail in the rest of this chapter.

Fig. 5.1 Sequential development of versions of a questionnaire on older persons’ cell phone use applied to different research processes

2 The iGNiTe Questionnaire (Older Individuals’ Cell Phone Use and Intra/Intergenerational Networks)

The need to obtain information about older South Africans’ cell phone use stimulated a discussion among three social gerontologists (Jaco Hoffman, Doris Bohman, and Vera Roos) and resulted in the development of the iGNiTe questionnaire. The items that they suggested for inclusion were based on their collective social gerontological expertise (sociology, nursing, and psychology) (V. Roos, personal communication, 6 February 2017). Items were not organized according to specific categories in the questionnaire but served to collect information on the following topics:

  • Biographical information: items related to older individuals’ age, gender, place of residence, level of education, and household composition;

  • Items required for application of the South African Audience Research Foundation’s (SAARF) Universal Living Standards Measure (SU-LSM™). Developed as a research tool, this measure has become a widely used segmentation tool in South Africa (Haupt, 2017; SAARF, 2017). The original measure consisted of 25 questions that classified the population into levels from 1 to 10, with 1 indicating very low income and minimal access to services, and 10 indicating high income and full access to services; the latest version of the SU-LSM™ consists of 29 questions and provides a more refined measure of living standards, including ownership of certain household items (Eighty20, n.d.; Haupt, 2017; SAARF, 2017);

  • Cell phone information: items about older participants’ access to, ownership of, and use of cell phones, as well as details about the phones’ functionalities;

  • Cell phone user patterns: items including questions about older individuals’ use of specific cell phone functions, ranging from basic to more advanced, and frequency of use;

  • Social networks around older persons’ cell phone use: items on social arrangements around older individuals’ cell phone use;

  • Cell phone use competence: items related to older individuals’ self-perceived knowledge, skills, and attitude;

  • An open-ended question at the end of the questionnaire: this asked older participants how they had experienced participating in the technology-based questionnaire about their cell phone use.

In 2014 a concurrent mixed methods research design (see Fetters et al., 2013) was applied and the iGNiTe questionnaire was administered to older participants (n = 128) in three communities in the Potchefstroom area (120 km south-west of Johannesburg) in the North West province of South Africa. In addition to the questionnaire, three qualitative methods were employed to collect further information from a total of 52 older individuals, who participated in semi-structured interviews (n = 23), focus groups (n = 10), and the visual data-collection Mmogo-method® (Roos, 2008, 2016) (n = 19).

2.1 Participants and Data Collection

Purposive sampling was used to identify three day-care centres for older persons in the Potchefstroom area in close proximity to the researchers, and criterion sampling was applied to select the older participants (see Patton, 2002). Participants were selected based on the following inclusion criteria: persons 60 years or older who had access to a cell phone, and who did not present with any observable cognitive impairments preventing them from interacting coherently with the researchers. Version 1 of the questionnaire was uploaded onto digital devices (cell phones or tablets) to capture responses directly on the SurveyToGo application (dooblo.net, 2005). Informed by the idea that age-inclusiveness is promoted through intergenerationality (Kaplan et al., 2017), we invited students familiar with the vernacular and sociocultural context of the older participants to be trained as fieldworkers (see Chaps. 3 and 4). Drawing on pragmatism and Dewey’s (1998) notion that communication is transformation, we assumed that the communication processes between the younger and older people could alter or redirect the older individuals’ relationship with technology positively (Dixon, 2019). Unfortunately, owing to problems including lack of transport and child care responsibilities, some older individuals—mainly from low-resourced areas around Potchefstroom (Promosa and Ikageng) to which they had previously been removed—were unable to attend on the day the data were collected. Consequently, data skewed towards people with higher LSM levels were obtained. The biographical information of participants is provided in Table 5.1.

Table 5.1 Characteristics of the iGNiTe participants (n = 128)

The results were skewed towards the majority of the selected older participants, who lived in Potchefstroom (almost 60%) and had completed 12 years of formal education, including those with postgraduate degrees. The remaining participants reported primary-level education or no formal education. In this sample of older participants, about half reported higher LSM scores (57.5% on levels 8 to 10), and the rest reported LSM levels 4 to 7.

The findings from the semi-structured interviews, focus groups, and the Mmogo-method® provided information (first reported in master’s students’ dissertations and later published in articles) about the following topics related to the older participants’ cell phone use: a lack of basic skills and knowledge to use cell phones, compensated for by applying various relational strategies (Steyn et al., 2018); perceived level of competence in using cell phone devices and different cell phone features (Leburu et al., 2018); assistance from younger people with cell phone use (Leburu et al., 2018; Scholtz, 2015); and reasons for using cell phones (Lamont et al., 2017). The findings reported in these published articles informed the revisions that produced Version 2 of the questionnaire.

2.2 Statistical Analysis and Results from the iGNiTe Questionnaire

The sample was described using the results of a frequency analysis and descriptive statistics, including means and standard deviations. The content validity of the questionnaire was confirmed by the three social gerontologists (Jaco Hoffman, Doris Bohman, and Vera Roos), who reviewed the subject matter. Reliability was determined with Cronbach’s alpha (α), with the suggested acceptable cut-off value of α > 0.70 (Field, 2018). Because no predefined factor structure existed, reliability was calculated for the questionnaire as a whole and for two possible subscales identified by visual inspection. These two subscales were labelled Frequency of feature use (items 15 to 28, e.g. “How often do you make and receive calls?”, “How often do you go on the internet?”, “How often do you take photos?”) and Attitude towards the phone (items 32.1 to 32.7, e.g. “The phone menu is understandable”, “My airtime limits my functions”, “I know how to work with my phone”). The reliability coefficients were found to be unacceptable for one potential subscale (Attitude towards the phone: α = 0.64) but acceptable for the complete questionnaire and the other potential subscale (iGNiTe questionnaire: α = 0.89; Frequency of feature use: α = 0.78).
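For readers who wish to reproduce this type of reliability check, the following is a minimal sketch (not the code used in the study) of how Cronbach’s alpha can be computed; the data file and the item column names (q15 to q28 and q32_1 to q32_7) are hypothetical placeholders based on the item numbers mentioned above.

```python
# Minimal sketch of Cronbach's alpha for the two visually identified subscales.
# The file name and column names are hypothetical; adapt them to the actual data layout.
import pandas as pd


def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total score)."""
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)


df = pd.read_csv("ignite_responses.csv")                   # hypothetical file
frequency_items = df[[f"q{i}" for i in range(15, 29)]]     # items 15 to 28
attitude_items = df[[f"q32_{i}" for i in range(1, 8)]]     # items 32.1 to 32.7

print("Frequency of feature use:", round(cronbach_alpha(frequency_items), 2))
print("Attitude towards the phone:", round(cronbach_alpha(attitude_items), 2))
```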

Exploratory factor analyses (EFAs) were applied to explore the factor structure of the questionnaire. The iGNiTe questionnaire did not contain specific sections: the list of 41 items began with the biographical questions, followed by the rest in no particular order. EFAs were conducted on the two visually identified possible subscales: Frequency of feature use, and Attitude towards the phone. Based on the EFA results, it was suggested that Frequency of feature use could be split into three factors: “Basic feature use”, “Advanced texting and imaging”, and “Internet-dependent features”. The number of items could be reduced; for example, item 17 (“How often do you send and receive an MMS?”) did not load on any factor. Items could also be added (e.g. “How often do you look at the time?”) to collect more detailed information where necessary, or rephrased (e.g. “How often do you play music/radio?”) to reflect changes in cell phone technology. The potential subscale Attitude towards the phone indicated a one-factor structure, with only four of the seven items loading on the factor. The suggestion was to re-evaluate whether the items included did in fact measure attitude and whether they were clear and unambiguous. The intent of the EFAs was to explore the factor structure of the questionnaire and to provide suggestions that might improve its model fit and reliability, thereby increasing the quality of data collected in future.
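As an illustration of how such an EFA could be run in open-source software, the sketch below assumes the Python factor_analyzer package and the same hypothetical item columns as in the previous sketch; it fits the suggested three-factor solution with an oblique rotation and flags weak loadings. It is not the authors’ analysis code.

```python
# Sketch of a three-factor EFA on the hypothetical frequency-of-use items.
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

df = pd.read_csv("ignite_responses.csv")                  # hypothetical file
items = df[[f"q{i}" for i in range(15, 29)]].dropna()     # items 15 to 28

# Three factors, oblique rotation, minimum-residual extraction.
efa = FactorAnalyzer(n_factors=3, rotation="oblimin", method="minres")
efa.fit(items)

loadings = pd.DataFrame(
    efa.loadings_,
    index=items.columns,
    columns=["Basic", "Advanced texting/imaging", "Internet-dependent"],
)
print(loadings.round(2))
# Items whose strongest loading stays below 0.35 are candidates for removal.
print("Weak items:", list(loadings.index[loadings.abs().max(axis=1) < 0.35]))
```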

Although these suggestions from the iGNiTe results were considered in developing the second (we-DELIVER) version, it was noted that the very small ratio between the number of items (41) and the sample size (n = 128), approximately 1:3.12 (about three participants per item), was a definite limitation to confident interpretation of the EFA results.

3 The we-DELIVER Questionnaire (Holistic Service Delivery to Older People by Local Government through ICT)

In 2017, funding was obtained to gather data about older South Africans’ cell phone use to promote municipal service delivery. The small self-funded iGNiTe study was deliberately expanded to include a wider range of communities. Continuously revising and modifying a solution based on the outcomes of actions, in order to address the problem appropriately, is in line with a pragmatic approach (Dixon, 2019). Accordingly, the iGNiTe questionnaire was revised drawing on the results of the original statistical analyses and on transdisciplinary consultation within the research team, which consisted of senior and junior researchers, as well as student fieldworkers, from the following disciplines: law, public administration, demography and population studies, development studies, social work, psychology, language studies, biokinetics, information systems, and socio-gerontology (see Chap. 4 for a detailed discussion). Qualitative findings from the iGNiTe study further informed the revisions. The we-DELIVER project’s specific focus on service needs informed the inclusion of items in this regard (https://we-deliver.github.io/team). In Version 2 (we-DELIVER), items were arranged in sections, and items on specific topics were added or revised for greater clarity. For example, reference to the use of multimedia messaging services (MMSs) was removed, and questions about taking selfies and making voice recordings were added. Table 5.2 summarizes the changes made for the we-DELIVER questionnaire.

Table 5.2 Changes made to the iGNiTe questionnaire (Version 1) in developing the we-DELIVER questionnaire (Version 2)

The we-DELIVER questionnaire was developed to obtain specific information about older individuals’ cell phone use and their needs for municipal services. Five questions relevant to addressing the we-DELIVER project informed the revisions of Version 1:

  1. Which cell phones and cell phone functionalities do older persons use in the context of multigenerational families?

    • To how many cell phones do older persons have access?

    • Which types of cell phones are used?

    • To whom do the cell phones belong?

    • Who else has access to the cell phones?

    • Who chose the cell phones being used?

    • Who pays for the data and airtime?

  2. What are the cell phones used for?

    • Basic cell phone features?

    • Advanced cell phone features and imaging?

    • Internet-dependent cell phone features?

    • Care needs and relational regulation?

  3. What is older persons’ self-perceived competence (knowledge, skills, and attitudes) with regard to their use of cell phones?

  4. What service needs are addressed by using a cell phone?

  5. What are the intergenerational patterns around older persons’ cell phone use?

3.1 Structure of the we-DELIVER Questionnaire

The structure of the we-DELIVER questionnaire (Version 2) is presented in Fig. 5.2.

  • Biographical information: age, gender, level of education, living arrangements, and household size.

  • About the cell phone: items related to access and ownership.

  • Cell phone user patterns: participants’ self-reported ability to use the phones’ different features, categorized as basic, advanced, and internet-dependent.

  • Cell phone user patterns (care and relational regulation): items about reasons for using cell phones in relation to making and receiving contact with people. Specific items about social, health, and emergency service needs were included under this heading to answer the research questions guiding the we-DELIVER project.

  • Perceived competence (knowledge, skills, and attitude).

  • Intergenerational patterns: items about how contact is made, who is contacted, and frequency of contact.

Fig. 5.2 Structure of the we-DELIVER questionnaire

3.2 Translation and Pilot Study

Two socio-gerontologists (Vera Roos and Jaco Hoffman) with extensive research and practical experience of issues affecting the lives of older persons, together with a transdisciplinary research team (consisting of first- and second-language Setswana speakers), formulated the additional and revised items for the we-DELIVER questionnaire. The questionnaire was translated from English into Setswana by researchers in African languages at the Mahikeng campus of the North-West University (NWU). Setswana is the main language used in Lokaleng and Ikageng, two of the selected communities. The translated version was given to 15 mother-tongue Setswana speakers to verify its comprehensibility by identifying ambiguous or unclear wording. It was then translated back into English by a translator affiliated to the NWU language directorate. The transdisciplinary research team compared the two English versions to check the accuracy and appropriateness of the translation for the particular setting for which it was intended. This back-translation process was followed so that language accuracy could be checked even by language experts who were not familiar with the target languages, the quality of the translation examined, and potential errors detected (see Foxcroft & Roodt, 2018).

After piloting the translated questionnaire with older persons (n = 27), the research team discussed issues related to the wording of items that lacked equivalent constructs in the indigenous languages. For example, there is no word in Setswana for “air conditioner”. Another issue concerned the use of concepts that were not familiar in rural contexts, such as access to home security services (one of the SU-LSM™ questions); some participants in the pilot study responded affirmatively by saying that they owned a dog. The transdisciplinary input was used to revise, simplify, and finalize the questionnaire. Finally, to ensure consistent quality, a community psychologist, who was not part of the research team but was familiar with Setswana and the relevant sociocultural context, checked the questionnaire word by word to ensure that the phrasing would yield accurate information for addressing the research questions and would also be easily understandable, so as to encourage optimal participation. Revisions were made before the questionnaire was translated into Sesotho and isiZulu (the main languages used in the other target communities) by a lecturer affiliated to the languages department of the NWU’s Vanderbijlpark campus who was familiar with the vernacular and sociocultural context of the research communities.

3.3 Data Collection and Participants

Questionnaires were uploaded on digital devices (cell phones or tablets) and trained student fieldworkers captured the participants’ responses (n = 302) on SurveyAnalytics (https://www.surveyanalytics.com).

Purposive sampling (see Patton, 2002) was used to select communities located close to the three NWU campuses (in Mahikeng, Potchefstroom, and Vanderbijlpark). The older participants were selected by criterion sampling (see Patton, 2002) and included persons 50 years or older who had access to a cell phone and who did not present with any observable cognitive impairments that would have prevented them from interacting coherently with the researchers. Most of the older individuals who participated in the we-DELIVER project resided in Lokaleng, Ikageng, and Sharpeville (see Chap. 3 for a detailed discussion of the research communities); 15 (5.0%) lived in unspecified areas, and four participants did not indicate where they lived. Participation was skewed towards women; fewer than a quarter of the participants were male. In this sample, 70% of the participants had completed primary-level education or had no formal education, and only 2.0% had completed a postgraduate education. Almost half (48.2%) lived in households of five or more people, and many lived with their children (54.9%) and/or their grandchildren (57.2%). With regard to the SU-LSM™, only 7.4% reported LSM levels of 8 to 9, the majority (75.6%) indicated levels 4 to 7, and almost a quarter noted LSM levels 1 to 3. Table 5.3 provides information about the participants’ characteristics.

Table 5.3 Characteristics of the we-DELIVER participants (n = 302)

3.4 Statistical Analyses and Results

Results of the data analyses are discussed in detail in Chap. 6; here, the results pertaining specifically to the revision of the we-DELIVER questionnaire are presented. These results informed the revisions and the development of Version 3 (AGeConnect). Included in this section, therefore, are the means with standard deviations, internal consistency, confirmatory factor analyses, and exploratory factor analyses.

3.4.1 Descriptive Statistics and Reliability

SPSS 26 (IBM Corporation, 2020) was used to calculate the descriptive statistics and reliability coefficients. The means (M) are reported with their associated standard deviations (SD) to assist with interpretation of the meaningfulness of the calculated averages. The M for each variable is calculated according to that variable’s specific measurement scale and should therefore not be compared directly with other means. Because an SD cannot be negative, the guideline applied here was that an SD larger than 1.00 indicates responses too widely distributed for the M to be meaningful (Field, 2018). Cronbach’s alpha (α) was used to compute reliability coefficients, with a suggested cut-off point for acceptable reliability of 0.70 (Field, 2018). Reliabilities for subsections of the we-DELIVER questionnaire are provided with their descriptive statistics in Table 5.4.

Table 5.4 Descriptive statistics and reliability coefficients
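Descriptive statistics and reliability coefficients of the kind summarized in Table 5.4 can be reproduced with a few lines of code. The sketch below assumes the Python pingouin package and hypothetical item names for three of the subsections; it is not the SPSS syntax used in the study.

```python
# Sketch of per-subsection means, standard deviations, and Cronbach's alpha.
# The data file and column names are hypothetical placeholders.
import pandas as pd
import pingouin as pg  # pip install pingouin

df = pd.read_csv("wedeliver_responses.csv")               # hypothetical file

subsections = {
    "Frequency of use: basic features": [f"basic_{i}" for i in range(1, 6)],
    "Frequency of use: advanced features": [f"adv_{i}" for i in range(1, 8)],
    "Frequency of use: internet-dependent features": [f"net_{i}" for i in range(1, 8)],
}

for name, cols in subsections.items():
    scores = df[cols].mean(axis=1)                        # mean subscale score per participant
    alpha, _ci = pg.cronbach_alpha(data=df[cols].dropna())
    print(f"{name}: M = {scores.mean():.2f}, SD = {scores.std():.2f}, alpha = {alpha:.2f}")
```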

The number of working cell phones per household was reported to be, on average, just above 2 (M = 2.24, SD = 1.78), with most participants indicating that they sometimes used a cell phone (Scale: 0–2; M = 1.07, SD = 0.62), especially its basic features (Scale: 0–3; M = 2.45, SD = 0.89). Advanced and internet-dependent features were used much less (Scale: 0–3; M = 0.38, SD = 0.60, and M = 0.29, SD = 0.54, respectively). Regarding levels of knowledge and skill as well as attitude toward cell phones, participants reported the following (on a scale of low, medium, high): Average self-perceived level of knowledge = 1.64 (SD = 0.71); Average self-perceived level of skill = 1.40 (SD = 0.63); and Average self-reported attitude toward cell phones = 1.94 (SD = 0.97).

There were two subsections in which reliability was found to be below the preferred 0.70 threshold: “Frequency of use of basic features” (α = 0.49), and “Perceived level of skill” (α = 0.63). “Frequency of use of advanced features” and “Frequency of use of internet-dependent features” resulted in acceptable alphas of 0.76 and 0.70, respectively. “Perceived level of knowledge” showed a reliability index of 0.87, while the subsection “Attitude” achieved an alpha of 0.83. On closer inspection of the two unsatisfactory subsections, it was not possible to pinpoint any specific item in either that might have influenced their reliability coefficients negatively. Before adapting or removing items could be considered, however, model fit needed to be investigated.

3.4.2 Confirmatory Factor Analysis (CFA)

CFAs were conducted to attempt to confirm the proposed factor structures of the applicable latent variables of the we-DELIVER questionnaire. The robust maximum likelihood estimator (MLR) was specified because it accounts for the skewness and kurtosis found in the data. CFAs were conducted for two subsections: “Frequency of use of features” (containing three factors: basic, advanced, and internet-dependent features) and “Knowledge, skill, and attitude” (three factors). “Frequency of use of features” was measured on the scale Never, Once a month, Once a week, Once a day, and A few times a day, regardless of whether participants used the feature themselves or asked someone to help them. The results of the two CFAs are provided in Table 5.5. The fit statistics reported include chi-square (χ2; lower values indicating better fit) and degrees of freedom (df), as well as the root mean square error of approximation (RMSEA; acceptable <0.08; excellent <0.05), the comparative fit index (CFI; acceptable >0.90; excellent >0.95), the Tucker-Lewis index (TLI; acceptable >0.90; excellent >0.95), and the standardized root mean square residual (SRMR; acceptable <0.08) (Wang & Wang, 2020).

Table 5.5 Fit statistics of confirmatory factor analyses
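The CFAs themselves were run in Mplus. Purely as an illustration of how a comparable three-factor measurement model can be specified in open-source software, the sketch below uses the Python semopy package with hypothetical item names; note that semopy does not offer the MLR estimator, so this approximation relies on its default maximum-likelihood objective.

```python
# Sketch of a CFA for a three-factor "Frequency of use of features" model.
# Item names and the data file are hypothetical placeholders; this is not the Mplus setup.
import pandas as pd
import semopy  # pip install semopy

df = pd.read_csv("wedeliver_responses.csv")     # hypothetical file

model_desc = """
Basic    =~ basic_1 + basic_2 + basic_3 + basic_4 + basic_5
Advanced =~ adv_1 + adv_2 + adv_3 + adv_4 + adv_5 + adv_6 + adv_7
Internet =~ net_1 + net_2 + net_3 + net_4 + net_5 + net_6 + net_7
"""

model = semopy.Model(model_desc)
model.fit(df)                                   # default maximum-likelihood objective
fit_stats = semopy.calc_stats(model)            # fit indices such as chi2, df, CFI, TLI, RMSEA
print(fit_stats.T)
```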

Neither of the two models (Frequency of feature use, and Knowledge, skill, and attitude) could be used for further analysis. “Frequency of feature use” produced a non-positive definite latent variable covariance matrix, indicating a negative variance or residual variance for a latent variable, a correlation between two latent variables larger than or equal to 1.00, or a linear dependency among more than two latent variables. Moreover, even though “Knowledge, skill, and attitude” achieved acceptable levels for CFI, TLI, and RMSEA, the SRMR value was too high (SRMR = 0.134). The correlation between Knowledge item 11 (I know how to check my cell phone balance) and Skill item 6 (I can check my cell phone balance on my own) was measured as 0.994, suggesting that these two items could be combined, as they were measuring the same information. Two Skill items and three Attitude items did not load well on their respective factors (loadings should exceed 0.35), indicating that some items could be removed without jeopardizing the strength of the constructs. The high correlation between the latent variables Knowledge and Skill (r = 0.989) might also be an indication that participants did not distinguish between the two concepts, or that the phrasing of the items made the distinction unclear.

Because of the problems described with the two models, it was decided to carry out EFAs on the two subsections (Frequency of feature use, and Knowledge, skill, and attitude).

3.4.3 Exploratory Factor Analysis

Mplus 8.6 (Muthén & Muthén, 1998–2021) was used to explore the factor structure of the items, and the same steps were followed for both subsections. First, an initial EFA was performed to ascertain the number of possible factors (with Eigenvalues >1.00) contained within the specified items. Then the number of factors to be extracted was specified and the resulting model fit compared. Last, a new factor structure was suggested where needed, and items to be removed or adapted were indicated.
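To make this two-step procedure concrete, the following sketch (again assuming the Python factor_analyzer package and hypothetical column names, rather than the Mplus code actually used) first inspects the eigenvalues and then fits competing one- to five-factor solutions for comparison.

```python
# Sketch of the EFA steps: eigenvalue inspection, then competing factor solutions.
# The data file and column names are hypothetical placeholders.
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

items = pd.read_csv("wedeliver_feature_use.csv").dropna()   # 20 hypothetical feature-use items

# Step 1: eigenvalues of the correlation matrix (Kaiser criterion: > 1.00).
fa_unrotated = FactorAnalyzer(rotation=None)
fa_unrotated.fit(items)
eigenvalues, _ = fa_unrotated.get_eigenvalues()
print("Eigenvalues > 1.00:", sum(eigenvalues > 1.0))

# Step 2: extract one- to five-factor solutions and inspect the rotated loadings.
# (The AIC/ABIC comparisons reported in the text were done in Mplus; factor_analyzer
# does not report them, so only loading patterns are shown in this sketch.)
for k in range(1, 6):
    efa = FactorAnalyzer(n_factors=k, rotation="oblimin", method="minres")
    efa.fit(items)
    loadings = pd.DataFrame(efa.loadings_, index=items.columns).round(2)
    print(f"\n{k}-factor solution (loadings < 0.35 blanked):")
    print(loadings.where(loadings.abs() >= 0.35, ""))
```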

Frequency of Feature Use The three factors (Basic, Advanced, and Internet-dependent) contained 20 items in total. In line with the Eigenvalues larger than 1.00, solutions with one to five factors were extracted from the 20 items. Inspection of the separate EFAs showed that the item “listen to the radio/music” did not load strongly on a specific factor in any of the EFAs, but instead showed several significant cross-loadings between different possible factors. For future use, the decision was made to split this item into “Listen to music” (under basic features) and “Listen to the radio” (under advanced and data-dependent features), because a participant could already have music stored on the phone but would have to use data to connect to a radio station. Inspection of the possible factor structure solutions extracted from the data included comparison of the models’ Akaike information criterion (AIC) and sample-size adjusted Bayesian information criterion (ABIC) values (with lower values indicating better fit) (Wang & Wang, 2012). Because model fit improves as more factors are allowed, these comparisons had to be balanced against the patterns of significant item loadings across the five possible factor structure solutions. Finally, the most appropriate solution was to split the items into four factors for use in the next version of the questionnaire:

  • Basic feature use (3 items: make and receive calls; look at the time; look at the date and calendar);

  • Intermediate feature use (5 items: send and receive SMSs; use the alarm clock; set reminders, e.g. for appointments, to take medication; give and receive family news; listen to music saved on the cell phone);

  • Advanced and data-dependent feature use (11 items: use WhatsApp etc.; play games; send voice notes e.g. on WhatsApp; use the calculator; send and receive email; use Google to search for information; access Facebook [and/or other social media platforms, e.g., Twitter, Instagram]; use internet banking; read local and/or international news; listen to the radio; watch TV/videos, e.g. YouTube, Netflix); and

  • Imaging feature use (3 items: take photos; take selfies; look at photos).

It was also apparent from the statistical results that participants sometimes not only used the features themselves but also asked someone else for help. Therefore, the measurement scale was changed, and Yes and No were replaced with categories indicating frequency of use. A choice of Never would indicate a No answer, while choosing any of the other options would imply a Yes answer. The categories indicating frequency of feature use were also revised, because they had referred to different time intervals, such as a day, a week, or a month. For consistency, the categories were changed to time intervals related to a month: Once a month, A few times a month, Every day of the month. The same time intervals were also used to indicate how often participants would ask others to assist them with cell phone features.
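A small sketch of the recoding logic described above, with hypothetical column names: the monthly frequency categories are treated as an ordered scale, and any response other than Never can be collapsed back to Yes when a dichotomous indicator is needed.

```python
# Sketch of recoding frequency-of-use responses (hypothetical column names, not project code).
import pandas as pd

frequency_levels = ["Never", "Once a month", "A few times a month", "Every day of the month"]

df = pd.DataFrame({
    "take_photos": ["Never", "A few times a month", "Every day of the month"],
})

# Treat the categories as an ordered scale related to a month.
df["take_photos"] = pd.Categorical(df["take_photos"], categories=frequency_levels, ordered=True)

# Collapse to Yes/No: "Never" counts as No, any other frequency as Yes.
df["take_photos_yes_no"] = (df["take_photos"] != "Never").map({True: "Yes", False: "No"})
print(df)
```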

Perceived Knowledge and Skill The 17 items of perceived levels of knowledge and skill were used to determine the possible number of factors they contained. The applicable Eigenvalues indicated a possibility of three factors. The outcomes of the EFAs showed that two items either did not load significantly at all or cross-loaded significantly on the explored factors: “I require assistance to explore new features” and “I am not competent enough to use all my cell phone features”. These items were removed. As seen from the CFA results, the participants did not seem to distinguish between knowledge and skill. It was decided to change the format of the answers and provide three options in order to gather information on the two concepts combined: “Not at all”, “With difficulty”, and “With ease”. After each grouping of items, a question was added regarding the participant’s interest in learning more about the combination of features.

The best solution was to split the remaining 15 items into three factors for the new version of the questionnaire:

  • Basic competence (4 items: Can you: switch a cell phone on and off; make calls; operate the cell phone independently; and lock and unlock it). The question whether participants would like to learn more about the basic features was added;

  • Advanced competence (8 items: Can you: send messages; use advanced features, e.g. WhatsApp, Facebook; take photos; create new contacts; connect to the internet; explain different features to others; use almost all features; and use new features). The question whether participants would like to learn more about the advanced features was added; and

  • Data/airtime management competence (4 items: Can you: upload airtime; buy airtime using the cell phone; buy data using the cell phone; and check the airtime/data balance). The question whether participants would like to learn more about the data/airtime management features was likewise added.

Attitude This was measured with 13 items, which were used in an EFA to determine if there might be more than one factor present within the construct. Based on Eigenvalues higher than 1.00, three factors were possible. Two items did not load significantly onto any factor for any factor combination: “I see my cell phone as a dangerous gadget” and “I don’t like cell phones”, and they were removed from the revised questionnaire.

A three-factor solution was suggested by the EFA outcomes, which is also consistent with the theoretical three-component model of attitude (Matteson et al., 2016):

  • Affective component (How do you feel about cell phones?) (4 items: I like cell phones; I like to use a cell phone; my cell phone is easy to use; my cell phone is very important to me);

  • Cognitive component (How do you think about cell phones?) (5 items: A cell phone makes things easier; a cell phone is a wonderful instrument for communicating with people; a cell phone is helpful in reminding me of important things, e.g. appointments; I prefer less complex cell phones; I prefer pushbuttons, not touchscreens);

  • Behavioural component (Why do you use cell phones?) (3 items: A cell phone makes me more independent; a cell phone makes me feel competent; I learn new things on cell phones).

The results of the statistical analyses, transdisciplinary input, and consideration of relevant literature and theory informed the revision of the we-DELIVER questionnaire to develop AGeConnect. The changes made are presented in Table 5.6.

Table 5.6 Changes to the we-DELIVER questionnaire to inform the AGeConnect questionnaire

4 AGeConnect Questionnaire (Age-Inclusive eConnections Between Generations for Interventions and Cell Phone Technology)

Here we present the AGeConnect questionnaire (Roos et al., 2022). The online version (https://ageconnect.questionpro.com/) has self-directed instructions; for the MS Word version at the end of the chapter, we provide application guidelines below.

4.1 Structure of the AGeConnect Questionnaire

The structure in Fig. 5.3 presents the different sections of the questionnaire:

  • Biographical information: age, language, gender, place of residence, level of education;

  • Household structure: living arrangements;

  • Cell phone information, use and access: items related to access and ownership;

  • Cell phone user patterns: use of specific cell phone features, divided into four subsections: Basic, Intermediate, Advanced and data-dependent, and Imaging features;

  • Competence: divided into three subsections: Basic, Advanced, and Data/airtime management competence;

  • Attitude: divided into Affective component (What do you feel about cell phones?), Cognitive component (What do you think about cell phones?), and Behavioural component (Why do you use cell phones?);

  • Interpersonal contact using cell phones: items related to actions performed to make contact with, and be contacted by, other people;

  • An open-ended question at the end of the questionnaire asked older participants how they had experienced participating in a technology-based questionnaire about their cell phone use.

In constructing the online questionnaire, skip logic was used so that, based on their answers, participants could bypass irrelevant questions and save time. This process is illustrated in Fig. 5.4, where the question asks older participants how often they use a cell phone. If a participant selects the option “Never”, irrelevant follow-up questions are excluded, as illustrated in Fig. 5.5, which shows the back-end programming with the instruction to skip to the next relevant question.
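To illustrate the idea behind this branching, the following schematic sketch (with hypothetical question identifiers; it does not represent QuestionPro’s actual back-end) routes a participant who answers “Never” past the follow-up questions about feature use.

```python
# Schematic sketch of skip logic: hypothetical question ids, not QuestionPro code.
QUESTION_ORDER = [
    "Q_how_often_use",        # "How often do you use a cell phone?"
    "Q_basic_features",
    "Q_advanced_features",
    "Q_interpersonal_contact",
]

# (question id, answer) -> question id to jump to
SKIP_RULES = {
    ("Q_how_often_use", "Never"): "Q_interpersonal_contact",
}


def next_question(current_id: str, answer: str) -> str:
    """Return the id of the next question to display, applying any skip rule."""
    if (current_id, answer) in SKIP_RULES:
        return SKIP_RULES[(current_id, answer)]
    return QUESTION_ORDER[QUESTION_ORDER.index(current_id) + 1]


print(next_question("Q_how_often_use", "Never"))       # -> Q_interpersonal_contact
print(next_question("Q_how_often_use", "Every day"))   # -> Q_basic_features
```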

Fig. 5.3 Structure of the AGeConnect questionnaire

Fig. 5.4 Screenshot of a question showing various options

Fig. 5.5 Screenshot of back-end programming with the instruction to skip irrelevant options and move on to the next question

4.2 Guidelines for Using the AGeConnect Questionnaire

The purpose of the AGeConnect questionnaire was to gather information on how older persons use cell phones in relation to their close and distant social relationships, their care and social needs, and their perceived levels of competence in using basic and advanced cell phone features.

Items under specific headings can be revised to fit the specific context, such as:

  • Biographical information

    • Which language is predominantly spoken in your home?

    • What is the name of the place where you live?

    • What is your highest level of education?

  • Cell phone information, use and access

    • Who is your service provider?

    • How are the network services paid for?

The AGeConnect questionnaire was designed for digital completion by the participants themselves or with the assistance of trained (younger) fieldworkers. In the digital version (compiled on QuestionPro https://www.questionpro.com), when a question is answered, the applicable follow-up questions open and irrelevant questions are skipped.

Training for younger people on how to use the questionnaire should include ways to create an optimal interpersonal context before setting out to capture older participants’ responses on digital devices (see Chap. 7). Although the questionnaire was designed to be completed in a conversational manner, younger facilitators need consciously to refrain from using leading prompts. It is also recommended that younger people who administer the questionnaire should be familiar with the vernacular and sociocultural context of the older participants (see Chap. 3).

When collecting data, the younger facilitators should select only the relevant option and not offer all the possible answers provided for a particular question. For example, in response to item 3.6, “If the cell phone belongs to you, who chose it for you?”, the participant could respond “My friend”, which informs a follow-up question, such as: “Is your friend younger or older than you, or the same age as you?” Based on the answer, the person administering the questionnaire would then capture the relevant response. The final question, relating to how older participants experienced the data-collection session, yields descriptive qualitative data. The rationale for including this question was to allow for coding and for improving the items or the application process for future purposes. In addition, it was intended as a means of obtaining valuable insight into this age-inclusive manner of data collection.

The questionnaire may be used by any researcher interested in the fields of gerontology or the utilization of mobile technology. Built into its design is the potential to be revised to address related research questions in future. As such, the version of AGeConnect described in this chapter represents a continuing work in progress and should not be regarded as final.

5 Conclusion

Promoting age-integrated societies and communities effectively through technology depends on including people of all ages in age-appropriate and context-specific ways. Achieving this ideal calls for knowledge of older individuals’ cell phone use to enable inclusivity and, where relevant, for supportive facilitation by younger people who are familiar with the sociocultural contexts of the older persons. This approach not only yields useful data for developing technology artefacts or planning interventions, but also demonstrates technology use and, through facilitated intergenerational engagement in optimal interpersonal contexts, can help to secure older adults’ buy-in for using such technology in future.

This chapter ventured into the uncharted territory of self-designed questionnaire development to capture older individuals’ responses regarding their cell phone use in a context characterized by diversity. The longitudinal development of our data-collection tool is transparently reported, as we designed and revised our questionnaires to fit their specific purpose. The rigorous processes that we followed to ensure reliability and validity included statistical analyses, transdisciplinary input, consultation of recent literature reviews (including context-relevant qualitative studies), and the inclusion of items based on relevant theory. This part of the larger study sets the scene for using the first (to our knowledge) online, digital questionnaire for the South African context, with the aim of yielding much-needed quantifiable information about older individuals’ cell phone use as the basis for developing eInterventions. Finally, by investigating the psychometric properties of the AGeConnect (Version 3) questionnaire, we invite revisions to stay abreast of ever-evolving technology developments and to find creative and effective ways, for example through trained younger people who can offer supportive facilitation, to deal with the digital divide and to keep advancing older individuals’ inclusivity.