Journal of Business and Psychology, Volume 32, Issue 3, pp 283–300

The Knowledge, Skills, Abilities, and Other Characteristics Required for Face-to-Face Versus Computer-Mediated Communication: Similar or Distinct Constructs?

  • Julian Schulze
  • Martin Schultze
  • Stephen G. West
  • Stefan Krumm
Original Paper

Abstract

Purpose

This study investigated the convergence of knowledge, skills, abilities, and other characteristics (KSAOs) required for either face-to-face (FtF) or text-based computer-mediated (CM) communication, the latter being frequently mentioned as core twenty-first century competencies.

Design/Methodology/Approach

In a pilot study (n = 150, paired self- and peer reports), data were analyzed to develop a measurement model for the constructs of interest. In the main study, FtF and CM communication KSAOs were assessed via an online panel (n = 450, paired self- and peer reports). Correlated-trait-correlated-method minus one models were used to examine the convergence of FtF and CM communication KSAOs at the latent variable level. Finally, we applied structural equation modeling to examine the influence of communication KSAOs on communication outcomes within (e.g., CM KSAOs on CM outcomes) and across contexts (e.g., CM KSAOs on FtF outcomes).

Findings

Self-reported communication KSAOs showed only low to moderate convergence between FtF and CM contexts. Convergence was somewhat higher in peer reports, but still suggested that the contextualized KSAOs are separable. Communication KSAOs contributed significantly to communication outcomes; context-incongruent KSAOs explained less variance in outcomes than context-congruent KSAOs.

Implications

The results imply that FtF and CM communication KSAOs are distinct constructs, thus supporting the view of CM KSAOs as twenty-first century competencies in their own right rather than mere derivatives of FtF communication competencies.

Originality/Value

This study is the first to examine the convergence of context-specific communication KSAOs within a correlated-trait-correlated-method minus one framework using self- and peer reports.

Keywords

Computer-mediated communication · Face-to-face communication · Communication competence · KSAO · Correlated-trait-correlated-method minus one model [CT-C(M-1) model]

Introduction

The knowledge, skills, abilities, and other characteristics (KSAOs) required to communicate effectively are key to interpersonal and professional success (Payne 2005; Spitzberg and Cupach 1984). Communication and related KSAOs play a prominent role in several generic competency models (e.g., Bartram 2005; Stevens and Campion 1994). This emphasis on communication KSAOs in the workplace derives from the ubiquity of teams and other collaborative and interactive work forms (e.g., Gilson et al. 2015). Communicating effectively has been shown to predict important organizational outcomes including leadership, individual, and team performance (e.g., Aguado et al. 2014; De Vries et al. 2010; Hertel et al. 2006; Riggio and Taylor 2000; Young et al. 2000).

With the rise of technology and globalization, however, traditional forms of interpersonal communication are changing drastically. Virtual teamwork, dyadic collaboration around the globe, and the frequent use of communication technology, even among colocated co-workers, have brought additional challenges for employees in the twenty-first century (Gilson et al. 2015; Johnson et al. 2001). Several authors have emphasized the multiple challenges related to computer-mediated (CM) communication, such as achieving consensus with asynchronous media (e.g., Dennis et al. 2008), developing relationships and gathering contextual information in cue-deprived forms of communication (e.g., Mesmer-Magnus et al. 2011; Pauleen and Yoong 2001), and transferring complex information over restricted channels (e.g., Walther and Bazarova 2008).

A key question arises in light of these changing and challenging communication requirements: Do face-to-face (FtF) and CM communication require similar or different skills? This question can be operationalized as: Are FtF and CM communication KSAOs identical or distinct constructs? The current study addresses this question with the goal of contributing to (a) a more detailed understanding of context-specific and general aspects of communication KSAOs (cf. Keyton 2015), and (b) a deeper understanding of the assessment of communication KSAOs in different contexts.

FtF and CM Communication

As a result of globalization and technological developments, CM communication has become prevalent in today’s interactions in the workplace (e.g., Gilson et al. 2015). For the purpose of the current study, we define CM communication as “… any human symbolic text-based interaction conducted or facilitated through digitally-based technologies” (Spitzberg 2006, p. 630). Common text-based technologies include instant messaging, forums, chat, and email. In general, CM communication departs in various ways from FtF communication (e.g., less perceived naturalness; missing nonverbal communication cues) and therefore demands a higher cognitive effort from the interacting individuals (Kock 2004). We focus in the current study on communication through text-based media as this is one of the most frequently used and challenging forms of CM communication (Lenhart et al. 2005; Walther and Bazarova 2008).

Several theories in the realm of CM communication exist which focus on differences between FtF and CM communication. Media richness theory, for instance, orders communication along a richness continuum: FtF communication is highest in richness (potentially including body language and facial expressions), whereas text-based communication is much lower in the richness hierarchy (Daft and Lengel 1986; Maruping and Agarwal 2004). Media naturalness theory assumes that FtF communication is evolutionarily the oldest mode of interaction and is perceived as the most natural way to communicate, as it allows for conveying facial expressions, observing body language, and listening to speech (Kock 2004). Electronic propinquity theory, as another example, also recognizes the reduced bandwidth of CM communication and posits that these channel restrictions produce a lessened feeling of nearness that needs to be addressed by communicators (Korzenny 1978; Walther and Bazarova 2008). Lastly, social information processing theory also acknowledges the lack of nonverbal communication cues in CM communication (Walther 1992). But in contrast to classical approaches, the theory predicts that individuals can use proxies for missing social and nonverbal communication cues (e.g., in the form of smileys or punctuation). This view has been supported in recent studies that have analyzed how individuals adapt to text-based communication media by using unconventional orthography (e.g., “yeeees” as substitute for auditory information; Kalman and Gergle 2014) or nonstandard punctuation such as multiple exclamation marks (Vandergriff 2013). Thus, multiple theoretical perspectives and empirical studies agree that CM communication through text-based media differs from FtF communication and can be particularly challenging.

FtF and CM Communication KSAOs

Generally, competent communication can be broadly defined as “… a process through which interpersonal impressions are shaped and satisfactory outcomes are derived from an interaction” (Spitzberg and Hecht 1984, p. 576). Several models and conceptualizations of FtF communication KSAOs have been proposed (see Spitzberg 1988). We based the current study on the model introduced by Spitzberg (1983, 2006) for three reasons. (a) It has a strong conceptual basis (Spitzberg 1983, 1988, 2006). (b) Its structure has been supported in numerous studies (e.g., Spitzberg 2011; Spitzberg and Brunner 1991; Spitzberg and Hecht 1984). (c) It can be applied to both the FtF and the CM context (Spitzberg 2006).

Spitzberg’s (1983) communication KSAO model¹ consists of several components: motivation, knowledge, attentiveness, expressiveness, composure, and coordination. Spitzberg (2006, p. 637) offered a metaphor that aptly portrays the interplay and meaning of these constructs. “An actor needs to be motivated to give a good performance. Being motivated, however, is insufficient if the actor does not know the script which is to be enacted or the context in which the script is to be played out. Even motivation and knowledge are still insufficient unless actors have the acting skills requisite to translate their motivation and knowledge into competent action.” According to Spitzberg (1983, 2015), acting skills include attentiveness (i.e., showing interest and empathy in conversations), expressiveness (i.e., expression of emotions, use of gestures, use of humor), composure (i.e., displaying certainty and forceful expression of own opinions), and coordination (i.e., topic maintenance, follow-up comments). According to this model, this set of KSAOs predicts several communication outcomes: the attractiveness, appropriateness, effectiveness, satisfaction, and clarity of a conversation (Spitzberg 2006). The model is presented in Fig. 1.
Fig. 1

Spitzberg’s model of communication competence (adapted from Keyton 2015). Please note that we did not include coordination in the current study because items of this component could not be easily adapted to face-to-face and text-based computer-mediated contexts. Note: KSAOs knowledge, skills, abilities, and other characteristics

Spitzberg’s (2006) communication KSAO model proposes that CM and FtF communication are based on the same fundamental set of KSAOs. “… FtF and CMC interaction are more similar than they are different. Both can be explained by the same general model components, and, in most cases, the components of this model require only minor adaptation to the particular technological features of the context” (Spitzberg 2006, p. 652). Only one empirical study to date has investigated the similarity between FtF and CM communication KSAOs. Hwang (2011) examined the effect of FtF communication competence on mediated communication competence (i.e., mobile phone and instant messaging) using structural equation modeling and found only a moderate standardized regression coefficient of 0.29, thus suggesting there is a substantial distinction between FtF and CM communication KSAOs. Such findings are also consistent with research and theory in personality that maintains that individual differences in human behavior should be seen as being contextualized (Mischel 2009). Consistent behavioral patterns are most likely to be observed in functionally equivalent categories of situations (Mischel and Shoda 1995; see also Holtz et al. 2005; Lievens et al. 2008; Schmit et al. 1995 for empirical examples).

To date, only Hwang (2011) has directly compared FtF and CM communication KSAOs. No comparison has been made that examines the set of KSAOs specified by Spitzberg’s (1983) model. Based on (1) findings provided by Hwang (2011), (2) theory and evidence suggesting differences between the FtF and the CM communication contexts, and (3) research in personality psychology showing that the prediction of behavior should be contextualized, we posit that:

Hypothesis 1

FtF communication KSAOs and CM communication KSAOs are distinct constructs.

Further, we also assume that the predictive validity of communication KSAOs will improve when the context is taken into account. This assumption is supported by Shaffer and Postlethwaite’s (2012) meta-analysis on frame-of-reference testing; aligning a test’s frame of reference (e.g., “in written communication”) to the target (e.g., effectiveness of written communication) generally improved tests’ predictive validity. Within the domain of CM communication, Hertel et al. (2006) assessed traditional communication skills and used them to predict performance in virtual teams. Their results showed that traditional communication skills, which are generally predictive of traditional team performance (cf. Stevens and Campion 1994), were unrelated to virtual team performance. This reasoning leads to our second hypothesis:

Hypothesis 2

Context-congruent communication KSAOs will explain a larger amount of variance in communication outcomes than context-incongruent communication KSAOs.

Methods

Sample Characteristics

Self- and peer reports of FtF and CM communication KSAOs and of communication outcomes were collected through an online survey tool. Researchers have criticized self-reports of communication KSAOs for having little validity with regard to actual communication performance (McCroskey and McCroskey 1988). Therefore, peer reports were also gathered to cross-validate and increase the generality of the results. Moreover, peers might observe aspects of communication behavior that the targets themselves do not (Funder and West 1993). To assess the distinctiveness of FtF and CM communication across different types of raters, each participant (target) was instructed to invite a peer who would be capable of rating both the target’s FtF and CM communication KSAOs. A pilot study was conducted to develop appropriate measurement models for self- and peer reports (Study 1) before the distinctiveness of FtF and CM KSAOs and their predictive validity were assessed (Study 2).

Study 1 (Pilot)

From an initial sample of 87 respondents, a paired sample of 75 targets and peers (total n = 150) was obtained. Data were collected in January and February 2015. Participants in the pilot sample were recruited via university mailing lists and personal invitations. Targets were instructed to complete the online questionnaire and to recruit a peer who could rate the target’s communication KSAOs. Psychology students received course credit for participation and for recruiting a peer. Targets’ mean age was 33.64 years (SD 15.89); peers’ mean age was 34.60 years (SD 16.42). More women (targets: 77 %; peers: 55 %) than men participated. The majority of participants held a university degree (targets: 56 %; peers: 53 %). Targets and peers were asked to rate how well they knew the other person on a scale from 1 to 100 (with 100 indicating very good knowledge). This resulted in a mean rating of 84.72 (SD 14.86; median 90) among targets and of 85.72 (SD 21.79; median 92) among peers.

Study 2

From an initial sample of 275 respondents, recruited between May and September 2015, a paired sample of 225 targets and peers (total n = 450) was obtained. An online panel provider was asked to send out invitations for participation. Again, individuals were instructed to fill out the questionnaires and to invite a peer to also rate the target’s communication KSAOs. Targets’ mean age was 46.51 years (SD 15.19); peers’ mean age was 42.72 years (SD 15.46). More women (targets: 67 %; peers: 61 %) than men participated. Again, both groups reported high levels of education (targets and peers with a university degree: 54 and 50 %, respectively). The assessment of the acquaintance between target and peer revealed a mean of 85.26 (SD 14.38; median 90) for target reports and of 80.52 (SD 17.91; median 85) for peer reports.

Assessment Instruments and Procedure

The assessment instruments and procedure were identical for Studies 1 and 2. We gathered self- and peer reports of motivation, knowledge, attentiveness, expressiveness, and composure² as part of the communication KSAOs included in the questionnaire provided by Spitzberg (2006). Additionally, the five major outcome variables of attractiveness, appropriateness, effectiveness, satisfaction, and clarity were assessed. All communication KSAOs and outcomes were measured in both contexts (FtF and CM). The same items were used for both contexts but were adapted to fit the targeted context. For example, the original item “I am very motivated to use computers to communicate with others” was reformulated to fit either the FtF communication context (i.e., “I am very motivated to communicate with others face-to-face.”) or the CM communication context (i.e., “I am very motivated to communicate with others via digital media.”). Importantly, the same item stem (FtF or digital media) was used throughout the adaptation process. Participants were repeatedly instructed to interpret “digital media” as any human symbolic text-based medium (e.g., chat, email), as in the original questionnaire introduction by Spitzberg (2006). Finally, a five-item measure of general media usage (cf. Spitzberg 2006) was administered to targets only (e.g., “I am a heavy user of digital media for communication.”). Each item was rated on a five-point Likert scale ranging from one to five. A complete list of items is presented in the “Appendix” section. In addition to the FtF and CM versions of the communication KSAOs questionnaire and the general media usage items, we assessed age, gender, and education of the participants.

The set of questionnaires was administered in randomized order to avoid serial effects. Participants were informed that they would remain anonymous. Targets received automated feedback on their CM communication KSAOs at the completion of the study. Targets were asked to invite a peer, whom they believed was capable of rating both their FtF and CM communication KSAOs and communication outcomes, to complete the set of questionnaires as reported above.

Methods and Analysis Strategy

We used a multi-step procedure to test our hypotheses. In Study 1, we developed measurement models for communication KSAOs and outcomes. Confirmatory factor analyses (CFA) were conducted for each construct–rater combination separately. The measurement models for KSAOs and outcomes were specified in accordance with the theoretical framework of Spitzberg (2006) and are depicted in Fig. 2. Robust maximum likelihood fit statistics (estimator “MLR”) are reported because some variables were not normally distributed (West et al. 1995). The full-information maximum likelihood method (FIML; Enders 2010; Graham 2009) was chosen to account for partially missing data on the questionnaire items. Peers (<10 %) who provided demographics but no data on the questionnaire items were dropped from the analyses, as FIML only accounts for partially missing data (see Table 1 for the sample sizes for each of the models). Models were evaluated based on common fit criteria, specifically (a) the comparative fit index (CFI), (b) the root mean square error of approximation (RMSEA), and (c) the standardized root mean square residual (SRMR). Model fit was considered acceptable at CFI > .900 (preferably >.950), RMSEA < .10 (preferably <.05), and SRMR < .10 (Browne and Cudeck 1993; Hu and Bentler 1999; Kline 2005; MacCallum et al. 1996; West et al. 2012). The R statistical environment (version 3.2.5; R Core Team 2016) and the R package lavaan (version 0.5–19; Rosseel 2012) were used to fit all models reported in this article. A stepwise procedure was carried out when establishing the measurement models because parameter estimates change with each model modification (MacCallum 1986). Specifically, for each item, the mean factor loading across all four conditions was computed (FtF and CM communication KSAOs and outcomes as reported by the participants and their peers). Using an iterative procedure, the item with the lowest mean factor loading was excluded and the model was re-specified to check the improvement in fit across the four conditions. This procedure was carried out until adequate model fit in all four conditions was obtained (see the “Appendix” section for the list of items included in the final measurement models).
Fig. 2

Left side Initial confirmatory factor analysis models (upper part communication knowledge, skills, abilities, and other characteristics; lower part communication outcomes) with all indicators included. Right side Modified models with selected items. Factor loadings of the first indicators were set to 1. If only two indicators were available, factor loadings were set to equality to ensure identifiability of the model (see Eid et al. 2003)
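
To make the model-fitting step concrete, the following minimal lavaan sketch illustrates how one of the four measurement models (here, self-reported FtF KSAOs) could be specified and estimated. The item names, the number of indicators per factor, and the data frame name are hypothetical placeholders; the actual item sets are listed in the “Appendix” section.

```r
library(lavaan)  # version 0.5-19 was used for the analyses reported here

# Hypothetical item names; factors with only two indicators have both
# loadings fixed to 1 (first loading set to 1, loadings set to equality).
model_ftf_ksaos <- '
  motivation     =~ mot_ftf_1 + mot_ftf_2 + mot_ftf_3
  knowledge      =~ 1*kno_ftf_1 + 1*kno_ftf_2
  attentiveness  =~ 1*att_ftf_1 + 1*att_ftf_2
  expressiveness =~ 1*exp_ftf_1 + 1*exp_ftf_2
  composure      =~ com_ftf_1 + com_ftf_2 + com_ftf_3
'

fit_ftf_ksaos <- cfa(
  model_ftf_ksaos,
  data      = self_report_data,  # hypothetical data frame of target self-reports
  estimator = "MLR",             # robust maximum likelihood for non-normal items
  missing   = "fiml"             # FIML for partially missing questionnaire items
)

# Fit criteria used in the article: CFI > .90, RMSEA < .10, SRMR < .10
fitMeasures(fit_ftf_ksaos,
            c("chisq.scaled", "pvalue.scaled", "cfi.scaled", "rmsea.scaled", "srmr"))

# Iterative item exclusion: inspect the standardized loadings and drop the item
# with the lowest mean loading across the four rater-context conditions.
standardizedSolution(fit_ftf_ksaos)
```

The same syntax pattern, with the CM item set or the peer-report data, yields the remaining three measurement models.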

Table 1

Summary of fit statistics for the multi-step procedure

| Model | χ² | χ² p value | CFI | RMSEA | SRMR |
|---|---|---|---|---|---|
| Study 1: final modified CFA models | | | | | |
| Self-rating | | | | | |
| Model for FtF KSAOs (n = 75) | 87.754 | .007 | .920 | 0.083 | 0.088 |
| Model for CM KSAOs (n = 75) | 92.688 | .003 | .924 | 0.089 | 0.060 |
| Model for FtF outcomes (n = 75) | 94.974 | .002 | .904 | 0.092 | 0.073 |
| Model for CM outcomes (n = 75) | 101.420 | <.001 | .903 | 0.100 | 0.080 |
| Peer rating | | | | | |
| Model for FtF KSAOs (n = 67) | 91.777 | .003 | .923 | 0.093 | 0.084 |
| Model for CM KSAOs (n = 69) | 99.612 | .001 | .901 | 0.102 | 0.079 |
| Model for FtF outcomes (n = 67) | 84.670 | .013 | .940 | 0.083 | 0.050 |
| Model for CM outcomes (n = 69) | 99.530 | .001 | .887 | 0.102 | 0.069 |
| Study 2: final modified CFA models | | | | | |
| Self-rating | | | | | |
| Model for FtF KSAOs (n = 225) | 95.617 | .001 | .976 | 0.054 | 0.069 |
| Model for CM KSAOs (n = 225) | 90.566 | .004 | .973 | 0.050 | 0.050 |
| Model for FtF outcomes (n = 225) | 98.060 | .001 | .972 | 0.055 | 0.044 |
| Model for CM outcomes (n = 225) | 118.540 | <.001 | .945 | 0.068 | 0.059 |
| Peer rating | | | | | |
| Model for FtF KSAOs (n = 206) | 105.316 | <.001 | .958 | 0.063 | 0.065 |
| Model for CM KSAOs (n = 204) | 130.537 | <.001 | .938 | 0.078 | 0.047 |
| Model for FtF outcomes (n = 206) | 98.855 | .001 | .963 | 0.058 | 0.047 |
| Model for CM outcomes (n = 204) | 76.530 | .052 | .985 | 0.040 | 0.040 |
| Study 2: CT-C(M-1) modelsᵃ | | | | | |
| Self-rating | | | | | |
| Model for KSAOs (n = 225) | 382.203 | <.001 | .965 | 0.049 | 0.040 |
| Model for outcomes (n = 225) | 441.604 | <.001 | .944 | 0.059 | 0.046 |
| Peer rating | | | | | |
| Model for KSAOs (n = 208) | 446.671 | <.001 | .940 | 0.062 | 0.047 |
| Model for outcomes (n = 208) | 386.342 | <.001 | .957 | 0.051 | 0.043 |
| Study 2: structural equation models | | | | | |
| Self-rating | | | | | |
| FtF KSAOs on FtF outcomes (n = 225) | 376.482 | <.001 | .966 | 0.045 | 0.060 |
| FtF KSAOs on CM outcomes (n = 225) | 388.509 | <.001 | .957 | 0.047 | 0.055 |
| CM KSAOs on CM outcomes (n = 225) | 395.443 | <.001 | .950 | 0.048 | 0.052 |
| CM KSAOs on FtF outcomes (n = 225) | 365.191 | <.001 | .964 | 0.042 | 0.048 |
| Peer rating | | | | | |
| FtF KSAOs on FtF outcomes (n = 206) | 409.856 | <.001 | .946 | 0.053 | 0.058 |
| FtF KSAOs on CM outcomes (n = 208) | 376.866 | <.001 | .957 | 0.046 | 0.051 |
| CM KSAOs on CM outcomes (n = 204) | 437.590 | <.001 | .938 | 0.058 | 0.049 |
| CM KSAOs on FtF outcomes (n = 208) | 418.509 | <.001 | .938 | 0.054 | 0.052 |

CT-C(M-1) correlated-trait-correlated-method minus one model, CFA confirmatory factor analysis, FtF face to face, CM computer-mediated, KSAOs knowledge, skills, abilities, and other characteristics, CFI comparative fit index, RMSEA root mean square error of approximation, SRMR standardized root mean square residual

ᵃ Based on maximum likelihood (ML) instead of maximum likelihood robust (MLR) estimator for bootstrapped confidence intervals

In the first step of Study 2, the final measurement models obtained from the pilot study were replicated. In the next step, we established correlated-trait-correlated-method minus one models [CT-C(M-1); Eid 2000] to test Hypothesis 1.³ An example of such a model is depicted in Fig. 3. The basic idea is to set one method (i.e., one communication context) as a reference method that is then contrasted with the other assessment methods (here, the other communication context). In this study, FtF communication was set as the reference method, and each CM communication KSAO or outcome was then contrasted against the corresponding FtF communication KSAO or outcome. Although other model formulations would have been possible (e.g., a baseline trait-method unit model as proposed by Marsh and Hocevar 1988), we decided to use the CT-C(M-1) model for several reasons. First, this newer form of multitrait–multimethod analysis circumvents many problems of other MTMM models (e.g., identification and convergence problems; Eid et al. 2003). Second, it allows for the separation of trait and method variance; hence, the included factors are not a blend of trait and method (Geiser et al. 2008; Geiser et al. 2012). Third, it is possible to calculate a consistency coefficient that estimates the effect size of the convergence between the two contexts. In sum, four multiple-indicator CT-C(M-1) models (Eid et al. 2003) were specified: two models for communication KSAOs (for self- and peer ratings separately) and two models for communication outcomes (again, for self- and peer ratings separately). Each model contained five trait factors (i.e., FtF communication KSAOs or communication outcomes) and five corresponding CM-specific factors (i.e., residual factors that represent the deviations of each CM communication KSAO or outcome from the FtF context; see Fig. 3). A separate CM-specific factor must be established for each trait because the deviation of the CM context from the FtF context will vary from trait to trait. That is, we do not expect the deviation of CM communication from FtF communication to be the same, i.e., perfectly correlated, across all traits (e.g., motivation, knowledge, attentiveness; see Eid et al. 2003 for a similar model specification). Correlation coefficients between the CM-specific factors will be reported to investigate the adequacy of this assumption (i.e., that the correlations between CM-specific factors are lower than 1).
Fig. 3

Example correlated-trait-correlated-method minus one model (communication knowledge, skills, abilities, and other characteristics) with the FtF communication context as reference method. Factor loadings of the first indicator of each construct were set to 1. If only two indicators were available, factor loadings were set to equality to guarantee identifiability of the model (see Eid et al. 2003). Trait-method correlations and errors not depicted to avoid clutter. Note: FtF face to face, CM computer-mediated
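
A minimal sketch of how such a CT-C(M-1) model can be written in lavaan syntax is shown below, restricted to two of the five traits for brevity and using hypothetical item and data frame names; the orthogonality between each trait factor and its own CM-specific factor follows Eid et al. (2003).

```r
library(lavaan)

# FtF is the reference method: the trait factors are anchored in the FtF items,
# and the CM items load on both the trait factor and a CM-specific factor.
# Two-indicator factors have both loadings fixed to 1 (cf. the Fig. 3 caption).
model_ctcm1 <- '
  # Trait factors (reference method = FtF)
  T_motivation =~ 1*mot_ftf_1 + 1*mot_ftf_2 + mot_cm_1 + mot_cm_2
  T_knowledge  =~ 1*kno_ftf_1 + 1*kno_ftf_2 + kno_cm_1 + kno_cm_2

  # CM-specific (method) factors: residual deviations of the CM context
  M_motivation =~ 1*mot_cm_1 + 1*mot_cm_2
  M_knowledge  =~ 1*kno_cm_1 + 1*kno_cm_2

  # Each CM-specific factor is uncorrelated with its own trait factor
  T_motivation ~~ 0*M_motivation
  T_knowledge  ~~ 0*M_knowledge

  # Trait factors and CM-specific factors may correlate across constructs
  T_motivation ~~ T_knowledge
  M_motivation ~~ M_knowledge
'

fit_ctcm1 <- sem(model_ctcm1, data = self_report_data,  # hypothetical data frame
                 estimator = "ML", missing = "fiml")

# Consistency and method specificity are functions of the estimated loadings and
# factor variances (Eid et al. 2003); bootstrapLavaan() with a custom FUN can be
# used to obtain percentile confidence intervals for these derived coefficients.
```

In the full models, all five trait factors and all five CM-specific factors are specified in the same way.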

After model specification, the reliabilities of the aggregated scales (i.e., total scales) as well as the consistency and method specificity coefficients were calculated according to the formulas provided by Eid et al. (2003). The consistency coefficient for the true-score variables can be interpreted as a measure of the convergence between FtF and CM communication traits, whereas the method specificity reflects the variance not shared across contexts. Both coefficients add up to 1. A consistency coefficient above .50 implies that there is more shared than nonshared variance between the contexts and thus speaks to the similarity rather than the distinctiveness of KSAOs. Moreover, the square root of the consistency can be interpreted as the correlation between the true score of the CM communication context and the corresponding true score of the FtF communication trait (Eid et al. 2003). Thus, a consistency coefficient above .50 translates into a correlation of at least .70. These arguments lead us to consider a consistency coefficient below .50 as an indicator of the distinctiveness of the constructs and thus as a test of our first hypothesis. To strengthen the results of the analyses, 95 % confidence intervals around the point estimates of the consistency coefficients, based on 1000 bootstrap samples, were calculated. Confidence intervals that do not include the .50 cutoff value further speak to the distinctiveness of KSAOs across contexts.
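
In schematic form (our notation; see Eid et al. 2003 for the full formulas, which also cover aggregated scales), the decomposition underlying Tables 4 and 5 can be written for a CM indicator Y with trait loading λ_T on the FtF trait factor T, method loading λ_M on the CM-specific factor M, and measurement error E:

```latex
\begin{aligned}
Y &= \lambda_T\,T + \lambda_M\,M + E,\\
\mathrm{Rel}(Y) &= \frac{\lambda_T^{2}\operatorname{Var}(T)+\lambda_M^{2}\operatorname{Var}(M)}{\operatorname{Var}(Y)},\\
\mathrm{CO}_{\mathrm{obs}}(Y) &= \frac{\lambda_T^{2}\operatorname{Var}(T)}{\operatorname{Var}(Y)},\qquad
\mathrm{MS}_{\mathrm{obs}}(Y) = \frac{\lambda_M^{2}\operatorname{Var}(M)}{\operatorname{Var}(Y)},\\
\mathrm{CO}_{\mathrm{true}}(Y) &= \frac{\lambda_T^{2}\operatorname{Var}(T)}{\lambda_T^{2}\operatorname{Var}(T)+\lambda_M^{2}\operatorname{Var}(M)}
 = 1-\mathrm{MS}_{\mathrm{true}}(Y),\qquad
\rho\bigl(T,\tau_{\mathrm{CM}}\bigr) = \sqrt{\mathrm{CO}_{\mathrm{true}}(Y)}.
\end{aligned}
```

Here CO denotes consistency, MS method specificity, and τ_CM the true score of the CM variable. For example, the observed consistency and method specificity of self-reported CM motivation in Table 4 (.05 and .83) yield a true-score consistency of .05/(.05 + .83) ≈ .06, and a consistency of .50 would correspond to a latent correlation of √.50 ≈ .71.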

In the second step, structural equation modeling was used to study the influence of communication KSAOs on communication outcomes and to provide an answer to Hypothesis 2. The structural model for the analyses is depicted in Fig. 4 and portrays the potential influence of the five communication KSAOs on each of the five communication outcomes. Four models were specified for each rater: Two models examined the influence of communication KSAOs on communication outcomes in a context-congruent condition (e.g., FtF communication KSAOs on FtF communication outcomes) and two models in a context-incongruent condition (e.g., CM communication KSAOs on FtF communication outcomes). In accordance with Hypothesis 2, we assumed that the context-congruent approach would explain more variance in outcomes than the context-incongruent approach.
Fig. 4

Example structural equation model with communication knowledge, skills, abilities, and other characteristics as predictors of the communication outcomes. Only the structural part of the model and only some of the 25 regression paths are depicted to avoid clutter
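
For illustration, a hedged lavaan sketch of one of these models (CM KSAOs predicting CM outcomes in the self-report data) is given below; measurement_cm and self_report_data are placeholders for the measurement syntax and data frame from the CFA step.

```r
# 'measurement_cm' is assumed to hold the lavaan measurement syntax for the ten
# CM latent variables (five KSAOs, five outcomes), built as in the CFA sketch above.
model_cm_on_cm <- paste(measurement_cm, '
  # Structural part: each CM outcome is regressed on all five CM KSAOs (25 paths)
  attractiveness_cm  ~ motivation_cm + knowledge_cm + attentiveness_cm + expressiveness_cm + composure_cm
  appropriateness_cm ~ motivation_cm + knowledge_cm + attentiveness_cm + expressiveness_cm + composure_cm
  effectiveness_cm   ~ motivation_cm + knowledge_cm + attentiveness_cm + expressiveness_cm + composure_cm
  satisfaction_cm    ~ motivation_cm + knowledge_cm + attentiveness_cm + expressiveness_cm + composure_cm
  clarity_cm         ~ motivation_cm + knowledge_cm + attentiveness_cm + expressiveness_cm + composure_cm
')

fit_cm_on_cm <- sem(model_cm_on_cm, data = self_report_data,
                    estimator = "MLR", missing = "fiml")
inspect(fit_cm_on_cm, "r2")  # variance explained in each outcome (cf. Table 6)
```

The context-incongruent models are obtained by pairing the KSAO measurement syntax of one context with the outcome syntax of the other context.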

Results

Study 1

Measurement Models

The pilot sample was analyzed to develop measurement models for communication KSAOs and outcomes. None of the models initially fitted to the full set of items exhibited a satisfactory model fit (see left side of Fig. 2 for a representation of the initial measurement models). By iteratively excluding items with the lowest mean factor loading from the measurement model, acceptable fit in all conditions was obtained. Fit statistics are summarized in Table 1 (all CFI indices close to .900 or above,⁴ all RMSEA values <.10, all SRMR coefficients <.10). The final modified models are depicted in Fig. 2 (right side).

Study 2

Replication of Measurement Models

The measurement models developed in the pilot sample were replicated using the larger independent sample of Study 2. Fit statistics are shown in Table 1, and descriptive statistics are summarized in Table 2. All latent and manifest correlations of the final scales are presented in Table 3. Model fit was satisfactory for both contexts (FtF and CM) as well as for targets and peers (all CFI indices >.900, all RMSEA values <.10, all SRMR coefficients <.10).
Table 2

Descriptive statistics of the main measurement scales for both self- and peer reports

| | M (self) | SD (self) | α (self) | M (peer) | SD (peer) | α (peer) |
|---|---|---|---|---|---|---|
| KSAOs: FtF | | | | | | |
| Motivation | 4.12 | 0.83 | .91 | 4.13 | 0.87 | .88 |
| Knowledge | 4.20 | 0.81 | .84 | 4.42 | 0.68 | .80 |
| Attentiveness | 4.48 | 0.56 | .79 | 4.31 | 0.72 | .71 |
| Expressiveness | 4.05 | 0.82 | .83 | 4.25 | 0.79 | .81 |
| Composure | 3.72 | 0.84 | .91 | 4.02 | 0.76 | .87 |
| KSAOs: CM | | | | | | |
| Motivation | 3.56 | 0.93 | .88 | 3.74 | 0.89 | .88 |
| Knowledge | 4.15 | 0.74 | .83 | 4.35 | 0.74 | .86 |
| Attentiveness | 3.99 | 0.84 | .86 | 4.10 | 0.77 | .77 |
| Expressiveness | 3.85 | 0.74 | .75 | 4.08 | 0.85 | .84 |
| Composure | 3.68 | 0.66 | .85 | 3.80 | 0.68 | .84 |
| Outcomes: FtF | | | | | | |
| Attractiveness | 3.59 | 0.71 | .85 | 3.91 | 0.69 | .84 |
| Appropriateness | 3.76 | 0.85 | .76 | 3.55 | 1.03 | .81 |
| Effectiveness | 3.61 | 0.76 | .90 | 3.90 | 0.69 | .82 |
| Satisfaction | 3.93 | 0.84 | .92 | 3.82 | 0.84 | .85 |
| Clarity | 3.82 | 0.72 | .78 | 4.07 | 0.71 | .79 |
| Outcomes: CM | | | | | | |
| Attractiveness | 3.24 | 0.66 | .82 | 3.65 | 0.69 | .84 |
| Appropriateness | 4.20 | 0.77 | .81 | 3.97 | 0.93 | .88 |
| Effectiveness | 3.70 | 0.66 | .85 | 3.87 | 0.70 | .85 |
| Satisfaction | 3.87 | 0.71 | .87 | 3.88 | 0.71 | .86 |
| Clarity | 3.91 | 0.67 | .76 | 4.09 | 0.74 | .86 |
| General media usage | 3.36 | 0.93 | .82 | | | |

α = Cronbach’s alpha (unstandardized; Cronbach 1951)

FtF face to face, CM computer-mediated, KSAOs knowledge, skills, abilities, and other characteristics, M mean, SD standard deviation

Table 3

Latent and manifest correlations of communication KSAOs and communication outcomes

FtF KSAOs (left block: self-report, n = 225; right block: peer report, n = 206)

| | 1 | 2 | 3 | 4 | 5 | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 Motivation | | .66 | .47 | .54 | .56 | | .60 | .53 | .51 | .46 |
| 2 Knowledge | .76 | | .50 | .70 | .68 | .71 | | .50 | .62 | .58 |
| 3 Attentiveness | .53 | .58 | | .51 | .35 | .67 | .65 | | .52 | .19 |
| 4 Expressiveness | .62 | .82 | .60 | | .69 | .60 | .78 | .66 | | .61 |
| 5 Composure | .61 | .78 | .40 | .78 | | .53 | .70 | .25 | .72 | |

CM KSAOs (left block: self-report, n = 225; right block: peer report, n = 204)

| | 1 | 2 | 3 | 4 | 5 | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 Motivation | | .49 | .38 | .31 | .33 | | .45 | .27 | .19 | .28 |
| 2 Knowledge | .56 | | .37 | .33 | .35 | .48 | | .40 | .48 | .43 |
| 3 Attentiveness | .42 | .44 | | .51 | .33 | .33 | .50 | | .67 | .33 |
| 4 Expressiveness | .37 | .41 | .63 | | .45 | .20 | .57 | .81 | | .52 |
| 5 Composure | .38 | .39 | .35 | .53 | | .33 | .51 | .41 | .62 | |

FtF outcomes (left block: self-report, n = 225; right block: peer report, n = 206)

| | 6 | 7 | 8 | 9 | 10 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| 6 Attractiveness | | .01 | .49 | .57 | .42 | | .19 | .39 | .47 | .40 |
| 7 Appropriateness | .03 | | .07 | −.02 | .22 | .24 | | .15 | .09 | .16 |
| 8 Effectiveness | .56 | .10 | | .62 | .53 | .45 | .18 | | .30 | .48 |
| 9 Satisfaction | .60 | −.02 | .66 | | .50 | .48 | .09 | .37 | | .47 |
| 10 Clarity | .49 | .28 | .64 | .58 | | .49 | .21 | .59 | .54 | |

CM outcomes (left block: self-report, n = 225; right block: peer report, n = 204)

| | 6 | 7 | 8 | 9 | 10 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| 6 Attractiveness | | .08 | .29 | .46 | .26 | | .16 | .41 | .50 | .38 |
| 7 Appropriateness | .13 | | .17 | .28 | .24 | .20 | | .06 | .07 | .21 |
| 8 Effectiveness | .39 | .20 | | .52 | .52 | .50 | .07 | | .50 | .44 |
| 9 Satisfaction | .53 | .32 | .59 | | .53 | .56 | .08 | .55 | | .43 |
| 10 Clarity | .36 | .28 | .67 | .60 | | .51 | .25 | .51 | .48 | |

Correlations within context (FtF or CM) and within rater (self or peer). Manifest correlations in italic

FtF face to face, CM computer-mediated, KSAOs knowledge, skills, abilities, and other characteristics

CT-C(M-1) Model Specification and Analyses of Convergence (Hypothesis 1)

The four specified CT-C(M-1) models (see Fig. 3 for an example model) showed an adequate model fit, for both self- and peer reports (see Table 1 for a summary of fit statistics). The indicator-specific reliabilities, consistency coefficients, and method specificity coefficients are presented in Tables 4 (targets) and 5 (peers).
Table 4

Variance components in the CT-C(M-1) model (self-rating)

| Trait | Reliability | Consistency (observed) | Method specificity (observed) | Consistency (true score; confidence interval) | Method specificity (true score) | Latent correlation |
|---|---|---|---|---|---|---|
| KSAOs | | | | | | |
| Motivation FtF | .91 | .91 | | 1.00 | | |
| Motivation CM | .89 | .05 | .83 | .06 (.01; .15) | .94 | .24 |
| Knowledge FtF | .85 | .85 | | 1.00 | | |
| Knowledge CM | .82 | .05 | .77 | .06 (.01; .17) | .94 | .25 |
| Attentiveness FtF | .83 | .83 | | 1.00 | | |
| Attentiveness CM | .86 | .12 | .74 | .14 (.04; .29) | .86 | .38 |
| Expressiveness FtF | .84 | .84 | | 1.00 | | |
| Expressiveness CM | .75 | .04 | .72 | .05 (.00; .13) | .95 | .22 |
| Composure FtF | .91 | .91 | | 1.00 | | |
| Composure CM | .86 | .09 | .76 | .11 (.03; .22) | .89 | .33 |
| Outcomes | | | | | | |
| Attractiveness FtF | .86 | .86 | | 1.00 | | |
| Attractiveness CM | .83 | .15 | .68 | .19 (.07; .34) | .81 | .43 |
| Appropriateness FtF | .77 | .77 | | 1.00 | | |
| Appropriateness CM | .81 | .23 | .59 | .28 (.10; .45) | .72 | .53 |
| Effectiveness FtF | .90 | .90 | | 1.00 | | |
| Effectiveness CM | .85 | .09 | .76 | .11 (.03; .23) | .89 | .33 |
| Satisfaction FtF | .92 | .92 | | 1.00 | | |
| Satisfaction CM | .87 | .01 | .86 | .01 (.00; .08) | .99 | .11 |
| Clarity FtF | .79 | .79 | | 1.00 | | |
| Clarity CM | .76 | .20 | .56 | .26 (.11; .44) | .74 | .51 |

CT-C(M-1) correlated-trait-correlated-method minus one model, FtF face to face, CM computer-mediated, KSAOs knowledge, skills, abilities, and other characteristics

Table 5

Variance components in the CT-C(M-1) model (peer rating)

| Trait | Reliability | Consistency (observed) | Method specificity (observed) | Consistency (true score; confidence interval) | Method specificity (true score) | Latent correlation |
|---|---|---|---|---|---|---|
| KSAOs | | | | | | |
| Motivation FtF | .88 | .88 | | 1.00 | | |
| Motivation CM | .88 | .00 | .88 | .00 (.00; .03) | 1.00 | .02 |
| Knowledge FtF | .82 | .82 | | 1.00 | | |
| Knowledge CM | .86 | .04 | .82 | .05 (.00; .14) | .95 | .22 |
| Attentiveness FtF | .78 | .78 | | 1.00 | | |
| Attentiveness CM | .79 | .36 | .43 | .46 (.25; .69) | .54 | .68 |
| Expressiveness FtF | .82 | .82 | | 1.00 | | |
| Expressiveness CM | .84 | .37 | .46 | .45 (.28; .63) | .55 | .67 |
| Composure FtF | .87 | .87 | | 1.00 | | |
| Composure CM | .85 | .30 | .55 | .35 (.20; .52) | .65 | .59 |
| Outcomes | | | | | | |
| Attractiveness FtF | .85 | .85 | | 1.00 | | |
| Attractiveness CM | .85 | .30 | .54 | .36 (.20; .53) | .64 | .60 |
| Appropriateness FtF | .81 | .81 | | 1.00 | | |
| Appropriateness CM | .88 | .46 | .42 | .52 (.32; .70) | .48 | .72 |
| Effectiveness FtF | .82 | .82 | | 1.00 | | |
| Effectiveness CM | .85 | .30 | .56 | .35 (.18; .54) | .65 | .59 |
| Satisfaction FtF | .90 | .90 | | 1.00 | | |
| Satisfaction CM | .87 | .23 | .64 | .26 (.13; .43) | .74 | .51 |
| Clarity FtF | .79 | .79 | | 1.00 | | |
| Clarity CM | .86 | .39 | .46 | .46 (.28; .63) | .54 | .68 |

CT-C(M-1) correlated-trait-correlated-method minus one model, FtF face to face, CM computer-mediated, KSAOs knowledge, skills, abilities, and other characteristics

For self-reports, reliabilities of the communication KSAOs on the latent level ranged from .75 to .91 and were therefore suitable for further investigation (Eid et al. 2003). The latent correlations between CM-specific factors ranged from .38 to .61 and therefore support the assumption of CM trait-specific deviations from the FtF context. Consistency coefficients of the true-score variables ranged from .05 (convergence of FtF and CM expressiveness) to .14 (convergence of FtF and CM attentiveness), the latter corresponding to a .38 correlation between the latent variables. Importantly, none of the confidence intervals around the point estimate of consistency included the .50 cutoff value. These results speak to the distinctiveness of the KSAOs and thus lend support to Hypothesis 1.

For peer reports, reliability coefficients for KSAOs reported by the peers varied from .78 to .88. The latent correlations between CM-specific factors ranged from .26 to .80, again speaking to the adequacy of modeling CM-specific factors separately. The consistency coefficients ranged from .00 (convergence of FtF and CM motivation) to .46 (convergence of FtF and CM attentiveness), the latter corresponding to a correlation of .68 between the latent variables. Thus, none of the point estimates of the consistency coefficient exceeded the cutoff value of .50. Peer reports showed higher convergence than self-report data with regard to attentiveness, expressiveness, and composure; the confidence intervals for the consistency coefficients included the cutoff value of .50 for these constructs. Thus, these results still provide support for the hypothesis that the KSAOs are distinguishable, but not as clearly as indicated by self-report data (Hypothesis 1).

With regard to the communication outcomes, reliabilities varied from .76 to .92 (self-reports) and from .79 to .90 (peer reports). The latent correlations between CM-specific factors ranged from −.05 to .66 (self-reports) and from .00 to .98 (peer reports). For self-reports, the consistency coefficients of the true-score variables ranged from .01 (convergence of FtF and CM satisfaction) to .28 (convergence of FtF and CM appropriateness), and none of the confidence intervals included the .50 cutoff value. For peer reports, consistency coefficients varied from .26 (convergence of FtF and CM satisfaction) to .52 (convergence of FtF and CM appropriateness). The point estimate for the consistency coefficient of appropriateness across FtF and CM contexts was the only coefficient that exceeded the cutoff, and, with the exception of the satisfaction scales, the confidence intervals included the .50 cutoff value. Thus, again, these results support the differentiation of communication outcomes across contexts, but less clearly than the self-report data.

Influence of Communication KSAOs on Communication Outcomes (Hypothesis 2)

The results of the structural equation models relating communication KSAOs to communication outcomes are presented in Table 6. All eight structural equation models showed satisfactory model fit (see Table 1 for a summary of fit statistics). Overall, each communication component made at least one significant contribution to a communication outcome, thus highlighting the general usefulness of the different facets of communication KSAOs and of the overall theoretical model. Importantly, context-congruent KSAOs had, on average, a higher predictive value for communication outcomes (R² ranging from .10 to .72, mean = .46) than context-incongruent KSAOs (R² ranging from .09 to .32, mean = .19), thus supporting Hypothesis 2.
Table 6

Influence of communication KSAOs on communication outcomes for self- and peer reports in either the context-congruent or the context-incongruent condition

| | FtF: Attr | FtF: Appr | FtF: Eff | FtF: Sat | FtF: Cla | CM: Attr | CM: Appr | CM: Eff | CM: Sat | CM: Cla |
|---|---|---|---|---|---|---|---|---|---|---|
| Self-report: FtF KSAOs | | | | | | | | | | |
| Motivation | −0.09 | −0.25 | −0.01 | 0.35 | 0.25 | 0.35 | −0.09 | 0.40 | 0.28 | 0.45 |
| Knowledge | 0.28 | 0.38 | 0.01 | 0.11 | 0.20 | 0.28 | 0.33 | −0.11 | −0.02 | 0.03 |
| Attentiveness | 0.20 | 0.17 | 0.07 | −0.03 | 0.20 | 0.32 | 0.28 | 0.29 | 0.36 | 0.35 |
| Expressiveness | −0.01 | −0.05 | 0.02 | 0.31 | 0.15 | −0.07 | −0.32 | −0.12 | −0.24 | 0.03 |
| Composure | 0.37 | −0.34 | 0.67 | 0.22 | 0.43 | 0.01 | −0.05 | 0.56 | 0.30 | 0.18 |
| R² | .43 | .10 | .52 | .72 | .48 | .12 | .09 | .19 | .10 | .15 |
| Self-report: CM KSAOs | | | | | | | | | | |
| Motivation | 0.29 | 0.14 | −0.15 | 0.42 | 0.39 | 0.10 | 0.18 | 0.22 | 0.39 | 0.02 |
| Knowledge | 0.37 | −0.12 | 0.15 | 0.45 | 0.33 | 0.12 | −0.06 | 0.07 | 0.17 | −0.03 |
| Attentiveness | −0.07 | 0.34 | −0.04 | 0.02 | 0.16 | 0.13 | 0.30 | 0.04 | 0.30 | 0.38 |
| Expressiveness | −0.01 | 0.08 | −0.09 | −0.06 | 0.13 | 0.29 | −0.13 | 0.20 | 0.07 | 0.33 |
| Composure | 0.22 | −0.11 | 0.39 | 0.13 | 0.20 | 0.13 | −0.11 | 0.29 | 0.16 | 0.20 |
| R² | .15 | .15 | .14 | .18 | .25 | .35 | .10 | .40 | .69 | .57 |
| Peer report: FtF KSAOs | | | | | | | | | | |
| Motivation | −0.18 | −0.29 | −0.20 | 0.58 | −0.07 | −0.14 | 0.05 | −0.22 | −0.08 | −0.26 |
| Knowledge | 0.36 | 0.02 | 0.24 | 0.34 | 0.21 | 0.01 | −0.01 | −0.10 | 0.01 | 0.14 |
| Attentiveness | 0.65 | 0.94 | 0.24 | −0.07 | 0.36 | 0.35 | 0.62 | 0.29 | 0.47 | 0.44 |
| Expressiveness | −0.08 | −0.39 | −0.07 | −0.06 | 0.08 | 0.38 | −0.43 | −0.04 | −0.38 | 0.05 |
| Composure | −0.04 | 0.01 | 0.57 | −0.04 | 0.37 | −0.17 | 0.02 | 0.52 | 0.49 | 0.27 |
| R² | .52 | .41 | .47 | .56 | .61 | .27 | .23 | .17 | .17 | .32 |
| Peer report: CM KSAOs | | | | | | | | | | |
| Motivation | −0.13 | −0.07 | 0.29 | −0.02 | −0.19 | 0.10 | −0.24 | 0.04 | 0.34 | −0.06 |
| Knowledge | 0.14 | 0.04 | 0.16 | −0.02 | 0.05 | 0.16 | −0.09 | 0.30 | 0.15 | −0.04 |
| Attentiveness | 0.52 | 0.63 | 0.16 | 0.20 | 0.11 | 0.36 | 1.04 | 0.20 | −0.02 | 0.15 |
| Expressiveness | −0.21 | −0.30 | −0.18 | 0.03 | 0.26 | 0.21 | −0.67 | −0.12 | 0.23 | 0.46 |
| Composure | 0.07 | −0.09 | 0.52 | 0.20 | 0.23 | 0.00 | 0.05 | 0.41 | 0.22 | 0.23 |
| R² | .18 | .17 | .27 | .11 | .28 | .45 | .36 | .46 | .48 | .51 |

Significant predictors (standardized; p < .05) of the communication outcomes are depicted in bold

R² is the variance explained by all five included KSAOs as predictors in the structural equation models

FtF face to face, CM computer-mediated, KSAOs knowledge, skills, abilities, and other characteristics, Attr attractiveness, Appr appropriateness, Eff effectiveness, Sat satisfaction, Cla clarity

Ancillary Analyses

We also conducted cross-rater analyses to examine the extent to which self- and peer-reported KSAOs predict outcome variables rated by the other source (e.g., CM KSAOs rated by targets predicting CM outcomes rated by peers). All eight structural equation models provided acceptable model fit (lowest CFI = .944; highest RMSEA = .050; highest SRMR = .055). The explained variance in the outcomes was generally low to moderate. More precisely, self-reported KSAOs as predictors of peer-reported outcomes in the context-congruent condition yielded R²s ranging from .04 to .20 (mean = .10). The same was true for peer-reported KSAOs predicting self-reported outcomes (R²s ranging from .03 to .10; mean = .07). A similar picture emerged for context-incongruent KSAOs and outcomes (e.g., CM KSAOs predicting FtF outcomes). In this case, self-reported KSAOs predicting peer-reported outcomes yielded R²s ranging from .02 to .16 (mean = .08). Similarly, peer-reported KSAOs explained variance in self-reported outcomes ranging from R² = .02 to .10 (mean = .06). These results speak to the different views of self- and other raters and underline the necessity of our approach of examining the convergence of FtF and CM communication in both self- and other reports.

In an additional ancillary analysis, we used the targets’ general media usage as the criterion and the KSAOs (reported by targets and peers) as predictors. All four models (CM and FtF KSAOs, targets and peers) yielded acceptable model fit (lowest CFI = .938; highest RMSEA = .062; highest SRMR = .059). For targets and peers, CM communication KSAOs explained a substantial amount of variance in the criterion (self-report: R² = .49; peer report: R² = .18). The amount of explained variance dropped markedly when FtF communication KSAOs were used to predict general media usage (self-report: R² = .07; peer report: R² = .04). Thus, these analyses provide further evidence (in addition to that reported by Spitzberg 2011) for the validity of the instruments assessing CM communication KSAOs.

Discussion

The current study examined whether FtF and CM communication KSAOs were distinct constructs. In line with our theorizing, analyses using CT-C(M-1) models suggested that FtF and CM communication KSAOs are more distinct than they are similar. Also congruent with our hypothesis, results obtained from structural equation modeling showed that context-congruent KSAOs were more predictive of communication outcomes than their context-incongruent counterparts, for self- and peer reports alike.

The most important finding of the present study emerged from the inspection of the convergence between the FtF and CM communication KSAOs and the outcome variables in the CT-C(M-1) models. Convergence between the traits was low to moderate, pointing to the distinctiveness of the constructs (Hypothesis 1). There was considerable nonshared variance between the contexts, as indicated by the high method specificity coefficients. These findings are in line with previous research (Hertel et al. 2006; Hwang 2011). The measures were highly reliable at the latent level; therefore, attenuation due to low reliability does not explain the low to moderate convergence. The current study is also the first to administer the same assessment of communication KSAOs framed in two different contexts (FtF and CM), ruling out the alternative explanation of insufficient mapping between the FtF and CM communication KSAO assessments.

The structural equation models provided support for Spitzberg’s (1983, 2006) theoretical model, which assumes a set of KSAOs to be important predictors of communication outcomes. Each component made a significant contribution to the explained variance in self- and peer-reported communication outcomes. Importantly, and in line with results from the frame-of-reference testing literature (e.g., Schmit et al. 1995), context-incongruent KSAOs (e.g., FtF KSAOs as predictors of CM communication outcomes) on average explained less variance in the outcomes than their congruent counterparts, again supporting the distinctiveness of the constructs (Hypothesis 2).

These results have several theoretical implications. The current study suggests that communication constructs may increase or decrease in concurrent validity depending on the context in which they are assessed. Being expressive or showing composure may have a different meaning depending on the communication medium. To date, a comprehensive theory of the interaction between different media characteristics (e.g., richness, synchronicity; Daft and Lengel 1986; Dennis et al. 2008) and communication KSAOs is not available. Our results are consistent with considering communication KSAOs as constituting a factorial design in which KSAOs are crossed with (and may interact with) media characteristics. The observed differences in the variability of consistency coefficients across constructs indicate that further theory development is needed concerning the moderators of consistency. Currently, little theory is available to explain why, for example, motivation is less consistent from the FtF to the CM context than attentiveness. Our results also highlight the necessity of considering the context in predicting performance in digital or virtual environments. Interactionist models of personality (e.g., Mischel and Shoda 1995) posit that stable patterns of behavior can only be expected across functionally equivalent classes of situations. If digital contexts cannot be considered functionally equivalent to FtF contexts, prediction in the CM context with instruments assessing the FtF context will be attenuated.

The current study has two main practical implications. First, the low to moderate convergence between the communication contexts suggests that researchers and practitioners need to match the communication competence assessment to the context of interaction. This context matching will help to make better predictions with regard to important communication outcomes. In an even more practical sense, personnel selection will also benefit from an adequate competence–context match. If the predominant mode of interaction is known (e.g., working in a virtual team that is highly geographically dispersed and therefore dependent on CM communication), managers should assess communication skills in accordance with this context.

A second implication of the results is that new assessment tools need to be developed that are contextualized with regard to the mode of interaction (FtF and CM). It is very likely that a more differentiated view of media skills will be of value (see, for an example, the mobile communication competence questionnaire by Bakke 2010).

Limitations and Future Directions

We note some limitations of the present study that also point to future directions for research. Although a contextual variable was included in the research design, namely the mode of conversation (FtF vs. CM communication), different conversational environments and media devices were not separated in the questionnaire data. Future studies could investigate whether the correlational structure found in the present analysis is stable across even more specific conversational contexts (e.g., diverse specific working environments). Future studies might also consider different communication motives as moderators of the convergence of FtF and CM communication KSAOs (cf. Westmyer et al. 1998). Likewise, different CM modes were not distinguished in our study, which was limited to text-based interactions (i.e., email, chat, forum, as suggested by Spitzberg 2006). However, some authors have recently begun to tailor CM competencies to specific media devices (e.g., mobile communication competence; Bakke 2010). Such a fine-grained, nuanced view has the potential to add to our understanding of the generalizable and context-specific aspects of the relationship between FtF and CM communication KSAOs.

Second, although we included peer reports in the research design to account for possible self-report biases (McCroskey and McCroskey 1988), both self- and peer reports can be a source of error (e.g., halo effects; Hoyt 2000). As our cross-rater analyses showed, the variance in other-rated communication outcomes explained by communication KSAOs was generally low to moderate. This is in line with several meta-analyses that reported low to moderate convergence between self-reports and actual performance (Freund and Kasten 2012; Mabe and West 1982; Zell and Krizan 2014). It is also in line with previous reports of low correlations between self-reported characteristics and other-reported performance (e.g., from .03 to .15 between self-reported personality and other-rated job performance; Barrick et al. 2001). Thus, while it is an advantage of the current study to examine the convergence of FtF and CM communication in both self- and peer ratings, we acknowledge that future studies would benefit from additionally including more objective measures of communication KSAOs, such as work samples (Roth et al. 2005) and behavioral observations (Rubin 1982, 1985). These alternative measures could then be included as additional methods and contrasted against self- and peer ratings to inspect the convergence of methods. This would further enrich the information provided by our CT-C(M-1) models and allow for further examination of the predictive validity of the questionnaires, which is clearly needed.

Third, the questionnaires were administered through an online platform. The results reported here might therefore not generalize to people with very low CM communication skills, as it is questionable whether they would have participated via a computer-mediated device (see Buchanan 2002 for a review of the advantages and disadvantages of online survey research). Future studies could contrast online and paper-and-pencil questionnaire data within the CT-C(M-1) model framework and test the convergence of results across both conditions.

Fourth, many of the scales comprised only two items. Although our modified item structure was replicated in an independent sample across both self- and peer reports with adequate reliability at the latent level (i.e., measurement-error-free reliability coefficients), more than two items per scale are clearly desirable, as this increases construct validity and reliability (Eisinga et al. 2013). Future research should ideally attempt to develop additional items for these scales.

Conclusion

The current study examined the convergence of FtF and CM communication KSAOs. Results indicate that FtF and CM KSAOs (and outcomes) are more distinct than they are similar and should not be treated as a single common construct.

Footnotes

  1.

    Spitzberg refers to his model as “communication competence model.” However, this model also includes constructs which researchers may not necessarily consider as “competencies” (e.g., motivation). Although several umbrella terms may apply, we prefer KSAOs as a very broad term including other characteristics (such as motivation).

  2.

    Not all items of the original Spitzberg (2006) questionnaire could be included because of respondent burden and the inapplicability of some of the items for both communication contexts. For example, the coordination item “I am skilled at prioritizing (triaging) my email traffic” has no clear counterpart for FtF communication.

  3.

    We also tested a series of competing models: (a) a general factor model linking all KSAO variables to one factor, (b) a two-factor model consisting of a FtF and a CM context factor, and (c) a five-factor model consisting of separate factors for each type of KSAO, but without separating the contexts. None of these models achieved acceptable model fit in any condition. Details about these analyses can be obtained from the first author upon request.

  4.

    The model for CM communication outcomes (peer reports) exhibited a fit slightly below the CFI cutoff (CFI = .887), which was considered acceptable for this initial phase.


Acknowledgments

We would like to thank Michael Eid for his valuable advice on interpreting the CT-C(M-1) models and Manuel Trumpfheller for his help in collecting the data.

Supplementary material

Supplementary material 1: 10869_2016_9465_MOESM1_ESM.xlsx (XLSX 83.4 kb)

References

  1. Aguado, D., Rico, R., Sánchez-Manzanares, M., & Salas, E. (2014). Teamwork competency test (TWCT): A step forward on measuring teamwork competencies. Group Dynamics: Theory, Research, and Practice, 18(2), 101–121. doi:10.1037/a0036098.
  2. Bakke, E. (2010). A model and measure of mobile communication competence. Human Communication Research, 36(3), 348–371. doi:10.1111/j.1468-2958.2010.01379.x.
  3. Barrick, M. R., Mount, M. K., & Judge, T. A. (2001). Personality and performance at the beginning of the new millennium: What do we know and where do we go next? International Journal of Selection and Assessment, 9, 9–30. doi:10.1111/1468-2389.00160.
  4. Bartram, D. (2005). The great eight competencies: A criterion-centric approach to validation. Journal of Applied Psychology, 90(6), 1185–1203. doi:10.1037/0021-9010.90.6.1185.
  5. Browne, M. W., & Cudeck, R. (1993). Alternative ways of assessing model fit. In K. A. Bollen & J. S. Long (Eds.), Testing structural equation models (pp. 136–162). Newbury Park, CA: Sage.
  6. Buchanan, T. (2002). Online assessment: Desirable or dangerous? Professional Psychology: Research and Practice, 33(2), 148–154. doi:10.1037/0735-7028.33.2.148.
  7. Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297–334. doi:10.1007/BF02310555.
  8. Daft, R. L., & Lengel, R. H. (1986). Organizational information requirements, media richness and structural design. Management Science, 32(5), 554–571. doi:10.1287/mnsc.32.5.554.
  9. De Vries, R. E., Bakker-Pieper, A., & Oostenveld, W. (2010). Leadership = communication? The relations of leaders’ communication styles with leadership styles, knowledge sharing and leadership outcomes. Journal of Business and Psychology, 25(3), 367–380. doi:10.1007/s10869-009-9140-2.
  10. Dennis, A. R., Fuller, R. M., & Valacich, J. S. (2008). Media, tasks, and communication processes: A theory of media synchronicity. MIS Quarterly, 32(3), 575–600.
  11. Eid, M. (2000). A multitrait-multimethod model with minimal assumptions. Psychometrika, 65(2), 241–261. doi:10.1007/BF02294377.
  12. Eid, M., Lischetzke, T., Nussbeck, F. W., & Trierweiler, L. I. (2003). Separating trait effects from trait-specific method effects in multitrait-multimethod models: A multiple-indicator CT-C(M-1) model. Psychological Methods, 8(1), 38–60. doi:10.1037/1082-989X.8.1.38.
  13. Eisinga, R., Grotenhuis, M. T., & Pelzer, B. (2013). The reliability of a two-item scale: Pearson, Cronbach, or Spearman-Brown? International Journal of Public Health, 58, 637–642. doi:10.1007/s00038-012-0416-3.
  14. Enders, C. (2010). Applied missing data analysis. New York, NY: Guilford.
  15. Freund, P. A., & Kasten, N. (2012). How smart do you think you are? A meta-analysis on the validity of self-estimates of cognitive ability. Psychological Bulletin, 138(2), 296–321. doi:10.1037/a0026556.
  16. Funder, D. C., & West, S. G. (1993). Consensus, self-other agreement, and accuracy in personality judgment: An introduction. Journal of Personality, 61(4), 457–476. doi:10.1111/j.1467-6494.1993.tb00778.x.
  17. Geiser, C., Eid, M., & Nussbeck, F. W. (2008). On the meaning of the latent variables in the CT-C(M-1) model: A comment on Maydeu-Olivares and Coffman (2006). Psychological Methods, 13(1), 49–57. doi:10.1037/1082-989X.13.1.49.
  18. Geiser, C., Eid, M., West, S. G., Lischetzke, T., & Nussbeck, F. W. (2012). A comparison of method effects in two confirmatory factor models for structurally different methods. Structural Equation Modeling: A Multidisciplinary Journal, 19(3), 409–436. doi:10.1080/10705511.2012.687658.
  19. Gilson, L. L., Maynard, M. T., Young, N. C. J., Vartiainen, M., & Hakonen, M. (2015). Virtual teams research: 10 years, 10 themes, and 10 opportunities. Journal of Management, 41(5), 1313–1337. doi:10.1177/0149206314559946.
  20. Graham, J. W. (2009). Missing data analysis: Making it work in the real world. Annual Review of Psychology, 60, 549–576. doi:10.1146/annurev.psych.58.110405.085530.
  21. Hertel, G., Konradt, U., & Voss, K. (2006). Competencies for virtual teamwork: Development and validation of a web-based selection tool for members of distributed teams. European Journal of Work and Organizational Psychology, 15(4), 477–504. doi:10.1080/13594320600908187.
  22. Holtz, B. C., Ployhart, R. E., & Dominguez, A. (2005). Testing the rules of justice: The effects of frame-of-reference and pre-test validity information on personality test responses and test perceptions. International Journal of Selection and Assessment, 13(1), 75–86. doi:10.1111/j.0965-075X.2005.00301.x.
  23. Hoyt, W. T. (2000). Rater bias in psychological research: When is it a problem and what can we do about it? Psychological Methods, 5(1), 64–86. doi:10.1037/1082-989X.5.1.64.
  24. Hu, L. T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1–55. doi:10.1080/10705519909540118.
  25. Hwang, Y. (2011). Is communication competence still good for interpersonal media?: Mobile phone and instant messenger. Computers in Human Behavior, 27(2), 924–934. doi:10.1016/j.chb.2010.11.018.
  26. Johnson, P., Heimann, V., & O’Neill, K. (2001). The “wonderland” of virtual teams. Journal of Workplace Learning, 13(1), 24–30. doi:10.1108/13665620110364745.
  27. Kalman, Y. M., & Gergle, D. (2014). Letter repetitions in computer-mediated communication: A unique link between spoken and online language. Computers in Human Behavior, 34, 187–193. doi:10.1016/j.chb.2014.01.047.
  28. Keyton, J. (2015). Outcomes and the criterion problem in communication competence research. In A. F. Hannawa & B. H. Spitzberg (Eds.), Communication competence (pp. 585–604). Berlin: de Gruyter Mouton. doi:10.1515/9783110317459-024.
  29. Kline, R. B. (2005). Principles and practice of structural equation modeling. New York: Guilford.
  30. Kock, N. (2004). The psychobiological model: Towards a new theory of computer-mediated communication based on Darwinian evolution. Organization Science, 15(3), 327–348. doi:10.1287/orsc.1040.0071.
  31. Korzenny, F. (1978). A theory of electronic propinquity: Mediated communication in organizations. Communication Research, 5(1), 3–24. doi:10.1177/009365027800500101.
  32. Lenhart, A., Madden, M., & Hitlin, P. (2005). Teens and technology: Youth are leading the transition to a fully wired and mobile nation. Washington, DC: Pew Internet & American Life Project. Retrieved from http://www.pewinternet.org/files/old-media/Files/Reports/2005/PIP_Teens_Tech_July2005web.pdf.pdf.
  33. Lievens, F., De Corte, W., & Schollaert, E. (2008). A closer look at the frame-of-reference effect in personality scale scores and validity. Journal of Applied Psychology, 93(2), 268–279. doi:10.1037/0021-9010.93.2.268.
  34. Mabe, P. A., & West, S. G. (1982). Validity of self-evaluation of ability: A review and meta-analysis. Journal of Applied Psychology, 67(3), 280–296. doi:10.1037/0021-9010.67.3.280.
  35. MacCallum, R. (1986). Specification searches in covariance structure modeling. Psychological Bulletin, 100(1), 107–120. doi:10.1037/0033-2909.100.1.107.
  36. MacCallum, R. C., Browne, M. W., & Sugawara, H. M. (1996). Power analysis and determination of sample size for covariance structure modeling. Psychological Methods, 1(2), 130–149. doi:10.1037/1082-989X.1.2.130.
  37. Marsh, H. W., & Hocevar, D. (1988). A new, more powerful approach to multitrait-multimethod analyses: Application of second-order confirmatory factor analysis. Journal of Applied Psychology, 73(1), 107–117. doi:10.1037/0021-9010.73.1.107.
  38. Maruping, L. M., & Agarwal, R. (2004). Managing team interpersonal processes through technology: A task-technology fit perspective. Journal of Applied Psychology, 89(6), 975–990. doi:10.1037/0021-9010.89.6.975.
  39. McCroskey, J. C., & McCroskey, L. L. (1988). Self-report as an approach to measuring communication competence. Communication Research Reports, 5(2), 108–113. doi:10.1080/08824098809359810.
  40. Mesmer-Magnus, J. R., DeChurch, L. A., Jimenez-Rodriguez, M., Wildman, J., & Shuffler, M. (2011). A meta-analytic investigation of virtuality and information sharing in teams. Organizational Behavior and Human Decision Processes, 115(2), 214–225. doi:10.1016/j.obhdp.2011.03.002.
  41. Mischel, W. (2009). From personality and assessment (1968) to personality science, 2009. Journal of Research in Personality, 43(2), 282–290. doi:10.1016/j.jrp.2008.12.037.
  42. Mischel, W., & Shoda, Y. (1995). A cognitive-affective system theory of personality: Reconceptualizing situations, dispositions, dynamics, and invariance in personality structure. Psychological Review, 102(2), 246–268. doi:10.1037/0033-295X.102.2.246.
  43. Pauleen, D. J., & Yoong, P. (2001). Relationship building and the use of ICT in boundary-crossing virtual teams: A facilitator’s perspective. Journal of Information Technology, 16(4), 205–220. doi:10.1177/107179190501100207.
  44. Payne, H. J. (2005). Reconceptualizing social skills in organizations: Exploring the relationship between communication competence, job performance, and supervisory roles. Journal of Leadership and Organizational Studies, 11(2), 63–77.
  45. R Core Team (2016). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. http://www.R-project.org/.
  46. Riggio, R. E., & Taylor, S. J. (2000). Personality and communication skills as predictors of hospice nurse performance. Journal of Business and Psychology, 15(2), 351–359. doi:10.1023/A:1007832320795.
  47. Rosseel, Y. (2012). lavaan: An R package for structural equation modeling. Journal of Statistical Software, 48(2), 1–36. doi:10.18637/jss.v048.i02.
  48. Roth, P. L., Bobko, P., & McFarland, L. A. (2005). A meta-analysis of work sample test validity: Updating and integrating some classic literature. Personnel Psychology, 58(4), 1009–1037. doi:10.1111/j.1744-6570.2005.00714.x.
  49. Rubin, R. B. (1982). Assessing speaking and listening competence at the college level: The communication competency assessment instrument. Communication Education, 31(1), 19–32. doi:10.1080/03634528209384656.
  50. Rubin, R. B. (1985). The validity of the communication competency assessment instrument. Communication Monographs, 52(2), 173–185. doi:10.1080/03637758509376103.
  51. Schmit, M. J., Ryan, A. M., Stierwalt, S. L., & Powell, A. B. (1995). Frame-of-reference effects on personality scale scores and criterion-related validity. Journal of Applied Psychology, 80(5), 607–620. doi:10.1037/0021-9010.80.5.607.
  52. Shaffer, J. A., & Postlethwaite, B. E. (2012). A matter of context: A meta-analytic investigation of the relative validity of contextualized and noncontextualized personality measures. Personnel Psychology, 65(3), 445–494. doi:10.1111/j.1744-6570.2012.01250.x.
  53. Spitzberg, B. H. (1983). Communication competence as knowledge, skill, and impression. Communication Education, 32(3), 323–329. doi:10.1080/03634528309378550.
  54. Spitzberg, B. H. (1988). Communication competence: Measures of perceived effectiveness. In C. H. Tardy (Ed.), A handbook for the study of human communication: Methods and instruments for observing, measuring, and assessing communication processes (pp. 67–105). Westport, CT: Ablex Publishing.
  55. Spitzberg, B. H. (2006). Preliminary development of a model and measure of computer-mediated communication (CMC) competence. Journal of Computer-Mediated Communication, 11(2), 629–666. doi:10.1111/j.1083-6101.2006.00030.x.
  56. Spitzberg, B. H. (2011). The interactive media package for assessment of communication and critical thinking (IMPACCT©): Testing a programmatic online communication competence assessment system. Communication Education, 60(2), 145–173. doi:10.1080/03634523.2010.518619.
  57. Spitzberg, B. H. (2015). The composition of competence: Communication skills. In A. F. Hannawa & B. H. Spitzberg (Eds.), Communication competence (pp. 237–269). Berlin: de Gruyter Mouton. doi:10.1515/9783110317459-011.
  58. Spitzberg, B. H., & Brunner, C. C. (1991). Toward a theoretical integration of context and competence inference research. Western Journal of Speech Communication, 55(1), 28–46. doi:10.1080/10570319109374369.
  59. Spitzberg, B. H., & Cupach, W. R. (1984). Interpersonal communication competence. Beverly Hills, CA: Sage.
  60. Spitzberg, B. H., & Hecht, M. L. (1984). A component model of relational competence. Human Communication Research, 10(4), 575–599. doi:10.1111/j.1468-2958.1984.tb00033.x.
  61. Stevens, M. J., & Campion, M. A. (1994). The knowledge, skill, and ability requirements for teamwork: Implications for human resource management. Journal of Management, 20(2), 503–530. doi:10.1177/014920639402000210.
  62. Vandergriff, I. (2013). Emotive communication online: A contextual analysis of computer-mediated communication (CMC) cues. Journal of Pragmatics, 51, 1–12. doi:10.1016/j.pragma.2013.02.008.
  63. Walther, J. B. (1992). Interpersonal effects in computer-mediated interaction: A relational perspective. Communication Research, 19(1), 52–90. doi:10.1177/009365092019001003.
  64. Walther, J. B., & Bazarova, N. N. (2008). Validation and application of electronic propinquity theory to computer-mediated communication in groups. Communication Research, 35(5), 622–645. doi:10.1177/0093650208321783.
  65. West, S. G., Finch, J. F., & Curran, P. J. (1995). Structural equation models with non-normal variables: Problems and remedies. In R. Hoyle (Ed.), Structural equation modeling: Issues and applications (pp. 56–75). Newbury Park, CA: Sage.
  66. West, S. G., Taylor, A. B., & Wu, W. (2012). Model fit and model selection in structural equation modeling. In R. H. Hoyle (Ed.), Handbook of structural equation modeling (pp. 209–231). New York: Guilford.
  67. Westmyer, S. A., DiCioccio, R. L., & Rubin, R. B. (1998). Appropriateness and effectiveness of communication channels in competent interpersonal communication. Journal of Communication, 48(3), 27–48. doi:10.1111/j.1460-2466.1998.tb02758.x.
  68. Young, B. S., Arthur, W., Jr., & Finch, J. (2000). Predictors of managerial performance: More than cognitive ability. Journal of Business and Psychology, 15(1), 53–72. doi:10.1023/A:1007766818397.
  69. Zell, E., & Krizan, Z. (2014). Do people have insight into their abilities? A metasynthesis. Perspectives on Psychological Science, 9(2), 111–125. doi:10.1177/1745691613518075.

Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

  • Julian Schulze (1)
  • Martin Schultze (2)
  • Stephen G. West (3)
  • Stefan Krumm (1)
  1. Division Psychological Assessment and Differential and Personality Psychology, Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
  2. Division of Methods and Evaluation, Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
  3. Department of Psychology, Arizona State University, Tempe, USA
