Introduction

Here we attempt to identify a standard for research productivity among physical therapist educators who attained promotion in CAPTE (Commission on Accreditation in Physical Therapist Education) accredited DPT programs and departments during the years 2000 through 2016 in the western region of the United States. Previous studies of other regions have identified no significant correlations between a variety of research productivity measures and the ultimate attainment of tenure, strongly suggesting that promotion and tenure committees take into account factors other than sheer cut-offs in research metrics when determining promotion and tenure (Littman et al. 2017). Furthermore, previous studies have indicated that the Carnegie classification of the faculty member’s institution did not significantly correlate with the metrics of faculty members promoted from Assistant Professor to Associate Professor (Littman et al. 2017). Carnegie Classification is a widely accepted framework for describing institutional research and educational capacity. It categorizes institutions based on the type and number of degrees conferred as a robust indicator of academic output (The Carnegie Classification of Institutions of Higher Education 2015). However, Carnegie classification does not take into account non-standardized characteristics such as the mission or goals of an individual department or program within an institution; so while an institution may confer a large number of research or clinical doctorates, an individual program within that institution may serve a more focused enterprise, such as those observed in the professional education of medicine, dentistry, and other healthcare fields. Notwithstanding this limitation, Carnegie classification may serve as an effective proxy measure of institutional resource availability to faculty, which is hypothesized to strongly influence faculty productivity.

The granting of promotion and tenure is universally seen as a critical event in the life of an academic faculty member. When faculty are hired into a “tenure-track” position such as Assistant Professor, the institution typically begins their “tenure clock,” which can be defined as the period of time during which the faculty member is expected to produce a body of work that reflects their research capabilities and future potential as an educator. The faculty member is then evaluated at multiple administrative levels based largely on this body of work and other tangible elements of their curriculum vitae. Factors evaluated typically revolve around a faculty member’s research productivity, but also include teaching activities and effectiveness, as well as service activities such as recognized participation in committees within the institution and/or professional organizations outside the parent college or university. Furthermore, committees that evaluate faculty for tenure and promotion often create their own guidelines, which can vary widely between academic units and the institutions that house them.

In a 2006 study, 77% of deans overseeing allied health programs ranked teaching activities as their primary consideration for granting tenure, only 22% ranked research as primary, and 1% ranked service as the most important factor for evaluating tenure (Balogun and Sloan 2006). Earlier, in 1989, a survey of medical technology deans yielded similar results, suggesting that many, perhaps most, deans of colleges of allied health ranked teaching and faculty educational attainment (degree) as more important than research activity when granting tenure. At the same time, however, surveyed faculty ranked teaching and educational accomplishment as less important (Hudson and Southerland 1989). Together, these results indicate that the perceptions of deans have remained relatively constant across time and discipline. However, there appears to be a significant disconnect between dean and faculty perceptions, suggesting that deans may not follow their own guidelines or that other individuals in the decision-making pipeline are using disparate criteria when selecting faculty for promotion.

Separating different regions of the United States for the purposes of identifying trends in scholarly activity is supported by a range of previous research, including studies of research productivity that used similar methods (Barnard-Brak et al. 2011; Holcomb et al. 1990), investigations of barriers to research activity in different regions (Hooper et al. 2012), and even analyses of differing psychological and social traits among inhabitants of different regions (Rentfrow et al. 2013). Different regions of the United States are subject to differing levels of research capacity, employment rates in different economic industries, and even population demographics and growth rates (Mackun and Wilson 2011; Current Employment Statistics 2017). Barriers to research can also be state- or institution-specific, in the form of legislation, regulation, and funding or real property availability. Analyzing academic research productivity by region allows regional standards to be compared and contrasted across the United States while allowing institutions, committees, and faculty to compare themselves against regional peer productivity at the time promotion or tenure decisions are made.

There are a variety of metrics used by faculty and institutions to track research productivity, including the number of published articles, the number of times those articles have been cited in other published articles, the impact factors of the journals in which a faculty member publishes, and the amount of funding a faculty member receives, particularly from external sources. A variety of formulas, such as the author h-index (Hirsch 2005), have been proposed to distill this information into a single quantifiable metric. For a variety of reasons, not least the disparity in citation practices and funding availability between disciplines, there is ongoing contention in the academic world regarding the effectiveness of any of these measures for determining faculty productivity, particularly within multidisciplinary programs and departments.
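As a concrete illustration, the h-index can be computed directly from a list of per-article citation counts. The following Python sketch implements Hirsch's definition (the largest h such that the author has h papers each cited at least h times); the citation counts shown are hypothetical, not data from this study:

```python
def h_index(citations):
    """Return the author h-index: the largest h such that the author
    has h papers each cited at least h times (Hirsch 2005)."""
    h = 0
    # Rank papers from most to least cited, then find the crossover point
    # where a paper's citation count falls below its rank.
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical record of per-paper citation counts:
print(h_index([10, 8, 5, 4, 3]))  # 4 papers cited at least 4 times -> 4
```

Note that the h-index is insensitive to a few highly cited papers: `h_index([100])` is still 1, which is one reason it is often reported alongside total citation counts, as in this study.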

It has also been shown that Physical Therapy faculty research productivity does not correlate with their students’ academic success when controlling for number of faculty, student undergraduate GPA and demographic profile, among other factors (Cook et al. 2015). A possible interpretation of these results is that faculty research productivity contributes substantially to a sense of prestige for the program or institution but does not materially benefit Physical Therapy student learning or outcomes. Indeed, faculty with successful research programs may receive grants to “buy out” of their teaching time in order to focus on research, possibly to the detriment of availability for student questions, and requiring the hiring of adjunct faculty to teach their courses. Despite the legitimacy of these concerns, the purpose of this study was to attempt to identify regional standards currently used to ascertain research productivity metrics underlying academic promotion and tenure decisions.

Physical Therapy faculty traditionally come from diverse academic backgrounds with a variety of credentials, and their programs typically seek a broad, interdisciplinary and interprofessional approach to their educational offerings. This approach apparently benefits student educational attainment and clinical preparedness, but the incorporation of multiple disciplines within one program may result in a disconnect between research output and expectations or perceptions of research productivity, along with limitations in resource availability. During the 2015–2016 academic year, CAPTE accredited DPT programs nationwide employed a reported 2437 full-time and 282 part-time core faculty. Certified clinicians composed 41.9% (1196) of these faculty, and 33.2% (809) had grant funding of any amount. Professional doctoral degrees (EdD, DSc, etc.) composed 13.9% of DPT faculty degrees, while 45.9% of faculty held a PhD in a wide assortment of fields (Artis and Chana 2016). In contrast, in M.D. programs accredited by the LCME, 72.9% (121,472) of the 166,713 total M.D. program faculty held an M.D. and 28.5% (47,518) held a Ph.D. (AAMC 2016). This disparity between faculty in these two healthcare-related fields clearly illustrates the diverse educational backgrounds and range of specializations within DPT academic programs, which are usually housed in small programs with historic enrollment averaging between 40 and 45 students (Artis and Chana 2016).

In this study, publicly available curriculum vitae harvested directly from institutional faculty web pages were used to collect data on faculty research productivity. This method has been used previously in the literature (Barnard-Brak et al. 2011) and has been found to be consistent with data obtained through self-reporting surveys (Kaufman 2009; Littman et al. 2017; Holcomb et al. 1990). In order to distinguish barriers to academic research productivity surrounding economic factors related to institutional location, this study uses the regional breakdown of the United States as defined by the Bureau of Economic Analysis, which specifically identifies regional variations in production, personal income, and real property assets. Research productivity may also be subject to geographical limitations outside the control of faculty members, such as collaborative potential and proximity to other research or industrial partners.

It is important for academic Doctor of Physical Therapy programs to create and maintain effective guidelines for tenure and promotion reviews that align seamlessly with their programmatic goals, especially since tenure decisions have a profound impact on faculty careers and, concomitantly, on the effective voice of DPT programs within their institutions, most of which require tenured faculty representation for considerations of institutional governance, programmatic growth, and, most notably, budgetary outlays. Accordingly, it can be argued that such guidelines are critical for the long-term advancement of the Physical Therapy profession. Thus, the overarching goal of this study is to identify research productivity metrics of successfully promoted Physical Therapy faculty and to present those data for the express purpose of informing the creation of clear guidelines for promotion and tenure decisions. A secondary aim of this study is to provide the benchmarks necessary to follow changes in promotion metrics over time and region.

Methods

Methods used were similar to those previously published (Littman et al. 2017). Briefly, data were collected from institutionally linked, publicly available curricula vitae (CVs) of tenure-track (non-clinical) DPT professors currently employed at a CAPTE accredited DPT program who earned promotion during the years 2000 through 2016 at a university located in the Western United States. The Western United States was considered to encompass three regions: (1) the Far West, comprising California (CA), Nevada (NV), Oregon (OR), and Washington (WA); (2) the Rocky Mountain states, comprising Colorado (CO), Idaho (ID), Montana (MT), Utah (UT), and Wyoming (WY); and (3) the Southwest, comprising Arizona (AZ), New Mexico (NM), Oklahoma (OK), and Texas (TX). A total of 42 subjects with publicly available CVs who received promotion from institutions in these states met these inclusion criteria and were used in this study.

Research productivity factors used in this study include number of publications, citation counts (including self-citation), journal impact factor, h-index at time of promotion, funding received, and the number of years until promotion since first attaining Assistant Professor rank. Only peer-reviewed manuscripts indexed by Thomson Reuters Web of Science and published during the years the subject served as Assistant Professor were used to determine total publications, number of times those publications were cited, and author h-index based on these publications. The 5-year journal impact factor for each subject’s publications (as reported in 2017) was collected from Thomson Reuters Journal Citation Reports (JCR). Awarded funding amount up to and including the year of promotion was collected from the subject’s CV and was included if the subject served as the principal or co-investigator for the funded project.

Each subject’s number of publications, total citations, and journal impact factor were averaged, then the median, mean and standard deviation (SD) of all subjects’ averages were calculated. The median, mean and standard deviation of all subjects’ internal and external funding, years until promotion, and h-index at the time of promotion were also calculated. Linear regressions and two-tailed ANOVA analyses (α = 0.05, β = 0.80) were performed using IBM SPSS Statistics release 24.0.0.0 (32-bit edition). No data points were removed from analysis. Confidence intervals are 95% of the mean where shown. Where observed, overlapping data points were offset horizontally against the ordinal x-axis of “Years Until Promotion”.

Results

Forty-two faculty from 16 of the 50 CAPTE accredited DPT programs in the Western United States met the inclusion criteria and were used in this study. All data metrics were available for the 42 faculty and none were excluded from analysis. The number of universities represented by the faculty in this study compared with the total CAPTE accredited DPT universities per state is as follows: Far West: CA: 1/15, NV: 1/2, OR: 1/2, WA: 1/3; Rocky Mountains: CO: 2/3, ID: 0/1, MT: 0/1, UT: 1/3, WY: 0/0; and Southwest: AZ: 3/4, NM: 0/1, OK: 0/2, TX: 6/13. The number of faculty per state and their proportion of the 42 total subjects is as follows: Far West: CA: 1 (2.4%), NV: 4 (9.5%), OR: 1 (2.4%), WA: 1 (2.4%); Rocky Mountains: CO: 11 (26%), ID: 0 (0%), MT: 0 (0%), UT: 1 (2.4%), WY: 0 (0%); and Southwest: AZ: 3 (7.1%), NM: 0 (0%), OK: 0 (0%), TX: 20 (48%). The institutions were distributed among Carnegie Classifications as follows: M1: 7/42, M2: 1/42, R3: 7/42, R2: 15/42, R1: 2/42, and Specialty medical or health institutions (SF): 10/42.

The median years to promotion since first hired as a tenure-track Assistant Professor was 6 with an average of 8.1 (4.6 SD) and ranging from 4 to 21.

The data are reported here in a style similar to that published previously (Littman et al. 2017) in order to aid in direct comparison between the regions. As described in Table 1, the median publication count for DPT faculty promoted from Assistant to Associate Professor in the Western region was 3 with an average of 5.4 (5.5) and a range of 0 to 18 publications. A total of 7 subjects (17%) had no indexed publications. The median of total citations was 25.5 with an average of 113.5 (169.1), ranging from 0 to 752 citations. A total of 9 subjects (21%) had received zero citations in an indexed publication. The median of the average 5-year journal impact factor in which a subject published was 2.603 with an average of 2.582 (1.708) and a range of 0 to 8.330. The median of average citations per manuscript was 7 with an average of 17.2 (21.5) and a range from 0 to 98.5. The median total funding received was $5145.00 with an average of $291,946.52 ($739,714.09) and a range of $0.00 to $3,428,500.00. The median of external funding was $0.00 with an average of $269,370.38 ($713,643.38), ranging from $0.00 to $3,270,000.00. Subjects had a median h-index at time of promotion of 2 with an average of 3.2 (3.5), ranging from 0 to 13. A total of 24 of 42 (57%) received no external funding and 8 (19%) received up to $15,750 in external funding prior to promotion. Faculty with an h-index of 0 prior to promotion totaled 9 of 42 (21%), and 7 (17%) had no indexed publications. A Shapiro–Wilk test indicated that all of the individual faculty metrics were significantly non-normal in distribution (p < 0.004), which supports the use of the median, not the mean and standard deviation, for analysis.
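The large gaps between medians and means above (e.g., median total funding of $5145 against a mean of $291,947) are exactly what a right-skewed, non-normal distribution produces, and they motivate the preference for the median. A short Python sketch with hypothetical funding values (not the study's data) shows how a few large awards pull the mean while leaving the median nearly untouched:

```python
import statistics

# Hypothetical right-skewed funding amounts (in dollars), mimicking the
# pattern in Table 1 where most subjects have little or no funding and a
# few hold large awards:
funding = [0, 0, 0, 0, 0, 5000, 12000, 50000, 900000, 3400000]

print(statistics.median(funding))  # 2500.0 -- robust to the large awards
print(statistics.mean(funding))   # 436700 -- dominated by two outliers
```

For distributions like this, the mean describes almost no individual subject, which is why the Shapiro–Wilk result supports reporting and comparing medians.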

Table 1 Forty-two subjects who successfully earned promotion from Assistant to Associate Professor from Western institutions were used in this study

Linear regression analysis was performed to determine which factors predicted successful promotion to Associate Professor with respect to years spent at the rank of Assistant Professor. Overall, the identified metrics did not significantly nor closely correlate with the number of years it took a candidate to receive promotion (p = 0.480, R2 = 0.215). Within this analysis, the number of publications did significantly but loosely correlate with the number of years spent at the rank of Assistant Professor (Fig. 1, p = 0.011, m = − 0.47 publications/year, R2 = 0.152). Total citations a faculty member received did not correlate with years until promotion (p = 0.081, m = − 10.02 total citations/year, R2 = 0.074), but the h-index of a faculty member at time of promotion was significantly but loosely correlated with years at the rank of Assistant Professor (Fig. 2, p = 0.019, m = − 0.28 h-index/year, R2 = 0.130). The amount of external funding a faculty member was awarded was not correlated with the number of years they spent earning promotion (Fig. 3, p = 0.407, m = − $20,400/year, R2 = 0.017). Quantity of external funding awarded loosely predicted the total citations a researcher received (Fig. 4, p = 0.011, m = + 9.24 citations per $100,000, y-intercept = 88.59 citations, R2 = 0.152). The Carnegie Classification of the faculty member’s promoting institution did not significantly correlate overall with the faculty research productivity metrics identified here (p = 0.307, R2 = 0.258). However, within that analysis, individual metrics did correlate significantly but loosely with Carnegie Classification, including total funding (p = 0.025, R2 = 0.092), external funding (p = 0.027, R2 = 0.090), total citations (p = 0.008, R2 = 0.134), average citation count (p = 0.013, R2 = 0.117), and h-index (Fig. 5, p = 0.045, R2 = 0.070).

Fig. 1
figure 1

The number of manuscripts an Assistant Professor published in an indexed source was loosely but significantly correlated with the number of years spent at the rank of Assistant Professor before promotion (p = 0.011, m = − 0.47, R2 = 0.152). Overlapping data points are offset horizontally against an ordinal x-axis

Fig. 2
figure 2

The h-index of an Assistant Professor at time of promotion was loosely but significantly correlated with the number of years spent prior to promotion to Associate (p = 0.019, m = − 0.28, R2 = 0.130). Overlapping data points are offset horizontally against an ordinal x-axis

Fig. 3
figure 3

The amount of external funding a faculty member received did not significantly correlate with the number of years they spent earning promotion (p = 0.407, m = − $20,400/year, y-int = $435,000, R2 = 0.017). The data indicates the median amount of external funding a faculty member has received prior to promotion to Associate Professor is $0.00, regardless of the number of years spent earning promotion. A total of 24 of 42 subjects (57%) received $0 in external funding. Overlapping data points are offset horizontally against an ordinal x-axis

Fig. 4
figure 4

The total number of citations an Assistant Professor receives is loosely but significantly correlated with the amount of external funding received prior to promotion to Associate (p = 0.011, m = $1640 per citation, y-intercept of $82,700, R2 = 0.152, CI: ± $250,816.47)

Fig. 5
figure 5

The Carnegie Classification of the promoting institution did not significantly correlate with the overall research metrics used in this study, but it did loosely but significantly correlate with author h-index (shown here, p = 0.045), the total number of citations received (p = 0.008), the average number of citations received per publication (p = 0.013), and the amount of external funding awarded to the faculty member (p = 0.027)

Ten of the 42 subjects (23.8%) were also promoted to Full Professor during the interval analyzed in this study. This subgroup exhibited a median time to Full Professor of 5 years with an average of 5.3 (3.1 SD). These subjects had higher average external funding amounts during their Associate Professor years than they did as Assistant Professors prior to promotion. Median external funding prior to promotion to Full Professor was $1250.00, with a mean of $773,428.10 ($2,347,522.71) and a range from $0 to $7,453,213. These faculty members had a median total publication count of 2, mean of 8.3 (15.1); median total citations of 27, with an average of 77.4 (122.0); median h-index during Associate Professor years of 2, mean of 3.4 (3.8); median of average citations per article of 10, mean of 9.7 (8.0); and median journal impact factor of 2.429, mean of 2.174 (1.405).

The 5-year impact factor of the journal in which an article was published did not significantly correlate with the number of citations that article received (Fig. 6, p = 0.276, m = 0.89, R2 = 0.004) within the dataset used in this study. However, the number of years since an article was published did correlate loosely but significantly with its citation count (p = 1 × 10−13, m = 2.36 citations per year, y-int = 0 citations, R2 = 0.161).

Fig. 6
figure 6

Within this data set, the five-year impact factor of the journal in which a manuscript was published did not significantly correlate with the number of citations that article received (n = 316 manuscripts, p = 0.276, m = 0.89, R2 = 0.004)

Discussion

Data from these forty-two subjects indicate that a broad range of scholarship and creative productivity factors may be taken into account during considerations of promotion and tenure within academic institutions of the Western United States. A Shapiro–Wilk test of normality revealed that all factors are significantly non-normally distributed (p < 0.004), which supports the use of the median, not the mean, for analysis. The faculty promoted to Associate Professor earliest in this dataset did so in 4 years, while the median was 6 years, ranging as high as 21 years. While overall these research productivity metrics did not significantly predict the number of years spent prior to promotion to Associate Professor, several of the individual metrics were loosely but significantly correlated with the number of years spent prior to promotion (see Table 1). The strongest of these correlations was with publication count (Fig. 1), the total number of articles published in sources indexed by Thomson Reuters Web of Science. A closely related metric, the author h-index (Fig. 2), was also significantly correlated with the number of years spent prior to promotion. However, neither the average journal impact factor nor the amount of external funding awarded was significantly correlated, indicating these metrics may be among the least considered.

Comparing these data with the data from the Southeastern (SE) United States (Littman et al. 2017), we may reasonably conclude that the institutions studied in the SE place a larger emphasis on journal impact factor while the Western institutions place more emphasis on the number of articles an academician publishes. Neither region places a significant emphasis on funding awarded, which perhaps reflects the economic and policy climate related to funding availability during the years 2000–2016 included in this study. In addition, academic researchers in the DPT field may also be practicing clinicians who have access to patient data and are able to conduct research in the clinical environment without additional funding. The data from both the Western and SE regions indicate a downward slope when plotting any of the identified research metrics against years until promotion. For instance, the more years spent at the rank of Assistant Professor, the fewer total publications the faculty member had published at the time of their promotion to Associate Professor. This trend also held true across both regions for author h-index, total citations, and amount of external funding a faculty member received. It may be that these identified research metrics are sufficient but not necessary for promotion, such that early promotion is more likely for faculty who are particularly successful in these metrics, and that teaching and service activities counterbalance research productivity later in the promotion timeline. However, this cannot be ascertained with certainty without access to a cohort of faculty unsuccessful in the promotion and tenure process with which to compare. This study also did not attempt to identify or quantify faculty teaching or service metrics. As such, this represents a limitation of the methodological design of this study.

Within this dataset, 7 subjects (17%) had no indexed publications. This is consistent with both data from the Southeastern (SE) region of the United States (Littman et al. 2017) and with self-reported data from DPT faculty published in 2009 (Kaufman 2009), and is taken as evidence of the consistency of the curriculum vitae collection methodology with that of voluntary self-reporting survey studies performed in the recent past. A total of 9 subjects (21%) in this dataset had received no citations at the time of promotion, which is also consistent with previously published data from the SE region (Littman et al. 2017). Twenty-four subjects (57%) did not receive any external funding. This differs from the SE region, in which 36% of faculty had no funding, and from survey data from 2009, in which 28% of subjects reported having received no external funding (Kaufman 2009). The composition of the institutions in the Western and SE region datasets did differ insofar as the SE faculty represented 13 R1 institutions while the Western faculty represented only 2 R1 institutions, as determined by Carnegie Classification. Within the Western dataset, the Carnegie Classification of the institutions did not significantly correlate with the overall research metrics identified (p = 0.307, R2 = 0.258). The amount of external funding a faculty member received did correlate significantly with Carnegie Classification as analyzed by linear regression (p = 0.027). Furthermore, the three closely related metrics of total citations (p = 0.008), author h-index (p = 0.045), and average number of citations per publication (p = 0.013) significantly correlated with Carnegie Classification within the Western faculty.
In contrast with the SE institutions, where external funding did not significantly correlate with Carnegie Classification (p = 0.062), the Carnegie Classification of Western institutions may play a more notable role in resource availability for faculty members and thus more strongly affect faculty productivity, or it may be that the promotion and tenure requirements of Western U.S. institutions are more stratified along Carnegie Classification. According to the U.S. Bureau of Economic Analysis, the regions comprising the Western United States in this analysis accounted for 34.7% of U.S. GDP in 2016 while the SE United States accounted for 21.4% (Bureau of Economic Analysis 2018), supporting the inference that regional GDP does not account for the external funding of research awarded to faculty within this dataset. Finally, citation rate may not be as important an indicator of success for faculty who conduct rehabilitation research in the clinical setting, as the target audience is clinicians and the aim is directly improving patient outcomes. This use of research is difficult to quantify.

Using the indexed manuscripts published by faculty within this dataset, it was determined that the five-year impact factor of the journal did not significantly correlate with the number of citations an article received (Fig. 6; p = 0.276, R2 = 0.004, m = 0.89 citations/impact factor). Instead, the number of years since an article's publication significantly and more closely correlated with the number of citations it received (p = 1 × 10−13, R2 = 0.161, m = 2.36 citations/year). This may indicate a non-normal distribution of citations received per article within a given journal, or it may be an indication of the different citation rates and practices that characterize the broad and varied fields investigated by DPT faculty. This further supports the lack of correlation found between time to promotion and the average impact factor of the journals in which a faculty member publishes (p = 0.348). The total number of citations received by an Assistant Professor was also significantly correlated with the amount of external funding awarded to that Assistant Professor (Fig. 4), with a slope of $1640 per citation and a y-intercept of $82,700.

The methodology used in this study, namely the collection of publicly available curricula vitae, has a number of shortcomings. These vitae were not uniformly available across all institutions and programs and thus provided information from only a fraction of the institutions in the region. Furthermore, this methodology did not allow us to positively identify individuals unsuccessful in their bids for promotion and tenure, limiting our ability to contrast the metrics against such individuals. However, this dataset identified a broad range of metrics across all Carnegie classifications that resulted in successful promotion. This indicates that the identified research metrics are generally inadequate as sole justifying factors for promotion. Other metrics, such as teaching effectiveness and student perceptions of instruction, or non-quantitative characteristics may play an important role in the determination of promotion. Such measures of teaching effectiveness are not publicly available, and neither are the guidelines used by the various institutional promotion and tenure committees.

The current research sought to expand upon data obtained from the Southeastern region of the United States by obtaining data from the Western United States. As such, the data presented here are restricted to that region and the economic and institutional assets available to support faculty research productivity. This study did not seek to address other important factors such as the public/private status of the institution or to differentiate between institutional budgets. Additionally, U.S. Title VII protected characteristics of faculty could not be comprehensively identified with the methodology used in this study and thus were not considered in the analysis. The overall value of this research is in its presentation of a number of metrics that have previously led to successful promotion and tenure in the Western region of the United States. Accordingly, the descriptive data outlined in this article can, and perhaps should, be used by program directors, promotion and tenure committees, institutional administrators, and faculty themselves in developing competitive guidelines for promotion and tenure based on programmatic and institutional goals.