
Investigating Business English Teachers’ Belief About Online Assessment: Q Methodology Conducted During COVID-19 Period


This paper reports a study investigating business English teachers’ beliefs about online assessment using Q methodology. Against the background of teachers having to conduct online assessment during the COVID-19 period, and with performance assessment as its theoretical foundation, this paper studied the beliefs of 22 Chinese business English teachers who had to conduct assessment in an online context. Drawing on Q-methodology data analysis and complementary interviews, the findings showed a shared configuration among the business English teachers across four factors: a strong belief in performance assessment for business English courses; belief in the supportive role of ICT; belief in assessment as an ongoing process; and worries about the quality of online performance assessment. The factors were further interpreted from a systematic approach to assessment and the role of ICT in online teaching. This study is of significance in helping teachers improve the quality of online assessment and in raising their awareness of effective ICT use in online teaching.


During the outbreak of COVID-19 in 2020, when traditional teaching methods were no longer accessible, educational institutions turned to information and communication technology (ICT) for help. In the pandemic, both students and teachers were “forced” to participate in teaching with ICT. Online teaching (or, for students, online learning) became a must and an unavoidable topic for education. In fact, online teaching had remained a focus of research even before the pandemic, alongside the development of ICT. In English language teaching, experts have conducted research into online teaching to address the challenges that ICT brings to English teaching. They have explored English teaching with ICT, from general online education modes (Petcovici, 2016) to specific mobile-assisted learning (Shih, 2017) and vocabulary teaching (Muhanna, 2012). They have also investigated English learning with ICT, such as autonomous learning and students’ perceptions (Tao, Zheng, Lu, Liang, & Chin-Chung, 2020). These previous studies have examined teaching and learning respectively in the online context, but they seem not to include an important instructional link: the assessment of online English courses. A systematic approach in assessment theories highlights assessment as an essential link in the teaching process, as assessment data can help teachers improve teaching by revealing learners’ progress and achievement as well as their problems. From this perspective, assessment is an important topic for English teaching in the online context and is worth in-depth research.

Researchers agree that teachers’ beliefs influence the teaching, mindset and instructional decisions that underlie their instructional practices (Shinde & Karekatti, 2012). In particular, teachers’ beliefs are regarded as a primary source of their instructional practices. Online teaching practice provides new evidence that using ICT in instruction depends on teachers’ ability to change their beliefs (Palak, 2021). It follows that, in order to develop effective assessment practices in the online teaching environment during the COVID-19 period, teachers’ beliefs about assessment and about ICT in their field of teaching hold great significance.

In order to explore teachers’ beliefs about online assessment, this study takes Business English teachers as its research subjects and adopts Q methodology as its research method. Business English courses have widely adopted performance assessment for its advantages in measuring communicative and intercultural competence in authentic business contexts (Wang, 2009). The findings will deepen our understanding of teachers’ beliefs about assessment in the online context. They may also help teachers improve the quality of online Business English teaching practice, as reliable feedback can be obtained from performance assessment.

Theoretical Foundation

Performance Assessment in Business English Courses

The research into performance assessment provides strong justification for its application to Business English courses. Performance assessment, examined from a systematic approach, has a positive influence on instructional practice for its formative, authentic and interactive characteristics. It requires students to perform a task in a real-world situation rather than select an answer from a ready-made list in the classroom. In the process of performance assessment, language proficiency is assessed in dynamic and authentic communication, in which communicative skills are assessed in addition to speaking and writing skills. Performance assessment is widely adopted in Business English courses because it serves the goal of Business English: to develop students’ particular communicative competence. The literature reports that performance assessment leads to positive changes in instructional practices (Stecher & Mitchell, 1996) and high-level thinking skills (Koretz, Stecher, Klein, & McCaffrey, 1994).

Relevant empirical studies have concluded that performance assessment is more effective than traditional summative assessment for Business English courses because the former is process-oriented (Strelchonok, 2018). Many experts have expressed their own understanding of developing performance assessment (Clementi, 2003; Herman, Aschbacher, & Winters, 1992). In terms of process, four procedural descriptors are stipulated: outcomes, the assessment administration process, the actual question prompt and the scoring rubric (Herman, Aschbacher, & Winters, 1992). It is necessary to explore Business English teachers’ attitudes towards this process of performance assessment so as to analyze their practice and clarify major problems in their real instructional practices.

Three main aspects of performance assessment (its characteristics, its advantages and disadvantages, and its quality standards) have been summarized in the study of Business English teachers’ beliefs about classroom performance assessment. Performance assessment in Business English programs is characterized by the use of rubrics, teachers’ participation and high cost (Clementi, 2003; Wang, 2009). Its strong points can be summarized as theoretical and practical coherence and complex task design focusing on learners’ high-level knowledge and skills in response to authentic business situations. According to Business English teachers, performance assessment is critiqued for its labor intensiveness, lack of appropriate difficulty and creativity, dependence on teachers’ personal skills, subjectivity and uncertain validity (Wang, 2009). For these reasons, proper rubrics (Hawk, 2009), two or more raters and multiple assessment formats are often adopted to guarantee the validity and reliability of performance assessment in real instructional situations. A high-quality performance assessment should involve a valued activity, act as a link among instruction, learning and assessment, have a clear and proper scoring rubric, be fair and free from bias, and be conducted by skilled raters.

Although research exploring the principles and theories of performance assessment has been abundant since the turn of the twenty-first century (Adair-Hauck et al., 2006), there is limited research on how the principles and theories of offline performance assessment are reflected in the ways teachers conduct online performance assessment. Online situations are recognized as one of the contextual factors influencing teachers’ performance assessment practices, in addition to their individual beliefs (Hopârtean, 2020). Therefore, when performance assessment is conducted in the new online context, the three main aspects of performance assessment summarized in the literature review (Clementi, 2003; Hawk, 2009; Wang, 2009) should be re-examined.

Teachers’ Beliefs on Performance Assessment and ICT

Teachers’ personal factors such as educational background, teaching experience, professional knowledge, teaching skills and beliefs are closely related to instructional practices. Among them, teachers’ beliefs are closely related to the implementation of their teaching practices (Shinde & Karekatti, 2012). In classroom teaching, teachers’ attitudes can work as a predictor of students’ achievement, and teachers’ commitment appears to play a key role in students’ performance (Howie, 2005). Moreover, the close relationship between teachers’ beliefs and their assessment practices is well documented (Valizadeh, 2019). Therefore, if Business English teachers’ perspectives and approaches in the process of performance assessment are clearly revealed, their instruction and evaluation practices can be expected to improve. This study, which aims to interpret teachers’ practices in performance assessment, focuses on teachers’ individual beliefs, given that there is little variance in class size, time on task, teaching load and even teachers’ level of education in Business English online classrooms.

Theoretically, ICT is a valuable instrument to “empower” teaching and learning, and performance assessment practices are expected to be successfully supported by ICT (Petcovici, 2016). Previous empirical studies show that, in practice, ICT presents diverse and complex pedagogical applications (Caird & Lane, 2015). ICT has been found either to facilitate students’ learning (Hennessy et al., 2007) or to pose barriers to teaching and learning (Umoru, 2012). The effect of ICT is related to teachers’ beliefs. The majority of teachers are proficient in ICT for personal use (e.g. social networking), but their beliefs may influence how they implement ICT in teaching (Lim & Khine, 2006). Despite the widespread application of ICT in Business English online teaching during the COVID-19 period, and some related research into specific instructional practices, there has been little research on teachers’ beliefs about using ICT while accomplishing performance assessment. To enrich the research into Business English online performance assessment, this study explores teachers’ beliefs, because their beliefs about performance assessment and about ICT will influence their performance assessment practices.

Q Methodology

Q methodology is a research methodology borrowed from psychology and designed to uncover people’s subjective ideas in a specific context. It is well suited to investigating teachers’ implicit mindsets and beliefs (e.g. Irie et al., 2018) and allows individuals to voice their opinions, even extreme viewpoints (Watts & Stenner, 2005). When the full range of people’s viewpoints on a given topic is taken into account, counter-intuitive results may emerge. Our study deals with a special context in which teachers were forced to conduct performance assessment through online channels during COVID-19. In this changed context, individual teachers may have their own understandings and beliefs about online performance assessment. Q methodology helps us approach teachers’ in-depth beliefs by including a large variety of viewpoints.

In data collection, Q methodology can effectively capture teachers’ various opinions of performance assessment in the given online context. In contrast to traditional surveys, Q methodology is capable of extracting hidden opinions by breaking researchers’ pre-defined categories and including participants’ subjective opinions. In the online context, teachers’ beliefs go beyond pre-defined categories. When teachers’ beliefs about online performance assessment are to be explored, Q methodology is a suitable research method as it sets no preconception or hypothesis (Watts & Stenner, 2012). Moreover, Q methodology can prevent teachers from giving neutral responses, a tendency in our moderate and conservative Chinese culture. Instead, the Q-sort forces participants to sort the statements into a quasi-normal distribution grid rather than express simple agreement or disagreement. Such grid filling is an interactive and dynamic process in which individuals have to understand, compare and examine all the statements against their own criteria, thereby producing ideas that they might not otherwise think of (Ernest, 2011).

In data analysis, Q methodology is an effective research method combining qualitative interpretation with quantitative results: it can reach objective conclusions out of subjective opinions. Qualitative interpretation leads to an in-depth understanding of the complexity (Watts & Stenner, 2012), while quantitative factor analysis can draw comparatively objective conclusions out of subjective opinions (Lundberg, 2019). In Q methodology, factors selected for qualitative analysis must meet two quantitative requirements: (1) their eigenvalue should be above 1.00; and (2) they should illustrate a significant configuration in the correlation matrix (Watts & Stenner, 2005). In this way, the factor analysis can have as much explanatory power as possible. In a word, Q methodology is a reliable method for investigating people’s beliefs by mapping out the full range of opinions, both consensus and contradiction. Since its introduction into education, teachers’ attitudes, beliefs and mindsets have been explored and explained with it (e.g. Irie et al., 2018; Lundberg, 2019; Yang & Montgomery, 2013). To our knowledge, however, there has been no study using Q methodology to investigate teachers’ beliefs about online assessment, which gives rise to our study.

This Study

Context and Participants

English teaching at the tertiary level in China during the COVID-19 period was conducted online to safeguard teachers’ and students’ health. Correspondingly, online assessment was widely adopted. Teachers, based on their beliefs about online assessment, designed various online assessments including written tests, oral presentations, simulations and essay writing. This study focused on Business English teachers and their beliefs about performance assessment. Twenty-two Business English teachers from a Chinese university in Shanghai participated in this study. This university was one of the three earliest universities in China to establish a Business English major, and the twenty-two teachers taught Business English majors at three levels. The participants taught 12 different Business English courses such as Enterprise English, Intercultural Communication, English for International Finance, and Introduction to Business English. They were aged from 25 to 51, with teaching experience ranging from 1 year to 23 years. Among them, six were male and sixteen were female, a typical gender distribution among Chinese English teachers. Since the focus of Q methodology is the range of viewpoints provided by participants in the given context, this study sought to include varied and distinctive opinions rather than a large number of participants.


This study consisted of two main steps: Q-sort, and Q classification and interpretation. The former was the critical data-collection step in this study. Participants were first interviewed about their beliefs about online performance assessment. Based on these interviews, the Q-sort was designed with a quasi-normal distribution grid (see Fig. 1). All the participants completed the Q-sort in October 2020. In the Q-sort, the grid filling was difficult but essential for the participants to understand their own hidden mindsets and clarify their dilemmas (Watts & Stenner, 2005; Yang & Montgomery, 2013). The twenty-two participants were asked to sort the 48 statements into a forced distribution continuum ranging from “mostly disagree” (− 5) to “mostly agree” (5). The 48 statements covered the three main aspects of performance assessment framed in theory (Clementi, 2003; Hawk, 2009; Wang, 2009): (1) the advantages and disadvantages of performance assessment; (2) quality standards for performance assessment; and (3) the process of performance assessment. In addition, to address the online context, opinions about teachers’ ICT use were included.

Fig. 1

The Q-sort of this study (The number of Q statements placed in the forced distribution grid from − 5 to 5 is 1, 2, 4, 6, 7, 8, 7, 6, 4, 2, and 1 respectively)
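As a brief illustrative sketch (not part of the original study materials), the forced distribution described in Fig. 1 can be checked arithmetically: the column counts of the quasi-normal grid must sum to the number of Q statements.

```python
# Illustrative sketch (not from the study): verify that the forced
# quasi-normal distribution grid holds exactly 48 Q statements.

# Column values run from "mostly disagree" (-5) to "mostly agree" (+5).
column_values = list(range(-5, 6))
# Number of statement slots per column, as given in Fig. 1.
column_counts = [1, 2, 4, 6, 7, 8, 7, 6, 4, 2, 1]

# One count per column, and one slot per statement overall.
assert len(column_values) == len(column_counts)
total_slots = sum(column_counts)
print(total_slots)  # 48
```

The symmetric counts (1, 2, 4, … 8 … 4, 2, 1) are what make the distribution quasi-normal: few slots are available at the extremes, so participants must reserve the ±5 positions for the single statement they feel most strongly about.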

Data from each Q-sort were collected and analyzed via the PQMethod and KADA software (Banasick, 2019). We input the twenty-two participants’ Q-sort data, and factor arrays were produced automatically. Five factors were extracted (with eigenvalues above 1.00) and then rotated analytically (Varimax and judgmental rotation) to obtain a correlation matrix. Four factors met the requirement for factor analysis (Lutfallah & Buchanan, 2020). Qualitative interpretation of the four main factors was conducted to reveal teachers’ beliefs about online assessment.
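The extraction step can be sketched as follows. This is an illustrative reconstruction with synthetic data, not the study’s actual PQMethod/KADA run; only the participant count (22), statement count (48) and the eigenvalue-above-1.00 criterion are taken from the text.

```python
import numpy as np

# Illustrative sketch with synthetic data (not the study's real Q-sorts):
# extract unrotated factors from a person-by-person correlation matrix and
# retain those whose eigenvalue exceeds 1.00.
rng = np.random.default_rng(0)
n_participants, n_statements = 22, 48

# Each row is one participant's Q-sort (grid placements from -5 to +5).
sorts = rng.integers(-5, 6, size=(n_participants, n_statements)).astype(float)

# In Q methodology the correlation matrix is between *persons*, not items:
# np.corrcoef treats each row as one variable (one participant's sort).
corr = np.corrcoef(sorts)

# Eigenvalues of the correlation matrix measure how much of the total
# variance each unrotated factor accounts for.
eigenvalues = np.linalg.eigvalsh(corr)[::-1]  # sorted descending

retained = int(np.sum(eigenvalues > 1.00))
print(f"factors with eigenvalue > 1.00: {retained}")
```

With real Q-sort data, the retained factors would then be rotated (e.g. by Varimax) before interpretation; rotation redistributes variance among the retained factors but does not change how many pass the eigenvalue criterion.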


The findings of Q methodology were reported in two sections: a brief quantitative description and a detailed qualitative interpretation. The quantitative description explained the statistical significance of the factors and connections among the factors. The qualitative interpretations provided detailed analysis of each factor with interview data from participants.

The Quantitative Description of Four Factors

In this study, five principal factors were extracted (see Table 1), and four of them were selected for qualitative explanation. In Q methodology, a guiding principle stipulates that extracted factors should account for as much of the variation as possible. The key indicator is the eigenvalue, which illustrates the extent of the variation explained. A standard requirement for selecting factors is an eigenvalue above 1.00 (Watts & Stenner, 2005). In this study, five factors met this requirement (see Table 1).

Table 1 Five factors with eigenvalue above 1.00

In addition to the key indicator of eigenvalue in factor extraction, we referred to the correlation matrix when confirming the factors for qualitative explanation. According to the principles of Q methodology, the correlation matrix should be introduced to illustrate shared configurations. Significant configurations are produced when participants attribute importance to the same factor. In this study, the five factors were rotated analytically to illustrate their correlations. In the correlation matrix, Factors 1, 2, 3 and 5 were revealed to have significant configurations (see Fig. 2), indicating that these four factors had much explanatory power in their original correlation. Based on the key indicator and with reference to the correlation matrix, we confirmed four factors for qualitative explanation in this study.

Fig. 2

Correlation matrix in this study

The Qualitative Interpretation of the Four Factors

The four factors selected according to the quantitative results were interpreted qualitatively. Through PQMethod, factor arrays (see Table 2) were formed, indicating the significance that all participants attributed to a given factor. Each item was attributed different importance and perceived through different clustered patterns in Factors 1, 2, 3 and 5 respectively. Our main consideration in the interpretation was to describe participants’ beliefs about online performance assessment with distinctive characteristics, so we focused on the items to which participants attributed great importance.

Factor 1: Belief in Performance Assessment for BE Modules in Online Context

Factor 1 demonstrated participants’ belief in performance assessment for Business English modules in the online context. Fifteen participants agreed that performance assessment was applicable to Business English assessment because they believed it could create authentic tasks in business settings (2: +3; 1: +2). Therefore, they insisted on using performance assessment in the online context. They were much concerned about the quality of the assessment, and in particular the validity and fairness of performance assessment (22: +3; 27: +2) (see Table 2).

Table 2 Example item of factor arrays

This factor reflected Business English teachers’ rationale for using performance assessment in the online context. They would deliberately match assessment with learning outcomes, perceiving the teaching goal of Business English as coherent with the nature of performance assessment in enhancing students’ ability to perform particular behavioral tasks in authentic business settings. In interviews, some teachers stressed that “the final goal of Business English was to enhance students’ communicative and language proficiency in different business contexts” (I-T5). They thought it was difficult to evaluate dynamic communicative competence specific to an authentic setting only through oral or written tests, which were usually good at assessing language proficiency. As one participant said, “even well-designed oral or written exams might fail to simulate a real business context. You can imagine, in a real business deal both sides are flexible, and we cannot prepare all the possible situations” (I-T16). Such dynamic and flexible communication in authentic business contexts placed high demands on students, and correspondingly, performance assessment was the preferred assessment strategy in Business English courses.

Factor 1 also revealed participants’ concern about the quality of performance assessment from design to implementation. Ten participants agreed that performance assessment should be linked with learning outcomes, and twelve stated it should reflect important knowledge and skills (two participants scored 5). In interviews, they admitted they tried to “assess the required competence in an effective way” (I-T16). In performance assessment, they included “relevant criteria for language proficiency and communicative skills” and, meanwhile, paid attention to “conducting the assessment in a clear and just way” (I-T21). They believed that, like all other assessments, performance assessment should maintain clear criteria, which should be announced to students in advance. Moreover, performance assessment needed to create tasks similar to those in the real world. For instance, some participants mentioned that, in the scoring system, “new problems relating to the online context should be taken into consideration”. In the interviews, nineteen of the twenty-two participants reported that they actually spent much time on performance assessment with more detailed instruction in the online context.

Factor 2: Belief About the Role of ICT in Online Performance Assessment

Factor 2 revealed participants’ beliefs about ICT use in online performance assessment. In this factor, participants agreed that teachers’ belief in ICT as well as their ICT knowledge and skills would influence online performance assessment (7: +3; 8: +2), whereas they disagreed with including ICT learning competence in the assessment (6: − 2). This indicated that participants regarded ICT as the means rather than the end of online performance assessment. With regard to conducting online assessment via ICT, this factor revealed participants’ belief in the supportive role of ICT. Participants agreed that ICT allowed them to “give on-site individual direction” and to “display sample work via social media”. Moreover, ICT enabled students to “create digital individual records” such as auto-video recordings and digital portfolios. The detailed information from individualized interaction as well as from the digital records helped participants assess each student fairly in online performance assessment. With ICT, they could “follow students’ steps in their group project” (I-T16) and accordingly mark each student based on their individual record.

From the perspective of communication, participants agreed that ICT helped communication between students and teachers and collaboration among students in general (4: +2), but in the particular context of online performance assessment some participants thought ICT set barriers to communication (10: − 1). Such a contradiction led us to re-examine participants’ beliefs about using ICT in online performance assessment. On closer scrutiny, their opinions diverged with regard to the function of ICT in online assessment. Factor 2 was the only factor that could be split as a bipolar factor, representing a divergence in opinions. Further interviews confirmed this divergence. Some participants complained that ICT had brought about troubles such as an “unclear signal”, “sudden mute in the simulated meeting” and having to “redo the presentation due to improper recording in Zoom”. In comparison, other participants reported that ICT had enhanced the frequency of students’ communication: “Students have held group meetings to solve the problems in their project more often than before” (I-T20) and they were “obviously active in Wechat, and they’d like to post their ideas freely in group discussion” (I-T8).

Factor 3: Belief About Online Performance Assessment as an Interactive Process

Participants in this factor demonstrated that online performance assessment was not a one-off task. Instead, it consisted of several well-designed processes. In preparation, participants agreed that teachers should provide students with sufficient information about online performance assessment (25: +1). For example, scoring criteria should be delivered to students in advance (44: +1). However, participants differed on the extent to which students should be informed. For example, they held different opinions on whether teachers needed to give instruction in the preparation step (38: from − 5 to 4). During online assessment, some participants would display sample works or give individual instruction. On the contrary, others disagreed, believing that detailed and comprehensive directions might discourage students from completing online performance assessment. In terms of feedback, participants insisted that online performance assessment required teachers’ feedback (34: +2). Teachers “should not just post the results on the net”; they should explain to students why they assessed in this way. They also believed that online assessment helped teachers reflect on their teaching and, more importantly, know about students’ learning processes and outcomes (48: +3).

Participants also considered online performance assessment an interactive process among students. They agreed that online performance assessment focused on students’ communication (32: +2) in addition to language acquisition. With such a belief, they emphasized students’ active engagement and modified interaction in online performance assessment. However, they disagreed with the statement “student cooperation should be clearly required in the scoring scheme of performance assessment”, based on the idea that “communication is spontaneously conducted among students in the given business setting” (I-T21). Like Factor 2, this factor also showed participants’ worry about the barriers ICT raised in the interactive process of online performance assessment. They complained about problems that ICT brought to communication, such as “breakdown in Zoom meeting” and “unclear voice during the presentation”.

This factor also indicated participants’ belief in the relationship between online performance assessment and the learning outcomes set by Business English courses. Participants tried to represent, check and reflect on students’ learning outcomes throughout the process of online performance assessment. The marking criteria, in their eyes, should be “highly relevant to the learning outcomes” (I-T1). In the assessment, teachers marked students’ performance in terms of language and content, which “were defined by the dual goals of Business English learning” (I-T15). Feedback was viewed as an inseparable step of online performance assessment. All twenty-two participants reported that they had included feedback in their online performance assessment practices, and some of them even gave individual feedback via social media. They believed that students could know “how they performed in assessment” (I-T6) and “their strengths and weaknesses” (I-T1) in their learning. Meanwhile, teachers could know how they could “help students improve business English learning”.

Factor 5: Worries About Quality of Online Performance Assessment

This factor revealed participants’ worries about the quality of performance assessment when conducted in the online context. All the participants disagreed that the quality of online performance assessment was irrelevant to a favorable teaching environment (20: − 2). They believed the quality of performance assessment largely depended on authoritative performance assessment criteria and evaluation (21: +1). However, they were not confident in designing an appropriate performance assessment (19: − 3), and they did not believe they could make authoritative criteria (18: − 1) when performance assessment was conducted in the online context. They agreed that it was difficult for them, in this semester, to duplicate a complex assessment system in the online context, which needed “much energy and necessary training”. In addition, they worried that their beliefs about ICT use (8: +1) and their knowledge and skills in using ICT devices (7: +2) would reduce the quality of online performance assessment.

The interview data confirmed their worries. Most of the participants lacked confidence in conducting online performance assessment. For them, “the teaching has moved from classroom to online abruptly, so it left me little time to handle it” (T-1). In fact, when COVID-19 broke out, they were all on winter holiday and had hardly made any preparation for online teaching or online performance assessment. As one teacher said in interview, he “did not expect it lasted long, so I just prepared for the first three sessions when I was informed to teach online” (T-3). All the participants had received relevant training for online teaching and assessment, but “the training programmes were organized at the university level and targeted at specific teaching platforms (e.g. …)” (T-6). The training programmes failed to meet their needs since they “did not provide relevant information about performance assessment at all” (T-6). In their eyes, however, online performance assessment placed high demands on teachers’ knowledge of performance assessment (21: +1) and on their knowledge and skills of ICT use (7: +2). The gap between insufficient preparation and high demands led to their worries about the quality of online performance assessment. They lacked confidence in either designing appropriate online performance assessment (19: − 3) or making proper criteria (18: − 1) for it. To make things worse, unskillful ICT use deepened their worries. In interview, an experienced participant complained that she had “never met such an embarrassing and worrying situation before” when facing a “sudden mute in the simulated meeting” during online performance assessment (T-11).


This study investigated Business English teachers’ beliefs about online performance assessment with Q methodology. The results showed that participants held a strong belief in online performance assessment and in its role as a crucial link between teaching and learning. This finding is in line with previous studies on Business English teachers’ beliefs about performance assessment in the offline context (Strelchonok, 2018; Wang, 2009). It indicates that the participants had deep-rooted beliefs in performance assessment and its effective feedback to Business English teaching, and that they would therefore apply it even when the context changed from offline to online. This finding confirms the theoretical applicability of performance assessment in English courses by adding new evidence from the online context. Holding the belief that performance assessment targeted students’ learning outcomes, Business English teachers paid equal attention to both course goals: communicative competence and language acquisition. They regarded assessment as a process in which feedback was the essential linking element between teaching and learning.

Despite the participants' shared confirmation of ICT's supportive role, this study revealed different opinions on the degree of ICT integration with pedagogy in online performance assessment. A few participants thought ICT itself was part of online performance assessment, so ICT learning competence should be included in the learning outcomes. By contrast, most participants thought ICT was only a channel for performance assessment, and therefore ICT competence should not be included. This finding was consistent with previous research on ICT use, which distinguished weak from strong integration of technology and pedagogy in teaching (Fullan & Langworthy, 2014). Weak integration treats ICT as a pure technical skill, whereas strong integration treats ICT as a medium that facilitates subject learning (Fullan & Langworthy, 2014; Hennessy et al., 2007). In this study, participants who believed in the integration of ICT with performance assessment demonstrated greater willingness to use ICT. These differing opinions on ICT explained their different ICT uses in online performance assessment. Those who thought ICT itself was part of online performance assessment made full use of ICT to enhance communication with students by "giving on-site individual direction" and to "follow students' step". In addition, they encouraged students to use ICT to discuss, to solve problems, and to create individual digital records. In comparison, those who perceived ICT as merely the channel of performance assessment focused on ICT's technical features, such as "unclear signal" and "sudden mute", rather than on its potential applications in online performance assessment. They used ICT only because it was the sole means of implementing assessment during the COVID-19 pandemic; in their opinion, ICT was just a technological tool.

Conclusion and Implications

To conclude, this study has investigated Business English teachers' beliefs about online performance assessment with Q methodology. We have found that Business English teachers hold a strong belief in performance assessment and in the supportive role of ICT in the new online context. The investigation into Business English teachers' beliefs about online assessment adds new evidence to research on English language assessment. Moreover, Business English teachers' beliefs about online assessment, and particularly about the role of ICT in it, help explain their different practices: teachers who believed in the integration of ICT with pedagogy tended to use varied and effective methods in online performance assessment. From this perspective, English teachers need to develop technological knowledge and integrate it with pedagogy to improve ICT use in online English teaching and assessment.

The findings have implications for the professional development of English teachers in the post-pandemic era. They indicate that ICT knowledge and skills have become an integral element of English teachers' online teaching and assessment, and should therefore be included in teacher professional development. On the one hand, schools should provide the necessary ICT training to enhance teachers' proficiency in English teaching and assessment in both online and offline contexts. English teachers, on the other hand, should develop ICT use and integration as part of their professional development to ensure the quality of English teaching and assessment in different teaching environments.


References

  1. Adair-Hauck, B., Glisan, E. W., Koda, K., Swender, E. B., & Sandrock, P. (2006). The integrated performance assessment (IPA): Connecting assessment to instruction and learning. Foreign Language Annals, 39(3), 359–382.

  2. Banasick, S. (2019). KADE: A desktop application for Q methodology. Journal of Open Source Software, 4(36), 1360–1364.

  3. Caird, S., & Lane, A. (2015). Conceptualising the role of information and communication technologies in the design of higher education teaching models used in the UK. British Journal of Educational Technology, 46(1), 58–70.

  4. Clementi, D. (2003). Backwash design for performance assessment. Retrieved January 3, 2009, from

  5. Ernest, J. M. (2011). Using Q methodology as a mixed methods approach to study beliefs about early childhood education. International Journal of Multiple Research Approaches, 5(2), 223–237.

  6. Fullan, M., & Langworthy, M. (2014). A rich seam: How new pedagogies find deep learning. Pearson.

  7. Hawk, T. F. (2009). Scoring rubrics in the classroom: Using performance criteria for assessing and improving student Performance/Introduction to rubrics: An assessment tool to save grading time, convey effective feedback, and promote student learning. Academy of Management Learning & Education, 8(4), 612–614.

  8. Hennessy, S., Wishart, J., Whitelock, D., Deaney, R., Brawn, R., La Velle, L., McFarlane, A., Ruthven, K., & Winterbottom, M. (2007). Pedagogical approaches for technology-integrated science teaching. Computers & Education, 48(1), 137–152.

  9. Herman, J. L., Aschbacher, P. C., & Winters, L. (1992). A practical guide to alternative assessment. Association for Supervision and Curriculum Development.

  10. Hopârtean, A. M. (2020). Online business English examinations—Challenges and solutions. Review of Economic Studies and Research Virgil Madgearu, 13(2), 87–103.

  11. Howie, S. J. (2005). Contextual factors at the school and classroom level related to pupils’ performance in mathematics in South Africa. Educational Research and Evaluation, 11(2), 123–140.

  12. Irie, K., Ryan, S., & Mercer, S. (2018). Using Q methodology to investigate pre-service EFL teachers’ mindsets about teaching competences. Studies in Second Language Learning and Teaching, 8(3), 575–598.

  13. Koretz, D., Stecher, B., Klein, S., & McCaffrey, D. (1994). The evolution of a portfolio program: The impact and quality of the Vermont program in its second year (1992–1993) (CSE Tech. Rep. No. 385). Los Angeles: CRESST.

  14. Lim, C. P., & Khine, M. S. (2006). Managing teachers’ barriers to ICT integration in Singapore schools. Journal of Technology and Teacher Education, 14, 97–125.

  15. Lundberg, A. (2019). Teachers’ beliefs about multilingualism: Findings from Q method research. Current Issues in Language Planning, 20(3), 266–283.

  16. Lutfallah, S., & Buchanan, L. (2020). Quantifying subjective data using online Q-methodology software. The Mental Lexicon, 14(3), 415–423.

  17. Muhanna, W. (2012). Using online games for teaching English vocabulary for Jordanian students learning English as a foreign language. Journal of College Teaching & Learning (Online), 9(3), 235–240.

  18. Palak, D. (2021). Teachers' beliefs in relation to their instructional technology practices. Ph.D. thesis, West Virginia University. Retrieved April 21, 2021 from

  19. Petcovici, T. (2016). The use of information and communication technologies in language teaching. Timisoara: Tibiscus University in Timisoara, Faculty of Economics. Retrieved from

  20. Shih, R. (2017). The effect of English for Specific Purposes (ESP) learning: Language lab versus mobile-assisted learning. International Journal of Distance Education Technologies, 15(3), 15–30.

  21. Shinde, M. B., & Karekatti, T. K. (2012). Pre-service teachers’ beliefs about teaching English to primary school children. International Journal of Instruction, 5(1), 69–79.

  22. Stecher, B. M., & Mitchell, K. L. (1996). Portfolio-driven reform: Vermont teachers’ understanding of mathematical problem solving and related changes in classroom practice (CSE Tech. Rep. No. 400). Los Angeles: CRESST.

  23. Strelchonok, A. (2018). Assessment of case study method implementation in the Business English Teaching. Central Bohemia University.

  24. Tao, J., Zheng, C., Lu, Z., Liang, J.-C., & Chin-Chung, T. (2020). Cluster analysis on Chinese university students’ conceptions of English language learning and their online self-regulation. Australasian Journal of Educational Technology, 36(2), 105–119.

  25. Umoru, T. (2012). Barriers to the use of information and communication technologies in teaching and learning business education. American Journal of Business Education (AJBE), 5(5), 575–580.

  26. Valizadeh, M. (2019). EFL teachers’ writing assessment literacy, beliefs, and training needs in the context of Turkey. Advances in Language and Literary Studies, 10(6), 53–62.

  27. Wang, Y. Y. (2010). Business English teachers' performance assessment practices and their second language acquisition. Shanghai: World Publication Company.

  28. Watts, S., & Stenner, P. (2005). Doing Q methodological research: Theory, method and interpretation. Qualitative Research in Psychology, 2, 67–91.

  29. Watts, S., & Stenner, P. (2012). Doing Q methodological research: Theory, method and interpretation. Sage.

  30. Yang, Y., & Montgomery, D. (2013). Gaps or bridges in multicultural teacher education: A Q study of attitudes toward student diversity. Teaching and Teacher Education, 30, 27–37.

Author information



Corresponding author

Correspondence to Yanyan Wang.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Wu, P., Wang, Y. Investigating Business English Teachers’ Belief About Online Assessment: Q Methodology Conducted During COVID-19 Period. Asia-Pacific Edu Res (2021).


  • Teacher belief
  • Performance assessment
  • Information and Communication Technology (ICT)
  • Business English (BE)
  • Online context
  • Q methodology