Human-machine dialogue modality of English Language oral skills testing among Chinese EFL students


  • Tan Jin Jin, Centre for Research in Language and Linguistics, Faculty of Social Sciences and Humanities, Universiti Kebangsaan Malaysia, Malaysia
  • Imran Ho, Centre for Research in Language and Linguistics, Faculty of Social Sciences and Humanities, Universiti Kebangsaan Malaysia, Malaysia


With the development of computer technology and of interactive software, more and more second-language oral examinations have adopted "human-machine dialogue" as the main mode of testing. However, the results of these experiments have not been validated in the Chinese context. This study therefore aims to identify the factors affecting students' performance in different English oral tests among Chinese EFL learners. The study used an experimental design: a simulated spoken English test in the human-machine dialogue modality was designed and implemented. Five Chinese undergraduate students majoring in English language studies and five non-English majors were recruited to participate in the experiment, and six teachers with rich experience in teaching English as a second language served as examiners. The results show a significant difference in effective speech frequency between the two tests (Pearson value = .004; reliability r = 0.04), indicating significant reliability. In terms of hesitation, the durations of all subjects in Q1 and Q2 differed significantly (p < .01). Lexical errors and semantic errors were the most frequent mistakes among the students. Finally, the subjects showed a high level of anxiety. In the self-evaluation of speaking ability, only the intermediate-level learners showed a significant difference between the two tests in judging whether the test truly reflected their oral English ability. Because subjects in the human-machine dialogue modality receive no real-time feedback from a communication partner, they were seldom able to notice and self-correct grammatical errors while speaking; this study therefore recommends further research on human-machine dialogue testing. The study also analyzes the errors the subjects made in their oral expression and raises research questions about them.
The human-machine dialogue modality cannot guide examinees through real-time communication or stimulate them to demonstrate the second-language knowledge that the examiner intends to test.






How to Cite

Jin, T. J., & Ho, I. (2023). Human-machine dialogue modality of English Language oral skills testing among Chinese EFL students. Research Journal in Advanced Humanities, 4(1).