COMPARING GRAMMARLY AND CHATGPT FOR AUTOMATED WRITING EVALUATION OF ESL LEARNERS
DOI: https://doi.org/10.58800/bujhss.v7i2.273
Keywords: Automated Writing Evaluation, ChatGPT, Grammarly, ESL Learner
Abstract
Despite the growing prevalence of AI-driven tools in language education, there is limited research on their implementation within the Pakistani context. Grounded in Technology-Enhanced Language Learning (TELL) and Second Language Acquisition (SLA) theory, this study examines the efficacy of Automated Writing Evaluation (AWE) tools, specifically Grammarly and ChatGPT, in enhancing the writing skills of English as a Second Language (ESL) learners. The primary objectives are to assess the impact of these tools on writing proficiency and to explore learners' perceptions of feedback quality. Employing a mixed-methods approach, the study integrates quantitative analysis of writing improvement with qualitative insights into learner preferences, offering a comprehensive understanding of these tools' roles in the ESL classroom. The scope encompasses ESL learners in Pakistan, focusing on how such AI tools can be effectively integrated to improve writing skills. Results indicated that although the ChatGPT group had slightly higher mean ranks than the Grammarly group, the differences in writing performance were not statistically significant, with p-values for the pre-test (p = 0.276) and post-test (p = 0.398) both exceeding the 0.05 threshold. Nonetheless, learners reported varied preferences, with some favoring Grammarly's accuracy and others valuing ChatGPT's comprehensive feedback. The study underscores the complementary nature of these tools and advocates for their informed incorporation into ESL writing instruction.
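The abstract reports group differences in terms of mean ranks and p-values, which is consistent with a non-parametric rank-based comparison such as the Mann-Whitney U test, although the abstract does not name the test used. The sketch below, using purely illustrative scores rather than study data, shows how such a comparison might be computed; the group names and values are assumptions for demonstration only.

```python
# Minimal sketch of a rank-based two-group comparison (Mann-Whitney U assumed;
# the abstract does not specify the test). Scores below are illustrative only,
# not data from the study.
import numpy as np
from scipy.stats import mannwhitneyu, rankdata

# Hypothetical post-test writing scores for the two groups
chatgpt_scores = np.array([68, 72, 75, 70, 74, 69, 73])
grammarly_scores = np.array([66, 71, 69, 70, 72, 67, 68])

# Two-sided Mann-Whitney U test comparing the two score distributions
stat, p_value = mannwhitneyu(chatgpt_scores, grammarly_scores, alternative="two-sided")

# Mean ranks over the pooled sample, mirroring how results are reported
pooled = np.concatenate([chatgpt_scores, grammarly_scores])
ranks = rankdata(pooled)
mean_rank_chatgpt = ranks[: len(chatgpt_scores)].mean()
mean_rank_grammarly = ranks[len(chatgpt_scores):].mean()

print(f"U = {stat:.1f}, p = {p_value:.3f}")
print(f"Mean rank (ChatGPT):   {mean_rank_chatgpt:.2f}")
print(f"Mean rank (Grammarly): {mean_rank_grammarly:.2f}")
# A p-value above 0.05, as reported in the study (pre-test p = 0.276,
# post-test p = 0.398), would indicate no statistically significant
# difference between the groups.
```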
License
Copyright (c) 2024 Bahria University Journal of Humanities & Social Sciences
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.