Academic Article
Comparison of Human and ChatGPT on Holistic and Analytic Scoring of College EFL Opinion Writing
- Publisher: 한국영미어문학회
- Authors: Minkyung Cho, Youmi Jang
- Publication info: 『영미어문학』 No. 158, pp. 113-136 (24 pages)
- Subject classification: Language and Literature > English Language and Literature
- Publication date: 2025.09.30

Abstract
Writing evaluation can be conducted using different approaches, such as holistic and analytic scoring, and by different raters, including human raters and machines. This study investigated inter-rater reliability in holistic and analytic scoring by human raters and ChatGPT, as well as the associations between the average scores assigned by the two rater types. Data consisted of one-paragraph opinion essays (n = 196) written by 28 South Korean college freshmen, which were scored by two human raters and by ChatGPT across two trials. Inter-rater reliability, independent-samples t-tests, and correlations were computed. Results indicated that both humans and ChatGPT demonstrated substantial consistency, with humans showing greater reliability in holistic judgments, while ChatGPT exhibited stronger reliability in the analytic categories of language and organization. Holistic scores did not differ significantly between the two rater types; however, humans tended to be stricter in content and organization, whereas ChatGPT was stricter in language and mechanics. Strong correlations across all scores further suggested that humans and ChatGPT produced comparable rank-order evaluations. Overall, these findings support the theoretical validity of incorporating AI into writing assessment and highlight its pedagogical potential as a complement to human raters in EFL writing evaluation.
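The abstract reports inter-rater reliability, independent-samples t-tests, and correlations between human and ChatGPT scores. The following is a minimal illustrative sketch of how such analyses could be run; the essay data, the 1-6 score scale, and the use of Pearson correlation as the consistency measure are assumptions for demonstration only, not the authors' actual data or procedure.

```python
# Illustrative sketch only: hypothetical scores, not the study's data.
import numpy as np
from scipy.stats import pearsonr, ttest_ind

rng = np.random.default_rng(0)

# Hypothetical holistic scores for 196 essays from a human rater and ChatGPT.
human = rng.integers(2, 7, size=196).astype(float)          # assumed 1-6 scale
chatgpt = np.clip(human + rng.normal(0, 0.7, size=196), 1, 6)

# Association between the two raters' scores (Pearson correlation).
r, p_r = pearsonr(human, chatgpt)
print(f"correlation between raters: r = {r:.2f} (p = {p_r:.3f})")

# Independent-samples t-test on mean scores, as named in the abstract.
t, p_t = ttest_ind(human, chatgpt)
print(f"mean difference: t = {t:.2f} (p = {p_t:.3f})")
```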
Table of Contents
1. Introduction
2. Literature Review
3. Methods
4. Results
5. Discussion and Conclusion
References