An Assessment of the Quality of Post-Edited Text From CAT Tools Compared to Conventional Human Translation: An Error Analysis Study

Authors

  • Hind S. Alsaif, Decision Support Centre, Royal Court
  • Ebtisam S. Aluthman, Princess Nourah bint Abdulrahman University

DOI:

https://doi.org/10.17507/jltr.1506.12

Keywords:

post-editing, traditional human translation, computer-assisted translation (CAT) tools, ATA framework, error analysis

Abstract

This experimental study evaluates the quality of post-edited texts, originally translated using computer-assisted translation (CAT) tools, against traditional human translation. It compares the quality of post-editing (PE) with traditional translation from scratch (TFS) in the context of Arabic–English translation, using the Phrase CAT tool. The main hypothesis posits that PE yields a final product whose quality is comparable or equivalent to that of TFS. Participants’ scores and error frequencies were evaluated using the American Translators Association framework for standardized error marking, and the two approaches were compared across the categories of terminology, word choice, mistranslation, addition/omission, spelling, punctuation, case, inconsistency, style, and grammar. Data from nine professional Saudi translators showed that PE generally outperformed TFS in terminology, spelling, punctuation, and case, whereas TFS exhibited strengths in the consistency, style, grammar, and literal-translation categories. Statistical analysis found no significant difference in mean error counts between PE and TFS; the observed disparity in means is therefore likely attributable to chance rather than to a substantive difference between the two approaches. These results support the hypothesis that PE yields quality comparable or equivalent to that of TFS. The implications highlight the need for CAT tool training and PE skills among translators to meet the demands of evolving translation technologies. Furthermore, this study underscores the importance of integrating PE training into translation curricula and organizing workshops to improve CAT tool usage.

Author Biography

Ebtisam S. Aluthman, Princess Nourah bint Abdulrahman University

Department of Applied Linguistics, College of Languages

References

Alanazi, M. (2019). The use of computer-assisted translation tools for Arabic translation: User evaluation, issues, and improvements [Doctoral dissertation, Kent State University].

Al-Jarf, R. (2017). Technology integration in translator training in Saudi Arabia. International Journal of Research in Engineering and Social Sciences, 7(3), 1–7.

Alkhatnai, M. (2021). Perceptions, skills, and technologies for the revitalization of translation industry in the post COVID-19 era: An empirical evidence from Saudi Arabia. Journal of Foreign Language Teaching and Translation Studies, 6(3), 71–96. https://doi.org/10.22034/EFL.2021.306093.1122

Allen, J. (2001). Postediting: An integrated part of a translation software program. Language International Magazine, 13(2), 26–29.

Allen, J. (2003). Post-editing. In H. Somers (Ed.), Computers and translation: A translator’s guide (p. 35). John Benjamins.

Alotaibi, H. M. (2017). Arabic-English parallel corpus: A new resource for translation training and language teaching. Arab World English Journal, 8(3). https://dx.doi.org/10.24093/awej/vol8no3.21

Alotaibi, H. M. (2020). Computer-assisted translation tools: An evaluation of their usability among Arab translators. Applied Sciences, 10(18). https://doi.org/10.3390/app10186295

Al-Rumaih, L. A. (2021). The integration of computer-aided translation tools in translator-training programs in Saudi universities: Toward a more visible state. Arab World English Journal for Translation & Literary Studies, 5(1). http://dx.doi.org/10.2139/ssrn.3802984

Automatic Language Processing Advisory Committee. (1966). Language and machines: Computers in translation and linguistics (Publication 1416). Division of Behavioral Sciences, National Academy of Sciences, National Research Council.

Bowker, L. (2015). Computer-aided translation: Translator training. In S. Chan (Ed.), Routledge encyclopedia of translation technology (pp. 88–104). Routledge.

Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1), 37–46.

Do Carmo, F., & Moorkens, J. (2020). Differentiating editing, post-editing and revision. In Translation revision and post-editing (pp. 35–49). Routledge.

Doyle, M. S. (2003). Translation pedagogy and assessment: Adopting ATA’s framework for standard error marking. The ATA Chronicle, 32(11), 21–28.

Elming, J., Balling, L. W., & Carl, M. (2014). Investigating user behaviour in post-editing and translation using the CASMACAT workbench. In S. O’Brien, L. W. Balling, M. Carl, M. Simard, & L. Specia (Eds.), Post-editing of machine translation: Processes and applications (pp. 147–169). Cambridge Scholars Publishing.

El-Zeini, N. T. (1994). Criteria for the evaluation of translation: A pragma-stylistic approach. Cairo University.

Esselink, B. (2000). A practical guide to localization. John Benjamins.

Fiederer, R., & O’Brien, S. (2009). Quality and machine translation: A realistic objective. The Journal of Specialised Translation, 11(11), 52–74.

Garcia, I. (2010). Is machine translation ready yet? Target – International Journal of Translation Studies, 22(1), 7–21. https://doi.org/10.1075/TARGET.22.1.02GAR

Garcia, I. (2012). A brief history of postediting and of research on postediting. Revista Anglo Saxonica, 291–310.

Guerberof, A. (2008). Productivity and quality in the post-editing of outputs from translation memories and machine translation. Localisation Focus – The International Journal of Localisation, 7(1), 11–21.

Guerberof, A. (2009). Productivity and quality in MT post-editing [Conference paper]. Universitat Rovira i Virgili, Spain. Retrieved July 17, 2021, from http://www.mt-archive.info/MTS-2009-Guerberof.pdf

House, J. (1997). Translation quality assessment: A model revisited. Gunter Narr.

Jia, Y., Carl, M., & Wang, X. (2019). How does the post-editing of neural machine translation compare with from-scratch translation? A product and process study. The Journal of Specialised Translation, 31(1), 60–86.

Krings, H. P., & Koby, G. S. (2001). Repairing texts: Empirical investigations of machine translation post-editing processes. Kent State University Press. https://doi.org/10.7202/008026AR

McElhaney, T., & Vasconcellos, M. (1988). The translator and the postediting experience. Technology as Translation Strategy, 2, 140–148. https://doi.org/10.1075/ata.ii.28mce

Morado Vázquez, L., Rodriguez Vazquez, S., & Bouillon, P. (2013). Comparing forum data post-editing performance using translation memory and machine translation output: A pilot study. In Proceedings of Machine Translation Summit XIV (pp. 249–256).

Moreno, M. D. (2020). Translation quality gained through the implementation of the ISO EN 17100:2015 and the usage of the blockchain: The case of sworn translation in Spain. Babel, 66(2), 226–253.

Mossop, B. (2017). Conflict over technology in the translation workplace. Multi-Languages Annual Conference 2017.

Munday, J. (2016). Introducing translation studies: Theories and applications. Routledge.

Newmark, P. (1995). A textbook of translation. Longman.

Nida, E. A. (1969). Toward a science of translating: With special reference to principles and procedures involved in Bible translating. Brill Archive.

Nida, E. A. (2001). Language and culture-contexts in translating. Shanghai Foreign Language Education Press.

Nord, C. (2001). Translating as a purposeful activity-functionalist approaches explained. Shanghai Foreign Language Education Press.

O’Brien, S. (2007). An empirical investigation of temporal and technical post-editing effort. Translation and Interpreting Studies – The Journal of the American Translation and Interpreting Studies Association, 2(1), 83–136.

O’Brien, S. (2011). Towards predicting post-editing productivity. Machine Translation, 25(3), 197–215. https://doi.org/10.1007/s10590-011-9096-7

O’Brien, S. (2012). Towards a dynamic quality evaluation model for translation. The Journal of Specialised Translation, 17(1), 55–77. Retrieved September 1, 2022, from https://aclanthology.org/www.mt-archive.info/10/JOST-2012-OBrien.pdf

Phelan, M. (2017). Analytical assessment of legal translation: A case study using the American Translators Association framework. The Journal of Specialised Translation, 27, 189–210.

Samman, H. M. (2022). Evaluating machine translation post-editing training in undergraduate translation programs—An exploratory study in Saudi Arabia [Doctoral dissertation, University of Southampton].

Secară, A. (2005). Translation evaluation: A state of the art survey. In Proceedings of the eCoLoRe/MeLLANGE Workshop (pp. 39–44). St. Jerome Publishing.

Somers, H. (Ed.). (2003). Computers and translation: A translator’s guide. John Benjamins.

Yamada, M. (2019). The impact of Google neural machine translation on post-editing by student translators. The Journal of Specialised Translation, 31, 87–106.

Yang, Y., Wang, X., & Yuan, Q. (2020). Measuring the usability of machine translation in the classroom context. Translation and Interpreting Studies, 16(1), 101–123. https://doi.org/10.1075/tis.18047.yan

Published

2024-11-01

Issue

Section

Articles