Abstract
Grammatical error correction aims to automatically correct ungrammatical sentences. Recent work has demonstrated the excellent capabilities of closed-source Large Language Models (LLMs, e.g., ChatGPT) in grammatical error correction, but the potential of open-source LLMs remains unexplored. In this paper, we introduce GrammarGPT, an open-source LLM, to preliminarily explore its potential for native Chinese grammatical error correction (CGEC). The core recipe of GrammarGPT is to leverage a hybrid dataset of ChatGPT-generated and human-annotated data. For grammatical errors with clues, we propose a heuristic method that guides ChatGPT to generate ungrammatical sentences by providing those clues. For grammatical errors without clues, we collect ungrammatical sentences from publicly available websites and correct them manually. In addition, we employ an error-invariant augmentation method to enhance the model's ability to correct native Chinese grammatical errors. We ultimately construct about 1k parallel sentence pairs and use them to fine-tune open-source LLMs (e.g., Phoenix, released by The Chinese University of Hong Kong, Shenzhen) with instruction tuning. Experimental results show that GrammarGPT significantly outperforms the existing SOTA system. Although GrammarGPT has 20x more parameters than the SOTA baseline, it requires 1200x less instruction-tuning data, illustrating the potential of open-source LLMs for native CGEC. Our GrammarGPT ranks 3rd in NLPCC 2023 SharedTask 1, demonstrating our approach's effectiveness. The code and data are available at https://github.com/FreedomIntelligence/GrammarGPT.
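To make the recipe concrete, below is a minimal Python sketch of the data-construction steps the abstract describes: clue-guided prompting of ChatGPT to produce ungrammatical sentences, error-invariant augmentation that varies error-irrelevant content (e.g., named entities) identically in source and target, and packing the resulting pairs into instruction-tuning records. The prompt wording, clue pairs, helper names, and record schema are all illustrative assumptions, not the authors' released implementation.

```python
# A minimal sketch (not the authors' released code) of the data recipe in the
# abstract: (1) clue-guided generation of ungrammatical sentences via ChatGPT,
# (2) error-invariant augmentation, (3) packing pairs for instruction tuning.
import random

# Assumed clue pairs for clue-carrying errors (e.g., redundant quantity markers).
REDUNDANT_CLUE_PAIRS = [
    ("约", "左右"),  # "approximately ... or so"
    ("超过", "多"),  # "more than ... -odd"
]

def build_generation_prompt(grammatical_sentence: str) -> str:
    """Build a clue-guided prompt (assumed wording) that asks ChatGPT to
    insert both clue words, making the sentence redundantly ungrammatical."""
    first, second = random.choice(REDUNDANT_CLUE_PAIRS)
    return (
        f"Rewrite the following Chinese sentence so that it uses both "
        f"'{first}' and '{second}' together, introducing a redundancy error "
        f"while keeping everything else unchanged:\n{grammatical_sentence}"
    )

def error_invariant_augment(source: str, target: str,
                            entity: str, substitute: str) -> tuple[str, str]:
    """Error-invariant augmentation: replace an error-irrelevant span (e.g.,
    a named entity) identically in source and target, so surface content
    varies while the grammatical error itself is preserved."""
    return source.replace(entity, substitute), target.replace(entity, substitute)

def to_instruction_record(source: str, target: str) -> dict:
    """Pack a parallel pair into an instruction-tuning record (assumed schema)."""
    return {
        "instruction": "Correct the grammatical errors in the following sentence.",
        "input": source,
        "output": target,
    }
```

Under these assumptions, augmenting each pair with a few entity substitutions is one way a corpus of only about 1k pairs can still expose the model to varied surface forms of the same error pattern.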
Acknowledgement
This work is supported by the National Natural Science Foundation of China (Grant No. 62271432) and the Guangdong Provincial Key Laboratory of Big Data Computing, The Chinese University of Hong Kong, Shenzhen (Grant No. B10120210117).
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Fan, Y., Jiang, F., Li, P., Li, H. (2023). GrammarGPT: Exploring Open-Source LLMs for Native Chinese Grammatical Error Correction with Supervised Fine-Tuning. In: Liu, F., Duan, N., Xu, Q., Hong, Y. (eds.) Natural Language Processing and Chinese Computing. NLPCC 2023. Lecture Notes in Computer Science, vol. 14304. Springer, Cham. https://doi.org/10.1007/978-3-031-44699-3_7
DOI: https://doi.org/10.1007/978-3-031-44699-3_7
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-44698-6
Online ISBN: 978-3-031-44699-3