Abstract
Ancient Chinese Reading Comprehension (ACRC) is challenging because of the absence of datasets and the difficulty of understanding ancient languages. Within ACRC, entire-span-regarded (Entire spaN regarDed, END) questions are especially demanding because of the input-length limitation of the seminal BERT models, which handle modern-language reading comprehension efficiently. To alleviate the dataset-absence issue, this paper builds a new dataset, ACRE (Ancient Chinese Reading-comprehension End-question). To tackle long inputs, this paper proposes a non-trivial model named EVERGREEN (EVidence-first bERt encodinG with entiRE-tExt coNvolution), based on the convolution of multiple encoders that are BERT descendants. Besides demonstrating the effectiveness of compressing encodings via convolution, our experimental results for ACRC also show that, first, neither pre-trained ancient-Chinese language models nor long-text-oriented transformers realize their potential; second, the top evidence sentence combined with distributed sentences makes a better input for EVERGREEN than the top-n evidence sentences; and third, classical convolution outperforms its variants, including dynamic convolution and multi-scale convolution.
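Since the source code is to be released only after publication (Note 1), the following PyTorch sketch illustrates, under stated assumptions, the two mechanisms the abstract names: evidence-first input selection (top-1 evidence sentence plus distributed sentences) and a classical convolution that compresses per-segment BERT encodings into one entire-text representation. The class and function names, the shared bert-base-chinese encoder, the max-pooling, and the multiple-choice scoring head are all illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the ideas in the abstract, NOT the authors' released
# EVERGREEN code. All class, function, and parameter names are assumptions.
import torch
import torch.nn as nn
from transformers import BertModel


def evidence_first_inputs(sentences, question_sims, n_seg):
    """Hypothetical input builder: the single top-scoring evidence sentence
    plus sentences sampled evenly across the passage (the paper's second
    finding), rather than the top-n evidence sentences."""
    top = max(range(len(sentences)), key=lambda i: question_sims[i])
    step = max(1, len(sentences) // max(1, n_seg - 1))
    distributed = [sentences[i] for i in range(0, len(sentences), step)]
    return [sentences[top]] + distributed[: n_seg - 1]


class SegmentConvReader(nn.Module):
    """Hypothetical reader: one shared BERT encodes each segment, and a
    classical 1-D convolution over the segment axis compresses the stacked
    encodings into one representation per answer option."""

    def __init__(self, bert_name="bert-base-chinese", conv_out=256, kernel=3):
        super().__init__()
        self.encoder = BertModel.from_pretrained(bert_name)
        hidden = self.encoder.config.hidden_size
        self.conv = nn.Conv1d(hidden, conv_out, kernel_size=kernel,
                              padding=kernel // 2)
        self.scorer = nn.Linear(conv_out, 1)  # one logit per answer option

    def forward(self, input_ids, attention_mask):
        # input_ids: (n_options, n_segments, seq_len); each segment pairs one
        # answer option with a slice of the selected sentences
        n_opt, n_seg, seq_len = input_ids.shape
        out = self.encoder(input_ids.view(-1, seq_len),
                           attention_mask=attention_mask.view(-1, seq_len))
        cls = out.pooler_output.view(n_opt, n_seg, -1)  # segment encodings
        feats = self.conv(cls.transpose(1, 2))          # (n_opt, conv_out, n_seg)
        return self.scorer(feats.max(dim=-1).values).squeeze(-1)  # (n_opt,)


# shape check with random ids (21128 = bert-base-chinese vocabulary size)
model = SegmentConvReader()
ids = torch.randint(0, 21128, (4, 5, 128))
print(model(ids, torch.ones_like(ids)).shape)  # torch.Size([4])
```

A shared encoder keeps the parameter count at a single BERT while the convolution, rather than the encoder, bears the burden of the long input; this matches the abstract's claim that compressing encodings via convolution is what makes END questions tractable.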
Notes
- 1. Both ACRE and the source code of EVERGREEN will be released on GitHub after publication.
- 2. See eea.gd.gov.cn.
- 3. The GuwenBERT model is available on GitHub.
- 4.
- 5.
- 6. Thanks to the anonymous NeurIPS reviewer. Although the Chinese renaissance around 1920 can be taken as the boundary between ancient and modern Chinese, fiction written in or after the Ming dynasty is outside this scope.
Acknowledgements
This work was supported by the Guangdong Basic and Applied Basic Research Foundation, China (Grant No. 2021A1515012556).
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Rao, D., Huang, G., Jiang, Z. (2024). Ancient Chinese Machine Reading Comprehension Exception Question Dataset with a Non-trivial Model. In: Liu, F., Sadanandan, A.A., Pham, D.N., Mursanto, P., Lukose, D. (eds) PRICAI 2023: Trends in Artificial Intelligence. PRICAI 2023. Lecture Notes in Computer Science, vol 14326. Springer, Singapore. https://doi.org/10.1007/978-981-99-7022-3_14
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-7021-6
Online ISBN: 978-981-99-7022-3
eBook Packages: Computer Science (R0)