白子薇. 2022. 基于预训练语言模型的机器阅读理解技术研究. 北京: 北京邮电大学.
Bai Z W. 2022. Research on machine reading comprehension technology based on pre-trained language models. Beijing: Beijing University of Posts and Telecommunications. [in Chinese]
陈志泊, 李钰曼, 许福, 等. 2020. 基于TextRank和簇过滤的林业文本关键信息抽取研究. 农业机械学报, 51(5): 207–214, 172. doi: 10.6041/j.issn.1000-1298.2020.05.023
Chen Z B, Li Y M, Xu F, et al. 2020. Key information extraction of forestry text based on TextRank and clusters filtering. Transactions of the Chinese Society for Agricultural Machinery, 51(5): 207–214, 172. doi: 10.6041/j.issn.1000-1298.2020.05.023 [in Chinese]
崔晓晖, 师栋瑜, 陈志泊, 等. 2019. 基于Spark框架XGBoost的林业文本并行分类方法研究. 农业机械学报, 50(6): 280–287. doi: 10.6041/j.issn.1000-1298.2019.06.032
Cui X H, Shi D Y, Chen Z B, et al. 2019. Parallel forestry text classification technology based on XGBoost in Spark framework. Transactions of the Chinese Society for Agricultural Machinery, 50(6): 280–287. doi: 10.6041/j.issn.1000-1298.2019.06.032 [in Chinese]
郭肇毅. 2022. 引入类别关键词的朴素贝叶斯林业文本分类. 乐山师范学院学报, 37(8): 39–43.
Guo Z Y. 2022. Naive Bayes forestry text classification with category keywords. Journal of Leshan Normal University, 37(8): 39–43. [in Chinese]
李钰曼. 2020. 面向林业文本的关键信息抽取研究. 北京: 北京林业大学.
Li Y M. 2020. Research on key information extraction for forestry text. Beijing: Beijing Forestry University. [in Chinese]
王明月. 2017. 基于深度学习的林业信息文本分类算法研究. 哈尔滨: 东北林业大学.
Wang M Y. 2017. Research on forestry information text classification algorithm based on deep learning. Harbin: Northeast Forestry University. [in Chinese]
Araci D. 2019. FinBERT: financial sentiment analysis with pre-trained language models. arXiv preprint arXiv:1908.10063.
Beltagy I, Lo K, Cohan A. 2019. SciBERT: a pretrained language model for scientific text. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Association for Computational Linguistics, 3615–3620.
Cui Y M, Che W X, Wang S J, et al. 2022. LERT: a linguistically-motivated pre-trained language model. arXiv preprint arXiv:2211.05344.
Devlin J, Chang M W, Lee K, et al. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Association for Computational Linguistics, 4171–4186.
Guo Z Y. 2022. Forestry text classification based on BERT and KNN. 2022 International Conference on Information Technology, Communication Ecosystem and Management (ITCEM), 61–65.
Gupta T, Zaki M, Anoop Krishnan N M, et al. 2022. MatSciBERT: a materials domain language model for text mining and information extraction. npj Computational Materials, 8: 102. doi: 10.1038/s41524-022-00784-w
Gururangan S, Marasović A, Swayamdipta S, et al. 2020. Don’t stop pretraining: adapt language models to domains and tasks. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 8342–8360.
Lee J, Yoon W, Kim S, et al. 2020. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4): 1234–1240. doi: 10.1093/bioinformatics/btz682
Lewis M, Liu Y, Goyal N, et al. 2020. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 7871–7880.
Liu Y H, Ott M, Goyal N, et al. 2019. RoBERTa: a robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Mikolov T, Karafiát M, Burget L, et al. 2010. Recurrent neural network based language model. Interspeech 2010, 11th Annual Conference of the International Speech Communication Association, 1045–1048.
Miller T, Laparra E, Bethard S. 2021. Domain adaptation in practice: lessons from a real-world information extraction pipeline. Proceedings of the Second Workshop on Domain Adaptation for NLP. Association for Computational Linguistics, 105–110.
Patil S. 2020. Question generation using transformers. https://github.com/patil-suraj/question_generation.
Peng F C, Schuurmans D. 2003. Combining naive Bayes and n-gram language models for text classification. Advances in Information Retrieval (ECIR 2003), Lecture Notes in Computer Science. Berlin, Heidelberg: Springer, 335–350.
Radford A, Narasimhan K, Salimans T, et al. 2018. Improving language understanding by generative pre-training. https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf.
Rasmy L, Xiang Y, Xie Z Q, et al. 2021. Med-BERT: pretrained contextualized embeddings on large-scale structured electronic health records for disease prediction. npj Digital Medicine, 4: 86. doi: 10.1038/s41746-021-00455-y
Rezayi S, Liu Z L, Wu Z H, et al. 2022. AgriBERT: knowledge-infused agricultural language models for matching food and nutrition. Proceedings of the 31st International Joint Conference on Artificial Intelligence (IJCAI 2022). International Joint Conferences on Artificial Intelligence, 5150–5156.
Thompson B, Gwinnup J, Khayrallah H, et al. 2019. Overcoming catastrophic forgetting during domain adaptation of neural machine translation. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Association for Computational Linguistics, 2062–2068.
Touvron H, Lavril T, Izacard G, et al. 2023. LLaMA: open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
Xue L T, Constant N, Roberts A, et al. 2021. mT5: a massively multilingual pre-trained text-to-text transformer. Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, 483–498.
Yue Q, Li X. 2021a. Chinese forestry knowledge graph construction based on BERT and bidirectional RNN. Journal of Inner Mongolia University (Natural Science Edition), 52(2): 176–184.
Yue Q, Li X, Li D. 2021b. Chinese relation extraction on forestry knowledge graph construction. Computer Systems Science and Engineering, 37(3): 423–442. doi: 10.32604/csse.2021.014448
Zhao M X, Li D, Long Y S. 2021. Forestry big data platform by knowledge graph. Journal of Forestry Research, 32(3): 1305–1314. doi: 10.1007/s11676-020-01130-w
Zhao W X, Zhou K, Li J Y, et al. 2023. A survey of large language models. arXiv preprint arXiv:2303.18223.
Zhou H Y, Chen X Y, Zhang Y H, et al. 2022. Generalized radiograph representation learning via cross-supervision between images and free-text radiology reports. Nature Machine Intelligence, 4: 32–40. doi: 10.1038/s42256-021-00425-9