Computer and Modernization ›› 2023, Vol. 0 ›› Issue (02): 58-61.
Online: 2023-04-10
Published: 2023-04-10
XIE Shi-chao, HUANG Wei, REN Xiang-hui. A Text Entity Linking Method Based on BERT[J]. Computer and Modernization, 2023, 0(02): 58-61.
[1] RAU L F. Extracting company names from text[C]// Proceedings of the Seventh IEEE Conference on Artificial Intelligence Applications. 1991:29-32.
[2] ZHOU G, SU J. Named entity recognition using an HMM-based chunk tagger[C]// Proceedings of the 40th Annual Meeting on Association for Computational Linguistics. 2002:473-480.
[3] MALOUF R. Markov models for language-independent named entity recognition[C]// Proceedings of the 6th Conference on Natural Language Learning - Volume 20. 2002:1-4.
[4] NADEAU D, TURNEY P D, MATWIN S. Unsupervised named-entity recognition: Generating gazetteers and resolving ambiguity[C]// Proceedings of the 19th International Conference on Advances in Artificial Intelligence: Canadian Society for Computational Studies of Intelligence. 2006:266-277.
[5] LI Y, BONTCHEVA K, CUNNINGHAM H. SVM based learning system for information extraction[C]// Proceedings of the First International Conference on Deterministic and Statistical Methods in Machine Learning. 2004:319-339.
[6] LIU S Y, TANG B Z, CHEN Q C, et al. Effects of semantic features on machine learning-based drug name recognition systems: Word embeddings vs. manually constructed dictionaries[J]. Information (Switzerland), 2015,6(4):848-865.
[7] SEGURA-BEDMAR I, MARTINEZ P, ZAZO M H. SemEval-2013 task 9: Extraction of drug-drug interactions from biomedical texts (DDIExtraction 2013)[C]// Proceedings of the 7th International Workshop on Semantic Evaluation (SemEval 2013). 2013:341-350.
[8] BENGIO Y, DUCHARME R, VINCENT P. A neural probabilistic language model[C]// Proceedings of the 13th International Conference on Neural Information Processing Systems. 2000:893-899.
[9] PETERS M E, NEUMANN M, IYYER M, et al. Deep contextualized word representations[J]. arXiv preprint arXiv:1802.05365, 2018.
[10] SARZYNSKA-WAWER J, WAWER A, PAWLAK A, et al. Detecting formal thought disorder by deep contextualized word representations[J]. Psychiatry Research, 2021,304. DOI:10.1016/j.psychres.2021.114135.
[11] MAO J, XU W, YANG Y, et al. Deep captioning with multimodal recurrent neural networks (m-RNN)[J]. arXiv preprint arXiv:1412.6632, 2014.
[12] CHO K, VAN MERRIENBOER B, GULCEHRE C, et al. Learning phrase representations using RNN encoder-decoder for statistical machine translation[J]. arXiv preprint arXiv:1406.1078, 2014.
[13] SHERSTINSKY A. Fundamentals of recurrent neural network (RNN) and long short-term memory (LSTM) network[J]. Physica D: Nonlinear Phenomena, 2020,404. DOI:10.1016/j.physd.2019.132306.
[14] MIAO Y, GOWAYYED M, METZE F. EESEN: End-to-end speech recognition using deep RNN models and WFST-based decoding[C]// 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU). 2015:167-174.
[15] WILLIAMS G, BAXTER R, HE H, et al. A comparative study of RNN for outlier detection in data mining[C]// Proceedings of the 2002 IEEE International Conference on Data Mining (ICDM 2002). 2002:709-712.
[16] JADERBERG M, SIMONYAN K, ZISSERMAN A, et al. Spatial transformer networks[J]. arXiv preprint arXiv:1506.02025, 2015.
[17] KITAEV N, KAISER Ł, LEVSKAYA A. Reformer: The efficient transformer[J]. arXiv preprint arXiv:2001.04451, 2020.
[18] RADFORD A, NARASIMHAN K, SALIMANS T, et al. Improving language understanding by generative pre-training[EB/OL]. [2022-05-17]. https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf.
[19] ZHANG Y Z, SUN S Q, GALLEY M, et al. DialoGPT: Large-scale generative pre-training for conversational response generation[J]. arXiv preprint arXiv:1911.00536, 2019.
[20] ETHAYARAJH K. How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings[C]// Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). 2019:55-65.
[21] FLORIDI L, CHIRIATTI M. GPT-3: Its nature, scope, limits, and consequences[J]. Minds and Machines, 2020,30(4):681-694.
[22] DEVLIN J, CHANG M, LEE K, et al. BERT: Pre-training of deep bidirectional transformers for language understanding[J]. arXiv preprint arXiv:1810.04805, 2018.
[23] JOVCIC D. Bidirectional, high-power DC transformer[J]. IEEE Transactions on Power Delivery, 2009,24(4):2276-2283.
[24] SUN F, LIU J, WU J, et al. BERT4Rec: Sequential recommendation with bidirectional encoder representations from transformer[C]// Proceedings of the 28th ACM International Conference on Information and Knowledge Management. 2019:1441-1450.
[25] ZHANG X X, WEI F R, ZHOU M. HIBERT: Document level pre-training of hierarchical bidirectional transformers for document summarization[J]. arXiv preprint arXiv:1905.06566, 2019.
[26] ZHAN Fei, ZHU Yan-hui, LIANG Wen-tong, et al. A short text entity linking method based on multi-task learning[J]. Computer Engineering, 2022,48(3):315-320.