[1] WANG D S, WANG W M, WANG S, et al. A survey of natural language understanding methods for restricted-domain question answering systems[J]. Computer Science, 2017,44(8):1-8.
[2] GUO T Y, PENG M, YI M L, et al. Research progress of automatic question answering in natural language processing[J]. Journal of Wuhan University (Natural Science Edition), 2019,65(5):417-426.
[3] YIH W T, CHANG M W, MEEK C, et al. Question answering using enhanced lexical semantic models[C]// Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics. 2013,1:1744-1753.
[4] BARRÓN-CEDEÑO A, BONADIMAN D, DA SAN MARTINO G, et al. ConvKN at SemEval-2016 task 3: Answer and question selection for question answering on Arabic and English fora[C]// Proceedings of the 10th International Workshop on Semantic Evaluation. 2016:896-903.
[5] YAO X C, VAN DURME B. Information extraction over structured data: Question answering with Freebase[C]// Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. 2014,1:956-966.
[6] BROMLEY J, GUYON I, LECUN Y, et al. Signature verification using a "Siamese" time delay neural network[C]// Proceedings of the 6th International Conference on Neural Information Processing Systems. 1993:737-744.
[7] LUAN K X, SUN C J, LIU B Q, et al. Automatic answer extraction method based on intra-sentence attention mechanism[J]. Intelligent Computer and Applications, 2017,7(5):87-91.
[8] GRAVES A, SCHMIDHUBER J. Framewise phoneme classification with bidirectional LSTM and other neural network architectures[J]. Neural Networks, 2005,18(5-6):602-610.
[9] TAN M, DOS SANTOS C, XIANG B, et al. LSTM-based deep learning models for non-factoid answer selection[C]// Proceedings of the International Conference on Learning Representations. 2016.
[10] YU B G, XU Q T. Research on answer selection algorithm based on co-attention mechanism[C]// Proceedings of the 13th (2018) Annual Conference of Chinese Management. 2018:534-540.
[11] YIN W P, SCHÜTZE H, XIANG B, et al. ABCNN: Attention-based convolutional neural network for modeling sentence pairs[J]. Transactions of the Association for Computational Linguistics, 2016,4:259-272.
[12] WANG S H, JIANG J. A compare-aggregate model for matching text sequences[J]. arXiv preprint arXiv:1611.01747, 2016.
[13] WANG Z G, MI H T, ITTYCHERIAH A. Sentence similarity learning by lexical decomposition and composition[C]// Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. 2016:1340-1349.
[14] HOCHREITER S, SCHMIDHUBER J. Long short-term memory[J]. Neural Computation, 1997,9(8):1735-1780.
[15] CHEN Q, ZHU X D, LING Z H, et al. Enhanced LSTM for natural language inference[C]// Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. 2017:1657-1668.
[16] WANG Z G, HAMZA W, FLORIAN R. Bilateral multi-perspective matching for natural language sentences[C]// Proceedings of the 26th International Joint Conference on Artificial Intelligence. 2017:4144-4150.
[17] RAFFEL C, ELLIS D P W. Feed-forward networks with attention can solve some long-term memory problems[J]. arXiv preprint arXiv:1512.08756, 2015.
[18] MIKOLOV T, SUTSKEVER I, CHEN K, et al. Distributed representations of words and phrases and their compositionality[C]// Proceedings of the 26th International Conference on Neural Information Processing Systems. 2013,2:3111-3119.
[19] PENNINGTON J, SOCHER R, MANNING C D. GloVe: Global vectors for word representation[C]// Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). 2014:1532-1543.
[20] CHO K, VAN MERRIËNBOER B, GULCEHRE C, et al. Learning phrase representations using RNN encoder-decoder for statistical machine translation[C]// Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). 2014:1724-1734.
[21] ZHAO S S. Research on answer selection and ranking combining deep learning with multiple features[D]. Harbin: Harbin Institute of Technology, 2016.
[22] ZHANG X W. Research on candidate answer sentence selection based on deep learning[D]. Guangzhou: Guangdong University of Technology, 2019.
[23] XIONG X, LIU B Q, WU X H. Research on answer selection methods based on attention mechanism[J]. Intelligent Computer and Applications, 2018,8(6):90-93.
[24] KINGMA D, BA J. Adam: A method for stochastic optimization[J]. arXiv preprint arXiv:1412.6980, 2014.
[25] PETERS M E, NEUMANN M, IYYER M, et al. Deep contextualized word representations[C]// Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2018:2227-2237.
[26] DEVLIN J, CHANG M W, LEE K, et al. BERT: Pre-training of deep bidirectional transformers for language understanding[J]. arXiv preprint arXiv:1810.04805, 2018.