[1] XU K, FENG Y S, ZHAO D Y. Semantic understanding of Chinese natural language questions over knowledge bases[J]. Acta Scientiarum Naturalium Universitatis Pekinensis, 2014,50(1):85-92. (in Chinese)
[2] WANG X J, ZHANG L, MA W Y. Answer ranking in community question-answering sites. US 8346701[P]. 2013-01-01.
[3] ANGELINO E, LARUS-STONE N, ALABI D, et al. Learning certifiably optimal rule lists for categorical data[J]. The Journal of Machine Learning Research, 2017,18(1):8753-8830.
[4] GREGOROMICHELAKI E, KEMPSON R M, PURVER M, et al. Incrementality and intention-recognition in utterance processing[J]. Dialogue & Discourse, 2011,2(1):199-233.
[5] RAMANAND J, BHAVSAR K, PEDANEKAR N. Wishful thinking-finding suggestions and ‘buy’ wishes from product reviews[C]// Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text. 2010:54-61.
[6] ZHANG D L, YAO L N, CHEN K X, et al. Making sense of spatio-temporal preserving representations for EEG-based human intention recognition[J]. IEEE Transactions on Cybernetics, 2019,50(7):3033-3044.
[7] CHEN H C. Consumption intention mining based on microblogs[D]. Harbin: Harbin Institute of Technology, 2014. (in Chinese)
[8] LI C, CHAI Y M, NAN X F, et al. Research on question classification methods based on deep learning[J]. Computer Science, 2016(12):115-119. (in Chinese)
[9] KIM Y. Convolutional neural networks for sentence classification[J]. arXiv preprint arXiv:1408.5882, 2014.
[10] HASHEMI H B, ASIAEE A, KRAFT R. Query intent detection using convolutional neural networks[C]// Proceedings of the International Conference on Web Search and Data Mining, Workshop on Query Understanding. 2016.
[11] RAVURI S, STOLCKE A. Recurrent neural network and LSTM models for lexical utterance classification[C]// Proceedings of the 16th Annual Conference of the International Speech Communication Association. 2015:1597-1600.
[12] DEY R, SALEM F M. Gate-variants of gated recurrent unit (GRU) neural networks[C]// Proceedings of the 2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS). 2017:1597-1600.
[13] RAVURI S, STOLCKE A. A comparative study of recurrent neural network models for lexical domain classification[C]// Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 2016:6075-6079.
[14] LIN Z H, FENG M W, SANTOS C N D, et al. A structured self-attentive sentence embedding[J]. arXiv preprint arXiv:1703.03130, 2017.
[15] CAI R, ZHU B, JI L, et al. An CNN-LSTM attention approach to understanding user query intent from online health communities[C]// Proceedings of the 2017 IEEE International Conference on Data Mining Workshops (ICDMW). 2017:430-437.
[16] DEVLIN J, CHANG M W, LEE K, et al. BERT: Pre-training of deep bidirectional transformers for language understanding[J]. arXiv preprint arXiv:1810.04805, 2018.
[17] SUN C, QIU X P, XU Y G, et al. How to fine-tune BERT for text classification?[C]// Proceedings of the China National Conference on Chinese Computational Linguistics. 2019:194-206.
[18] ETHAYARAJH K. How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings[J]. arXiv preprint arXiv:1909.00512, 2019.
[19] TANG M, GANDHI P, KABIR M A, et al. Progress notes classification and keyword extraction using attention-based deep learning models with BERT[J]. arXiv preprint arXiv:1910.05786, 2019.
[20] ALSENTZER E, MURPHY J R, BOAG W, et al. Publicly available clinical BERT embeddings[J]. arXiv preprint arXiv:1904.03323, 2019.
[21] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]// Proceedings of the 31st International Conference on Neural Information Processing Systems. 2017:6000-6010.
[22] ZHANG Z C, ZHANG Z W, ZHANG Z M. User intent classification based on IndRNN-Attention[J]. Journal of Computer Research and Development, 2019,56(7):1517-1524. (in Chinese)
[23] LIU Y H, OTT M, GOYAL N, et al. RoBERTa: A robustly optimized BERT pretraining approach[J]. arXiv preprint arXiv:1907.11692, 2019.
[24] SZE V, CHEN Y H, YANG T J, et al. Efficient processing of deep neural networks: A tutorial and survey[J]. Proceedings of the IEEE, 2017,105(12):2295-2329.