[1] 李慕. 结合自动摘要技术的文本推荐方法研究及应用[D]. 武汉:武汉工程大学, 2017.
[2] 沈华东,彭敦陆. AM-BRNN:一种基于深度学习的文本摘要自动抽取模型[J]. 小型微型计算机系统, 2018,39(6):1184-1189.
[3] BHASKAR P, BANDYOPADHYAY S. A query focused multi document automatic summarization[C]// Proceedings of the 24th Pacific Asia Conference on Language, Information and Computation. 2010:545-554.
[4] 叶静. 面向多文本集的比较摘要研究[D]. 长沙:国防科学技术大学, 2012.
[5] MIHALCEA R. Graph-based ranking algorithms for sentence extraction, applied to text summarization[C]// Proceedings of the ACL Interactive Poster and Demonstration Sessions. 2004:170-173.
[6] BADRINATH R, VENKATASUBRAMANIYAN S, MADHAVAN C E V. Improving query focused summarization using look-ahead strategy[C]// European Conference on Information Retrieval. 2011:641-652.
[7] YANG L, CAI X. Semi-supervised co-clustering for query-oriented theme-based summarization[J]. Research Journal of Applied Sciences, Engineering and Technology, 2012,4(18):3410-3414.
[8] RUSH A M, CHOPRA S, WESTON J. A neural attention model for abstractive sentence summarization[J/OL]. 2015, arXiv: 1509.00685, (2015-09-03)[2019-06-01]. https://arxiv.org/pdf/1509.00685.pdf.
[9] DEVLIN J, CHANG M W, LEE K, et al. BERT: Pre-training of deep bidirectional transformers for language understanding[J/OL]. 2018, arXiv: 1810.04805, (2018-10-11)[2019-06-01]. https://arxiv.org/abs/1810.04805.
[10] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]// Advances in Neural Information Processing Systems. 2017:5998-6008.
[11] SUTSKEVER I, VINYALS O, LE Q V. Sequence to sequence learning with neural networks[C]// Advances in Neural Information Processing Systems. 2014:3104-3112.
[12] SENNRICH R, HADDOW B. Linguistic input features improve neural machine translation[J/OL]. 2016, arXiv: 1606.02892, (2016-07-27)[2019-06-01]. https://arxiv.org/pdf/1606.02892v2.pdf.
[13] 蒲梅,周枫,周晶晶,等. 基于加权TextRank的新闻关键事件主题句提取[J]. 计算机工程, 2017,34(8):219-224.
[14] MIKOLOV T, SUTSKEVER I, CHEN K, et al. Distributed representations of words and phrases and their compositionality[C]// Advances in Neural Information Processing Systems. 2013:3111-3119.
[15] 洪冬梅. 基于LSTM的自动文本摘要技术研究[D]. 广州:华南理工大学, 2018.
[16] 唐晓波,翟夏普. 基于混合机器学习模型的多文档自动摘要[J]. 情报理论与实践, 2019,42(2):145-150.
[17] NEMA P, KHAPRA M, LAHA A, et al. Diversity driven attention model for query-based abstractive summarization[J/OL]. 2017, arXiv: 1704.08300, (2018-06-07)[2019-06-01]. https://arxiv.org/pdf/1704.08300.pdf.
[18] 明拓思宇,陈鸿昶. 文本摘要研究进展与趋势[D]. 郑州:国家数字交换系统工程技术研究中心, 2018.
[19] 官宸宇. 面向事件的社交媒体文本自动摘要研究[D]. 武汉:武汉大学, 2017.
[20] 郭捷. 基于网络评论的情感分类技术的研究及应用[D]. 成都:电子科技大学, 2018.
[21] 徐立鑫. 面向短文本流摘要抽取系统的在线学习技术[D]. 北京:北京邮电大学, 2015.
[22] MEMISEVIC R, ZACH C, POLLEFEYS M, et al. Gated softmax classification[C]// Advances in Neural Information Processing Systems. 2010:1603-1611.
[23] LIN C Y, HOVY E. Automatic evaluation of summaries using n-gram co-occurrence statistics[C]// Proceedings of 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics. 2003:150-157.
[24] PENNINGTON J, SOCHER R, MANNING C. GloVe: Global vectors for word representation[C]// Proceedings of 2014 Conference on Empirical Methods in Natural Language Processing. 2014:1532-1543.