[1] 汪岿,刘柏嵩. 文本分类研究综述[J]. 数据通信, 2019(3):37-47.
[2] HUQ M R, ALI A, RAHMAN A. Sentiment analysis on Twitter data using KNN and SVM[J]. International Journal of Advanced Computer Science and Applications, 2017,8(6): 19-25.
[3] RONG X. Word2vec parameter learning explained[J]. arXiv preprint arXiv:1411.2738, 2014.
[4] 何力,谭霜,项凤涛,等. 基于深度学习的文本分类技术研究进展[J/OL].计算机工程:1-15[2020-11-22]. https://doi.org/10.19678/j.issn.1000-3428.0059099.
[5] NAM J, KIM J, MENCIA E L, et al. Large-scale multi-label text classification—revisiting neural networks[C]// 2014 European Conference on Machine Learning and Knowledge Discovery in Databases. 2014:437-452.
[6] BENGIO Y, DUCHARME R, VINCENT P, et al. A neural probabilistic language model[J]. Journal of Machine Learning Research, 2003,3:1137-1155.
[7] KIM Y. Convolutional neural networks for sentence classification[J]. arXiv preprint arXiv:1408.5882, 2014.
[8] BAGHERI H, ISLAM M J. Sentiment analysis of Twitter data[J]. arXiv preprint arXiv:1711.10377, 2017.
[9] ZHANG Y, WALLACE B. A sensitivity analysis of (and practitioners’ guide to) convolutional neural networks for sentence classification[J]. arXiv preprint arXiv:1510.03820, 2015.
[10]PETERS M E, NEUMANN M, IYYER M, et al. Deep contextualized word representations[J]. arXiv preprint arXiv:1802.05365, 2018.
[11]DEVLIN J, CHANG M W, LEE K, et al. Bert: Pretraining of deep bidirectional transformers for language understanding[J]. arXiv preprint arXiv:1810.04805, 2018.
[12]LUONG M T, PHAM H, MANNING C D. Effective approaches to attention-based neural machine translation[J]. arXiv preprint arXiv:1508.04025, 2015.
[13]BAHDANAU D, CHO K, BENGIO Y. Neural machine translation by jointly learning to align and translate[J]. arXiv preprint arXiv:1409.0473, 2014.
[14]VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]// Proceedings of the 31st International Conference on Neural Information Processing. 2017: 6000-6010.
[15]HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016:770-778.
[16]IOFFE S,SZEGEDY C. Batch normalization: Accelerating deep network training by reducing internal covariate shift [C]// 2015 International Conference on Machine Learning. 2015:448-456.
[17]SANTURKAR S, TSIPRAS D, ILYAS A, et al. How does batch normalization help optimization?[C]//Advances in Neural Information Processing Systems. 2018:2483-2493.
[18]SCHOLZ R W, TIETJE O. Embedded Case Study Methods: Integrating Quantitative and Qualitative Knowledge[M]. Sage, 2002.
[19]ZHANG Z Y, HAN X, LIU Z Y, et al. ERNIE: Enhanced language representation with informative entities[J]. arXiv preprint arXiv:1905.07129, 2019.
[20]LAN Z, CHEN M, GOODMAN S, et al. Albert: A lite BERT for self-supervised learning of language representations[J]. arXiv preprint arXiv:1909.11942, 2019.
[21]YANG Z C, YANG D Y, DYER C, et al. Hierarchical attention networks for document classification[C]// Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2016:1480-1489.
[22]CHO K, VAN MERRIENBOER B, GULCEHRE C, et al. Learning phrase representations using RNN encoder-decoder for statistical machine translation[J]. arXiv preprint arXiv:1406.1078, 2014.
[23]方炯焜,陈平华,廖文雄. 结合GloVe和GRU的文本分类模型[J]. 计算机工程与应用, 2020,56(20):98-103.
|