[1] TIAN Guihua. Virtue education: a new perspective on moral education[J]. School Party Building and Ideological Education, 2012(1):26-28.
[2] WANG Zhihua, LIU Shaoting, LUO Qi. KNN classification algorithm based on improved K-modes clustering[J]. Computer Engineering and Design, 2019,40(8):2228-2234.
[3] GUO Chaolei, CHEN Junhua. Research on Chinese text classification based on SA-SVM[J]. Computer Applications and Software, 2019,36(3):277-281.
[4] BLEI D M, NG A Y, JORDAN M I. Latent Dirichlet allocation[J]. Journal of Machine Learning Research, 2003,3:993-1022.
[5] SONG Yuting, XU Dehua. Research on Chinese text classification based on LDA and SVM[J]. Modern Computer (Professional Edition), 2016(5):18-23.
[6] YU Pingping, NI Jiancheng, YAO Binxiu, et al. Efficient KNN Chinese text classification algorithm based on the Spark framework[J]. Journal of Computer Applications, 2016,36(12):3292-3297.
[7] GRAVES A, SCHMIDHUBER J. Framewise phoneme classification with bidirectional LSTM and other neural network architectures[J]. Neural Networks, 2005,18(5-6):602-610.
[8] ZHU Xiyong, WU Yang, LIU Chong. CTD-BLSTM-based Chinese named entity recognition model for the medical domain[J]. Computer Systems & Applications, 2020,29(8):173-178.
[9] GUO Yunying, DING Yunfeng. Prediction of missing electricity-consumption data based on joint CNN-LSTM prediction and correction[J]. Computer Systems & Applications, 2020,29(8):192-198.
[10] ZHAI Xueming, WEI Wei. Text sentiment analysis combining hybrid neural networks and conditional random fields[J]. CAAI Transactions on Intelligent Systems, 2021,16(2):202-209.
[11] CHUNG J Y, GULCEHRE C, CHO K, et al. Gated feedback recurrent neural networks[C]// Proceedings of the 32nd International Conference on Machine Learning. 2015:2067-2075.
[12] KIM Y. Convolutional neural networks for sentence classification[J]. Computation and Language, 2014,arXiv:1408.5882.
[13] WEI J W, ZOU K. EDA: Easy data augmentation techniques for boosting performance on text classification tasks[J]. Computation and Language, 2019,arXiv:1901.11196.
[14] MNIH V, HEESS N, GRAVES A. Recurrent models of visual attention[C]// Advances in Neural Information Processing Systems. 2014:2204-2212.
[15] BAHDANAU D, CHO K, BENGIO Y. Neural machine translation by jointly learning to align and translate[J]. Computation and Language, 2014,arXiv:1409.0473.
[16] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]// Advances in Neural Information Processing Systems. 2017:5998-6008.
[17] YIN W P, SCHÜTZE H, XIANG B, et al. ABCNN: Attention-based convolutional neural network for modeling sentence pairs[J]. Transactions of the Association for Computational Linguistics, 2016,4:259-272.
[18] PIAO L X. Jieba-Analysis[EB/OL]. (2013-08-06)[2021-01-19]. https://www.oschina.net/p/jieba-analysis.
[19] MIKOLOV T, SUTSKEVER I, CHEN K, et al. Distributed representations of words and phrases and their compositionality[C]// Proceedings of the Advances in Neural Information Processing Systems. 2013:3111-3119.
[20] MIKOLOV T, CHEN K, CORRADO G, et al. Efficient estimation of word representations in vector space[J]. Computation and Language, 2013,arXiv:1301.3781.
[21] SPECHT D F. A general regression neural network[J]. IEEE Transactions on Neural Networks, 1991,2(6):568-576.
[22] DEVLIN J, CHANG M W, LEE K, et al. BERT: Pre-training of deep bidirectional transformers for language understanding[J]. Computation and Language, 2018,arXiv:1810.04805.
[23] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[J]. Communications of the ACM, 2017,60(6):84-90.