Classification Algorithm for Goods Names Based on Enhanced Semantic Model
(1. College of Economics and Management, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China;
2. School of Management, Hefei University of Technology, Hefei 230009, China)
LI Xiao-feng, MA Jing, ZHOU Yan. Classification Algorithm for Goods Names Based on Enhanced Semantic Model[J]. Computer and Modernization, 2023, 0(03): 71-78.
[1] QU D J, GAO T, LI J. Analysis of the causes of errors in import and export commodity classification and countermeasures[J]. Journal of Shanghai Customs College, 2013,34(3):92-96. (in Chinese)
[2] WANG H, DENG S H, ZHU L P, et al. Research on the intelligence value and utilization of government data in the big data environment: A case study of risk avoidance in customs commodity classification[J]. Scientific Information Research, 2020,2(4):74-89. (in Chinese)
[3] XU L N. Risks of customs commodity misclassification and their prevention[J]. Practice in Foreign Economic Relations and Trade, 2015(11):70-73. (in Chinese)
[4] MA J, LI X F, LI C, et al. Machine learning based cross-border E-commerce commodity customs product name recognition algorithm[C]// Pacific Rim International Conference on Artificial Intelligence. 2019:247-256.
[5] LI X F, MA J, LI C, et al. Product name recognition algorithm for e-commerce commodities based on the XGBoost model[J]. Data Analysis and Knowledge Discovery, 2019,3(7):34-41. (in Chinese)
[6] HE B, MA J, LI C. Research on commodity text classification based on fused features[J]. Information Studies: Theory & Application, 2020,43(11):162-168. (in Chinese)
[7] PETERS M, NEUMANN M, IYYER M, et al. Deep contextualized word representations[C]// Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). 2018:2227-2237.
[8] RADFORD A, NARASIMHAN K, SALIMANS T, et al. Improving language understanding by generative pre-training[EB/OL]. [2021-12-31]. https://www.docin.com/p-2176538517.html.
[9] DEVLIN J, CHANG M W, LEE K, et al. BERT: Pre-training of deep bidirectional transformers for language understanding[J]. arXiv preprint arXiv:1810.04805, 2018.
[10] ZHANG Y N, HUANG X H, MA Y, et al. Classification method for transcribed speech text based on deep learning[J]. Journal of Zhejiang University (Engineering Science), 2020,54(7):1264-1271. (in Chinese)
[11] IRISSAPPANE A A, YU H F, SHEN Y K, et al. Leveraging GPT-2 for classifying spam reviews with limited labeled data via adversarial training[J]. arXiv preprint arXiv:2012.13400, 2020.
[12] DUAN D D, TANG J S, WEN Y, et al. Chinese short text classification algorithm based on the BERT model[J]. Computer Engineering, 2021,47(1):79-86. (in Chinese)
[13] LAN Z Z, CHEN M D, GOODMAN S, et al. ALBERT: A lite BERT for self-supervised learning of language representations[J]. arXiv preprint arXiv:1909.11942, 2019.
[14] SUN Z Q, YU H K, SONG X D, et al. MobileBERT: A compact task-agnostic bert for resource-limited devices[J]. arXiv preprint arXiv:2004.02984, 2020.
[15] LIAO S L, JI J M, YU C, et al. Intent classification method based on the BERT model and knowledge distillation[J]. Computer Engineering, 2021,47(5):73-79. (in Chinese)
[16] ZHANG S W, ZHANG X Z. Does QA-based intermediate training help fine-tuning language models for text classification[J]. arXiv preprint arXiv:2112.15051, 2021.
[17] SONG Y H, LYU L, LIU D. Research on the identification and classification of emergency news based on a combined deep learning model[J]. Journal of the China Society for Scientific and Technical Information, 2021,40(2):145-151. (in Chinese)
[18] YANG D, WANG Y Z. Text classification based on an attention-based C-GRU neural network[J]. Computer and Modernization, 2018(2):96-100. (in Chinese)
[19] NIGELAMU Maisimujiang, AIZIERGULI Yusufu. Sentiment analysis of MOOC user comments based on BERT and bidirectional GRU models[J]. Computer and Modernization, 2021(4):20-26. (in Chinese)
[20] CHEN J, MA J, LI X F. Short text classification method fusing text features from pre-trained models[J]. Data Analysis and Knowledge Discovery, 2021,5(9):21-30. (in Chinese)
[21] JIANG Y X, DING S C, WU P. Research on multimodal information feature classification based on BiLSTM-VGG16[J]. Information Studies: Theory & Application, 2021,44(11):180-186. (in Chinese)
[22] FAN T, WANG H, LI Y Y, et al. Research on the classification of intangible cultural heritage images based on multimodal fusion[J/OL]. Data Analysis and Knowledge Discovery, 2022,6(2):329-337[2021-12-31]. http://kns.cnki.net/kcms/detail/10.1478.g2.20220126.1127.004.html. (in Chinese)
[23] SALTON G, BUCKLEY C. Term-weighting approaches in automatic text retrieval[J]. Information Processing & Management, 1988,24(5):513-523.
[24] DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16×16 words: Transformers for image recognition at scale[J]. arXiv preprint arXiv:2010.11929, 2020.
[25] XIE H, MAO J, LI G. Research on sentiment classification of image-text information based on multi-level semantic fusion[J]. Data Analysis and Knowledge Discovery, 2021,5(6):103-114. (in Chinese)
[26] ZHOU Z H, FENG J. Deep forest[J]. arXiv preprint arXiv:1702.08835v3, 2018.
[27] HUANG Z L, WANG X G, HUANG L C, et al. CCNet: Criss-cross attention for semantic segmentation[C]// Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision. 2019:603-612.
[28] YUAN L, CHEN Y P, WANG T, et al. Tokens-to-token ViT: Training vision transformers from scratch on imagenet[C]// Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision. 2021:558-567.