Computer and Modernization ›› 2021, Vol. 0 ›› Issue (08): 94-99.


News Label Classification Based on BERT and Deep Equal Length Convolution 

  1. (School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 511400, China)  
  • Online: 2021-08-19   Published: 2021-08-19

Abstract: For the Chinese news text label classification task on the THUCNews dataset, a news label classification model (DPCNN-BERT) is proposed that combines multi-layer equal-length convolution and residual connections on top of the BERT pre-trained language model. Firstly, each word in the news text is converted into a vector by looking it up in the Chinese vocabulary table and fed into the BERT model to obtain the full-text context representation. Then, local contextual relationships in the text are captured by an initial semantic extraction layer and deep equal-length convolutions. Finally, the predicted label of the entire news text is produced by a single-layer fully connected neural network. The proposed model is compared with the convolutional neural network classification model (TextCNN), the recurrent neural network classification model (TextRNN), and other models. Experimental results show that the model achieves a prediction accuracy of 94.68% and an F1 score of 94.67%, outperforming the comparison models and verifying its effectiveness.
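The following is a minimal sketch, in PyTorch, of the kind of architecture the abstract describes: BERT token features, an initial semantic extraction layer, repeated equal-length convolutions with residual connections, and a single fully connected output layer. The class name DPCNNBertClassifier, the channel width, the number of downsampling blocks, and the shared convolution weights are all illustrative assumptions, not the paper's exact configuration.

# Sketch of a DPCNN-style head on top of BERT features; hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DPCNNBertClassifier(nn.Module):
    def __init__(self, bert, num_classes=10, hidden=768, channels=250, num_blocks=3):
        super().__init__()
        self.bert = bert  # e.g. a transformers.BertModel; outputs (batch, seq_len, hidden)
        # Initial semantic extraction layer: project BERT features to DPCNN channels.
        self.region = nn.Conv1d(hidden, channels, kernel_size=3, padding=1)
        # Equal-length convolution (padding keeps the sequence length unchanged).
        # A single shared conv is reused here for brevity; the paper may use separate layers.
        self.conv = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.num_blocks = num_blocks
        self.fc = nn.Linear(channels, num_classes)  # single-layer fully connected output

    def _block(self, x):
        # Downsample by 2, then two equal-length convolutions with a residual connection.
        x = F.max_pool1d(x, kernel_size=3, stride=2, padding=1)
        shortcut = x
        x = self.conv(F.relu(x))
        x = self.conv(F.relu(x))
        return x + shortcut

    def forward(self, input_ids, attention_mask):
        h = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        x = h.transpose(1, 2)             # (batch, hidden, seq_len) for Conv1d
        x = self.region(x)                # initial semantic extraction
        shortcut = x
        x = self.conv(F.relu(x))
        x = self.conv(F.relu(x))
        x = x + shortcut                  # residual connection
        for _ in range(self.num_blocks):  # deep equal-length convolution blocks
            x = self._block(x)
        x = F.max_pool1d(x, kernel_size=x.size(2)).squeeze(2)  # global pooling
        return self.fc(x)                 # class logits for the news label

In use, the BERT encoder would be loaded separately (for instance with transformers.BertModel.from_pretrained("bert-base-chinese")) and the whole model trained with cross-entropy loss on the THUCNews labels; these training details are assumptions rather than the paper's reported setup.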

Key words: label classification, equal-length convolution, residual connection, BERT