Computer and Modernization ›› 2024, Vol. 0 ›› Issue (07): 36-40. doi: 10.3969/j.issn.1006-2475.2024.07.006

• Artificial Intelligence •

Knowledge Prompt Fine-tuning for Event Extraction

  1. (School of Computer and Artificial Intelligence, Southwest Jiaotong University, Chengdu 610031, China)
  • Online: 2024-07-25  Published: 2024-08-08
  • Funding:
    Sichuan Science and Technology Program (2019YFSY0032)


Abstract: Event extraction is an important research focus in information extraction; it aims to extract structured event information from text by identifying and classifying event triggers and arguments. Traditional methods rely on complex downstream networks and require sufficient training data, so they perform poorly when data is scarce. Existing research has achieved some success in event extraction with prompt learning, but it depends on manually constructed prompts and draws only on the knowledge already present in pre-trained language models, lacking event-specific knowledge. This paper therefore proposes an event extraction method based on knowledge prompt fine-tuning. The method adopts a conditional-generation formulation: building on the knowledge of an existing pre-trained language model, it injects event information to provide argument-relation constraints, and optimizes the prompts with a prompt fine-tuning strategy. Extensive experimental results show that the method outperforms traditional baselines on trigger extraction and achieves the best results in few-shot settings.
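The idea of injecting event knowledge into a prompt for conditional generation can be illustrated with a minimal sketch. This is not the paper's actual implementation: the event types, argument roles, and template format below are illustrative assumptions (an ACE-style schema fragment), and a real system would feed the resulting prompt to a seq2seq pre-trained language model for slot filling.

```python
# Hypothetical sketch: knowledge-injected prompt construction for
# conditional-generation event extraction. The schema and template
# are illustrative assumptions, not the paper's actual design.

# Event-schema knowledge: each event type constrains which argument
# roles may appear (a small ACE-style fragment for illustration).
EVENT_SCHEMA = {
    "Attack": ["Attacker", "Target", "Instrument", "Place"],
    "Transport": ["Agent", "Artifact", "Origin", "Destination"],
}

def build_prompt(sentence: str, event_type: str, trigger: str) -> str:
    """Inject event-type knowledge (the allowed argument roles) into a
    cloze-style prompt that a generation model would be asked to complete."""
    roles = EVENT_SCHEMA[event_type]
    # Each role becomes a slot the model must fill in during generation
    # (or leave as a placeholder token when the role is absent).
    slots = " ".join(f"{role} is <{role.lower()}>." for role in roles)
    return (
        f"Context: {sentence} "
        f"Event type: {event_type}. Trigger: {trigger}. "
        f"{slots}"
    )

prompt = build_prompt(
    "Troops crossed the border into the capital on Tuesday.",
    "Transport",
    "crossed",
)
print(prompt)
```

Constraining the slots to the roles licensed by the event type is one way to realize the argument-relation constraints described above; the placeholder tokens would then be replaced by the model's generated spans.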

Key words: event extraction; prompt learning; information extraction; natural language processing; pre-trained language model
