Computer and Modernization ›› 2025, Vol. 0 ›› Issue (08): 16-23. doi: 10.3969/j.issn.1006-2475.2025.08.003

• Artificial Intelligence •

  • About the authors: JING Qingwu (1969—), male, born in Suihua, Heilongjiang, senior experimentalist, M.S., research interests: network and information security, opinion mining, E-mail: jingqw@cc.neu.edu.cn; CHEN Hongjun (1971—), male, born in Tieling, Liaoning, senior experimentalist, B.S., research interests: computer networks and applications, E-mail: chenhj@cc.neu.edu.cn; GAO Zhai (1997—), female, born in Changchun, Jilin, master's student, research interest: human-machine dialog generation, E-mail: 1793556519@qq.com; ZHOU Meimei (1998—), female, born in Jining, Shandong, master's student, research interest: human-machine dialog generation, E-mail: 1271261044@qq.com.
  • Funding: National Natural Science Foundation of China (61672144)

Goal Driven Recommendation-oriented Dialog Generation Method


  1. (School of Computer Science & Engineering, Northeastern University, Shenyang 110167, China)
  • Online:2025-08-27 Published:2025-08-27



Abstract: The task of recommendation-oriented dialog generation aims to achieve accurate recommendations by eliciting user preferences through human-computer dialog. To address two shortcomings of existing work, the narrow range of recommendation dialog types and the low quality of generated replies, this paper proposes a Goal Driven Recommendation-oriented Dialog Generation model (GDRDG) based on the Unified Language Model pre-training (UniLM). The model comprises a text representation module, a multi-head encoding module, a decoding module, and a specialized attention mask mechanism. The text representation module uses UniLM to vectorize the input text, ensuring that the model captures deep semantic features. The multi-head encoding module employs multi-head self-attention to capture global contextual information, improving the coherence and relevance of the generated responses. The decoding module generates the goal of the current dialog turn and a response conditioned on that goal, keeping the reply consistent with the context while steering the conversation toward the intended target. The attention mask mechanism controls the information flow during decoding so that the model attends only to information relevant to the current turn, further improving response quality. Experimental results show that GDRDG outperforms existing methods on metrics including BLEU, Distinct, F1, and Hit@1, validating the model's effectiveness and superiority.
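The abstract does not spell out how GDRDG's attention mask is constructed, but the UniLM model it builds on uses a well-known seq2seq mask: source-segment tokens attend bidirectionally to the source, while target-segment tokens attend to the full source plus only the preceding target tokens. A minimal NumPy sketch (the function name `seq2seq_attention_mask` is illustrative, not from the paper):

```python
import numpy as np

def seq2seq_attention_mask(src_len: int, tgt_len: int) -> np.ndarray:
    """UniLM-style seq2seq mask over a [source; target] sequence.

    1 = may attend, 0 = blocked. Source tokens see the whole source
    but none of the target; target tokens see the whole source and
    attend causally (left-to-right) within the target segment.
    """
    n = src_len + tgt_len
    mask = np.zeros((n, n), dtype=np.int64)
    # Every token may attend to the full source segment.
    mask[:, :src_len] = 1
    # Target tokens attend causally within the target segment.
    mask[src_len:, src_len:] = np.tril(np.ones((tgt_len, tgt_len), dtype=np.int64))
    return mask
```

In practice such a mask is applied by adding a large negative value to the attention logits at blocked (0) positions before the softmax, so the generated reply cannot peek at future target tokens while still conditioning on the full dialog context.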

Key words: goal driven, recommendation dialog, dialog generation, unified language model pre-training, attention mechanism
