Computer and Modernization ›› 2025, Vol. 0 ›› Issue (09): 109-118. doi: 10.3969/j.issn.1006-2475.2025.09.016

• Database and Data Mining •

FedLDP: Efficient Federated Learning with Localized Differential Privacy

  CHENG Mengyuan, LI Yanhui, LYU Tianci, ZHAO Yuxin, HUANG Chen

  1. (School of Information Science and Engineering, Chongqing Jiaotong University, Chongqing 400074, China)
  • Online: 2025-09-24  Published: 2025-09-24
  • About the authors: CHENG Mengyuan (b. 1999), male, from Guang'an, Sichuan, M.S. candidate, research interests: differential privacy and federated learning, E-mail: C17628627643@163.com; corresponding author: LI Yanhui (b. 1989), female, from Qiqihar, Heilongjiang, associate professor, Ph.D., research interests: data privacy, differential privacy, big data analytics, E-mail: ylibo@cqjtu.edu.cn; LYU Tianci (b. 1999), male, from Neijiang, Sichuan, M.S. candidate, research interests: differential privacy, data privacy, E-mail: 2038512925@qq.com; ZHAO Yuxin (b. 2001), female, from Qiqihar, Heilongjiang, M.S. candidate, research interest: location privacy, E-mail: 3025972357@qq.com; HUANG Chen (b. 2000), male, from Nanzhang, Hubei, M.S. candidate, research interests: differential privacy and federated learning, E-mail: 622230070016@mails.cqjtu.edu.cn.
  • Funding:
        National Natural Science Foundation of China (62002036); Natural Science Foundation of Chongqing (cstc2021jcyj-msxmX0859); Science and Technology Research Program of Chongqing Municipal Education Commission (KJQN202000707)

Abstract:
Federated learning, as a distributed machine learning framework, allows users to collaboratively train a model by sharing model parameters without disclosing their raw data. However, the model parameters may still contain a substantial amount of sensitive information, and sharing them directly poses a considerable threat to user privacy. Local differential privacy (LDP) resists adversaries with arbitrary background knowledge and therefore offers thorough protection of private information, but the high dimensionality and multi-round nature of the parameters exchanged in federated learning make LDP particularly challenging to apply. In this paper, we propose FedLDP, an efficient federated learning algorithm that satisfies LDP. The algorithm uses an exponential mechanism-based dimension selection strategy (EMDS) to pick out the parameter dimensions that are important for global aggregation, and applies the Laplace mechanism to perturb the selected dimensions. In addition, to improve the learning efficiency and overall performance of the model, an incremental privacy budget allocation strategy adjusts how the privacy budget is distributed across iterations, optimizing the training process. We theoretically prove that FedLDP satisfies ε-LDP, and extensive experiments on the MNIST and Fashion-MNIST datasets demonstrate that, under the same level of privacy constraints, FedLDP improves the final model accuracy by 5.07 and 3.01 percentage points respectively, outperforming state-of-the-art schemes.
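The abstract outlines three mechanisms. As a reminder, a randomized mechanism M satisfies ε-LDP if, for any two inputs x and x' and any output y, Pr[M(x) = y] ≤ e^ε · Pr[M(x') = y]. The sketch below is a minimal illustration of one client round following this outline, not the paper's implementation: the use of |parameter value| as the selection utility, the even split of the round budget between selection and perturbation, the unit sensitivity (values assumed clipped so the sensitivity is 1), and the linear budget schedule are all assumptions, and every function name is hypothetical.

    import numpy as np

    def incremental_budgets(eps_total, num_rounds):
        # Incremental allocation (assumed linear schedule): later rounds get a
        # larger share; by sequential composition the shares sum to eps_total.
        weights = np.arange(1, num_rounds + 1, dtype=float)
        return eps_total * weights / weights.sum()

    def select_dimensions(params, eps_select, k):
        # Exponential mechanism over parameter indices, drawn k times without
        # replacement; utility = |parameter value| (an assumption), sensitivity 1.
        # Each draw spends eps_select / k, so probabilities are proportional to
        # exp((eps_select / k) * score / 2).
        scores = np.abs(params)
        probs = np.exp((eps_select / (2.0 * k)) * scores)
        probs /= probs.sum()
        return np.random.choice(len(params), size=k, replace=False, p=probs)

    def perturb_dimensions(values, eps_perturb, sensitivity=1.0):
        # Laplace mechanism: add Lap(sensitivity / eps) noise to each selected
        # value (values assumed clipped to bound the sensitivity).
        scale = sensitivity / eps_perturb
        return values + np.random.laplace(0.0, scale, size=values.shape)

    def client_round(local_params, eps_round, k):
        # One client update: spend half the round budget on selection and half
        # on perturbation (an assumed split), then upload (indices, noisy values).
        eps_sel, eps_noise = eps_round / 2.0, eps_round / 2.0
        idx = select_dimensions(local_params, eps_sel, k)
        noisy = perturb_dimensions(local_params[idx], eps_noise)
        return idx, noisy

    # Example: total budget eps = 8 over 10 rounds, 64 of 1000 dimensions per round.
    budgets = incremental_budgets(8.0, 10)
    params = np.random.randn(1000)
    indices, noisy_values = client_round(params, budgets[0], k=64)

Because only k coordinates are perturbed and uploaded per round, both the injected noise and the communication cost scale with k rather than with the full model size, which is the intuition for pairing dimension selection with LDP. One plausible reading of the incremental schedule is that it spends less of ε on early, inaccurate rounds and more on later rounds, where precise updates matter most for convergence.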

Key words: incremental privacy budget allocation, differential privacy, dimension selection, federated learning

CLC number: