Memory-enabled Free-riding Attack in Federated Learning
First published: 2025-03-03
Abstract: Federated learning, a machine learning framework that addresses data silos and privacy concerns, is threatened by free-riding attacks. Such attacks obtain the improvements of the global model by forging local model parameter updates without contributing valid data. In 2024, Hong et al. proposed a detection method based on gradient retracing: the server distributes repeated global model parameters and compares the cosine similarity of each client's two resulting model updates, identifying clients with low similarity as attackers. This paper proposes a memory-enabled free-riding attack that records the global model parameters it receives, which substantially raises the cosine similarity of the updates submitted for repeated model parameters and thereby effectively evades gradient-retracing-based detection. Experimental results show that, under both independent and identically distributed (i.i.d.) and non-i.i.d. data scenarios, the attack successfully disguises itself, making attackers difficult to distinguish from honest users.
Keywords: federated learning; free-riding attack; memory functionality; gradient retracing; cosine similarity
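To make the gradient-retracing check concrete, the following is a minimal sketch of the detection idea described in the abstract, not Hong et al.'s actual implementation: the server re-sends identical global parameters in two rounds and flags clients whose two updates have low cosine similarity. The flattened-vector representation, the function names, and the threshold value are illustrative assumptions.

# Sketch of gradient-retracing detection (illustrative, not the paper's code).
import numpy as np

def cosine_similarity(u, v):
    # Cosine similarity between two flattened update vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def flag_free_riders(updates_first, updates_repeat, threshold=0.5):
    # updates_first / updates_repeat: dict mapping client id -> flattened
    # update (np.ndarray) submitted for the *same* repeated global parameters.
    # Clients whose two updates diverge (low similarity) are flagged.
    suspects = set()
    for cid in updates_first:
        if cosine_similarity(updates_first[cid], updates_repeat[cid]) < threshold:
            suspects.add(cid)
    return suspects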
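The memory-enabled attack itself can be sketched as follows, under stated assumptions: the attacker caches the fabricated update it sent for each global model it has seen, keyed by a hash of the received parameters, and replays that cached update whenever the same parameters arrive again, so the two submissions for a repeated model are identical. The class name and the noise-based fabrication are stand-in assumptions, not the paper's exact construction.

# Sketch of a memory-enabled free-riding client (illustrative).
import hashlib
import numpy as np

class MemoryFreeRider:
    def __init__(self, noise_scale=1e-3, seed=0):
        self.memory = {}                        # hash of global params -> cached fake update
        self.rng = np.random.default_rng(seed)
        self.noise_scale = noise_scale

    def _key(self, global_params):
        # Hash the received global parameters to recognize repeats.
        return hashlib.sha256(global_params.tobytes()).hexdigest()

    def local_update(self, global_params):
        key = self._key(global_params)
        if key in self.memory:
            # Repeated global model detected: replay the stored update,
            # so the two submissions have cosine similarity close to 1.
            return self.memory[key]
        # Otherwise fabricate an update without any local training
        # (small random noise here is purely illustrative).
        fake_update = self.noise_scale * self.rng.standard_normal(global_params.shape)
        self.memory[key] = fake_update
        return fake_update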