Abstract:
Federated learning (FL) offers a collaborative machine learning (ML) paradigm where
participants train models on their local data and contribute updates to a central server,
preserving data privacy. However, this distributed nature introduces a vulnerability:
malicious nodes can inject manipulated updates to disrupt the training process and
compromise the model's performance. These attacks may be targeted or untargeted. A particularly deceptive tactic employed by malicious actors is the label-flipping
attack. In this targeted assault, attackers subtly sabotage the training data by reversing
the labels of specific examples. This seemingly minor manipulation can wreak havoc
on the global model's performance, making it difficult to detect yet highly impactful. Existing defenses against label-flipping attacks often suffer from several critical
shortcomings: they tend to be overly reliant on central servers, computationally burdensome, and susceptible to slow poisoning attacks. Moreover, many of these methods struggle to detect and mitigate malicious behavior accurately.
Our proposed defense methodology explores a novel distributed model training approach that utilizes trust-based updates with incentivized learning mechanisms. Nodes
are rewarded for accurate contributions and penalized for inaccurate ones, drawing
inspiration from reinforcement learning principles to reach consensus on reliable
and consistent model updates. The updates are then subjected to a weighted aggregation based on the trust level of each client, ensuring a robust and resilient global
model. We evaluate the effectiveness of our proposed approach through extensive
simulations, demonstrating its ability to identify malicious nodes with high detection accuracy and low computational overhead. Our methodology outperforms several state-of-the-art defenses in detecting malicious nodes. Our
findings pave the way for the development of more robust and trustworthy Decentralized Federated Learning systems, enabling secure and efficient collaborative learning.
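To illustrate the trust-based weighting described above, the following minimal Python sketch shows one possible way a server or peer could update per-client trust scores and aggregate their updates. The deviation metric, reward/penalty constants, and median-based consensus reference used here are illustrative assumptions, not the exact scheme developed in this thesis.

```python
# Illustrative sketch of trust-weighted aggregation (assumed constants and
# deviation rule; not the thesis's exact formulation).
import numpy as np

def update_trust(trust, deviation, reward=0.05, penalty=0.10, threshold=0.5):
    """Reward clients consistent with the consensus, penalize large deviations."""
    if deviation <= threshold:
        return min(1.0, trust + reward)
    return max(0.0, trust - penalty)

def aggregate(updates, trust_scores):
    """Weighted average of client updates, weights proportional to trust."""
    weights = np.array(trust_scores, dtype=float)
    if weights.sum() == 0:
        weights = np.ones_like(weights)  # fall back to uniform weighting
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

# Example round: three clients, one of which (client 2) deviates strongly.
updates = [np.array([0.9, 1.1]), np.array([1.0, 1.0]), np.array([-3.0, 4.0])]
trust = [0.8, 0.8, 0.8]
consensus = np.median(np.stack(updates), axis=0)   # robust reference point
deviations = [np.linalg.norm(u - consensus) for u in updates]
trust = [update_trust(t, d) for t, d in zip(trust, deviations)]
global_update = aggregate(updates, trust)
```

In this sketch, the deviating client's trust score drops after the round, so its contribution carries less weight in the aggregated global update.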
Description:
Supervised by
Dr. Md. Azam Hossain,
Associate Professor,
Department of Computer Science and Engineering (CSE)
Islamic University of Technology (IUT)
Board Bazar, Gazipur, Bangladesh
This thesis is submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science and Engineering, 2024