Federated learning (FL) is a distributed machine learning approach that reduces data transfer by aggregating gradients from multiple users. However, this process raises concerns about user privacy, motivating privacy-preserving FL. Unfortunately, this development poses new Byzantine-robustness challenges, as poisoning attacks become difficult to detect over protected gradients. Existing Byzantine-robust algorithms operate primarily in plaintext, and, crucially, current Byzantine-robust privacy-preserving FL methods fail to defend against adaptive attacks. In response, we propose a lightweight, Byzantine-robust, and privacy-preserving federated learning framework (LRFL) that employs shuffle functions and encryption masks to ensure privacy. In addition, we jointly evaluate the similarity of both the direction and the magnitude of each gradient vector to ensure Byzantine robustness. To the best of our knowledge, LRFL is the first Byzantine-robust privacy-preserving FL framework capable of identifying malicious users based on gradient angles and magnitudes. Moreover, the theoretical complexity of LRFL is $\mathcal{O}(dN + dN\log N)$, where $N$ is the number of users and $d$ is the gradient dimension, comparable to that of plaintext Byzantine-robust FL. Experimental results demonstrate that LRFL achieves accuracy similar to state-of-the-art methods under multiple attack scenarios.
Lightweight Byzantine-Robust and Privacy-Preserving Federated Learning
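The abstract's robustness criterion, scoring each gradient by both its angle and its magnitude, can be illustrated with a minimal sketch. The code below is an assumption for illustration only, not LRFL's actual protocol (which additionally applies shuffling and encryption masks before any server-side computation); the function `robust_aggregate` and the thresholds `angle_tol` and `mag_tol` are hypothetical names introduced here.

```python
# Minimal sketch (NOT the paper's LRFL protocol): filter Byzantine gradients
# by combining a direction check (cosine similarity to a robust reference)
# with a magnitude check (deviation from the median gradient norm).
import numpy as np

def robust_aggregate(grads: np.ndarray, angle_tol: float = 0.0,
                     mag_tol: float = 3.0) -> np.ndarray:
    """Aggregate an (N, d) array of user gradients, dropping outliers.

    A gradient is kept only if (1) its cosine similarity to the
    coordinate-wise median exceeds `angle_tol`, and (2) its norm lies
    within `mag_tol` median absolute deviations of the median norm.
    """
    ref = np.median(grads, axis=0)                    # robust reference direction
    ref_unit = ref / (np.linalg.norm(ref) + 1e-12)
    norms = np.linalg.norm(grads, axis=1)
    cosines = (grads @ ref_unit) / (norms + 1e-12)    # per-user direction score

    med_norm = np.median(norms)
    mad = np.median(np.abs(norms - med_norm)) + 1e-12  # magnitude spread
    keep = (cosines > angle_tol) & (np.abs(norms - med_norm) <= mag_tol * mad)
    return grads[keep].mean(axis=0)                    # average the survivors

# Usage: 8 honest users plus 2 attackers sending flipped/scaled gradients.
rng = np.random.default_rng(0)
honest = rng.normal(0.5, 0.1, size=(8, 16))
attack = np.vstack([-10 * honest[0], 100 * honest[1]])
agg = robust_aggregate(np.vstack([honest, attack]))
```

In this toy setup the sign-flipped gradient fails the angle check and the scaled gradient fails the magnitude check, so neither enters the aggregate; using both criteria together is what lets the approach catch attacks that preserve one statistic while corrupting the other.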