Saad Abdullah AlAhmadi | سعد عبدالله الأحمدي

Professor

Professor in Computer Science - Specialty: Artificial Intelligence (AI), Cybersecurity, and the Internet of Things (IoT)

Computer and Information Sciences
Building 31 (CCIS Building) - 2nd Floor - Room 2179
Publications
Journal Article
2025

Balancing Privacy and Utility in Split Learning: An Adversarial Channel Pruning-Based Approach

Index Terms: Training; Data models; Servers; Privacy; Feature extraction; Computational modeling; Adversari…

Machine Learning (ML) has been exploited across diverse fields with significant success. However, deploying ML models on resource-constrained devices, such as edge devices, remains challenging due to their limited computing resources. Moreover, training such models on private data carries serious privacy risks arising from the inadvertent disclosure of sensitive information. Split Learning (SL) has emerged as a promising technique to mitigate these risks by partitioning a neural network into client and server subnets. Although only the extracted features are transmitted to the server, sensitive information can still be unwittingly revealed. Existing approaches to this privacy concern in SL struggle to balance privacy and utility. This research introduces a novel privacy-preserving split learning approach that integrates: 1) adversarial learning and 2) network channel pruning. Specifically, adversarial learning minimizes the risk of sensitive data leakage while maximizing the performance of the target prediction task. Channel pruning, performed jointly with the adversarial training, allows the model to dynamically adjust and reactivate pruned channels. Together, these two techniques make the intermediate representations (features) exchanged between the client and server models less informative and more robust against data reconstruction attacks. Accordingly, the proposed approach enhances data privacy without sacrificing the model's performance on the intended utility task. The contributions of this research were validated and assessed on benchmark datasets. The experiments demonstrated the superior defense ability of the proposed approach against data reconstruction attacks in comparison with relevant state-of-the-art approaches. In particular, our approach decreased the SSIM between the original data and the data reconstructed by the attacker by 57%. In summary, the quantitative and qualitative results prove the efficiency of the proposed approach in balancing privacy and utility for typical split learning frameworks.
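The interplay between the two components described in the abstract can be sketched with a toy numpy example. All shapes, weights, and the fixed pruning mask here are hypothetical illustrations (the paper's actual models are deep networks trained end to end, with the mask learned jointly): the client subnet extracts features, a binary mask prunes channels before transmission, and a combined objective trades the utility loss against the adversary's reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: input D_in, C feature "channels", output D_out.
D_in, C, D_out = 8, 6, 3
W_client = rng.normal(size=(D_in, C))   # client subnet (feature extractor)
W_server = rng.normal(size=(C, D_out))  # server subnet (utility head)
W_attack = rng.normal(size=(C, D_in))   # adversary reconstructing the input

def client_forward(x, mask):
    """Extract features, then apply channel pruning: channels with
    mask == 0 carry no information to the server."""
    z = np.maximum(x @ W_client, 0.0)   # ReLU features
    return z * mask                      # prune channels

def mse(a, b):
    return float(np.mean((a - b) ** 2))

x = rng.normal(size=(4, D_in))
mask = np.array([1, 1, 0, 1, 0, 1], dtype=float)  # channels 2 and 4 pruned

z = client_forward(x, mask)          # transmitted intermediate features
y_pred = z @ W_server                # server's prediction (utility task)
x_rec = z @ W_attack                 # adversary's data reconstruction

# Adversarial privacy-utility objective: minimize the utility loss while
# pushing the adversary's reconstruction error up; lam sets the trade-off.
lam = 0.5
y_true = rng.normal(size=(4, D_out))
objective = mse(y_pred, y_true) - lam * mse(x_rec, x)
```

In the paper's joint training, both this objective and the pruning mask are updated during optimization, so pruned channels can be dynamically reactivated; here the mask is fixed only to keep the sketch short.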

Publication Type
Research Article
Publisher
IEEE Access
More Publications

Internet of Things (IoT) networks’ wide range and heterogeneity make them prone to cyberattacks. Most IoT devices have limited resource capabilities (e.g., memory capacity, processing power, and…

2025
Published in:
Sensors

Machine Learning (ML) has been exploited across diverse fields with significant success. However, the deployment of ML models on resource-constrained devices, such as edge devices, has remained…

2025
Published in:
IEEE Access

One of the most promising applications for electroencephalogram (EEG)-based brain–computer interfaces (BCIs) is motor rehabilitation through motor imagery (MI) tasks. However, current MI training…

2024
Published in:
Sensors