Blockchain Paper Quick Read (CCF-A) | USENIX Security 2024: Privacy-Preserving Machine Learning with Malicious Security against a Dishonest Majority (PPT download attached)

Conference: 33rd USENIX Security Symposium

CCF level: CCF A

Categories: Network and Information Security

Year: 2024

Conference time: August 14–16, 2024, Philadelphia, PA, USA

Title: 

MD-ML: Super Fast Privacy-Preserving Machine Learning for Malicious Security with a Dishonest Majority

Authors

Abstract

Privacy-preserving machine learning (PPML) enables the training and inference of models on private data, addressing security concerns in machine learning. PPML based on secure multi-party computation (MPC) has garnered significant attention from both the academic and industrial communities. Nevertheless, only a few PPML works provide malicious security with a dishonest majority. The state of the art by Damgård et al. (SP'19) fails to meet the demand for large models in practice, due to insufficient efficiency. In this work, we propose MD-ML, a framework for Maliciously secure Dishonest majority PPML, with a focus on boosting online efficiency.
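To make the dishonest-majority setting concrete, here is a minimal sketch (my illustration, not code from the paper) of n-party additive secret sharing over the ring Z_{2^64}, the kind of arithmetic that MPC-based PPML typically builds on: a secret is split into n shares that sum to it modulo 2^64, and any n-1 of them are uniformly random, so even a coalition of all but one party learns nothing about the secret.

```python
# Minimal sketch (assumption: plain additive sharing over Z_{2^64};
# this is illustrative background, not MD-ML's actual protocol code).
import secrets

RING = 1 << 64  # arithmetic modulo 2^64

def share(secret: int, n_parties: int) -> list[int]:
    """Split `secret` into n additive shares that sum to it mod 2^64."""
    shares = [secrets.randbelow(RING) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % RING)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Recover the secret by summing all shares mod 2^64."""
    return sum(shares) % RING

if __name__ == "__main__":
    x = 123456789
    sh = share(x, n_parties=3)
    assert reconstruct(sh) == x   # all n shares together reconstruct x
    # any strict subset of the shares is indistinguishable from random noise
```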

MD-ML works for n parties, tolerating corruption of up to n-1 parties. We construct our novel protocols for PPML, including truncation, dot product, matrix multiplication, and comparison. The online communication of our dot product protocol is one single element per party, independent of input length. In addition, the online cost of our multiply-then-truncate protocol is identical to multiplication, which means truncation incurs no additional online cost. These features are achieved for the first time in the literature concerning maliciously secure dishonest majority PPML.
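The claim that the dot product's online communication is a single element per party can be illustrated with a masked-evaluation sketch: all length-dependent work is done locally on public masked inputs and preprocessed correlations, and each party only broadcasts one share of the masked output. The code below is a simplified, semi-honest stand-in with a trusted dealer playing the offline phase; the names and structure are mine, not the paper's, and MD-ML's real protocols additionally authenticate shares so that malicious behavior is detected.

```python
# Simplified, semi-honest sketch of a "masked evaluation" dot product with a
# trusted dealer standing in for the offline phase (my illustration, not the
# paper's protocol).  It shows why online communication can be ONE ring
# element per party regardless of the vector length.
import secrets

RING = 1 << 64  # arithmetic over Z_{2^64}

def rand() -> int:
    return secrets.randbelow(RING)

def share(v: int, n: int) -> list[int]:
    """Additive sharing of v over Z_{2^64}."""
    sh = [rand() for _ in range(n - 1)]
    sh.append((v - sum(sh)) % RING)
    return sh

def dot_product_demo(x: list[int], y: list[int], n_parties: int = 3) -> int:
    k = len(x)

    # offline (dealer): masks for every input, their cross terms, and an
    # output mask, all additively shared among the parties
    lam_x = [rand() for _ in range(k)]
    lam_y = [rand() for _ in range(k)]
    lam_z = rand()
    sh_lam_x = [share(l, n_parties) for l in lam_x]
    sh_lam_y = [share(l, n_parties) for l in lam_y]
    sh_cross = [share(lam_x[j] * lam_y[j] % RING, n_parties) for j in range(k)]
    sh_lam_z = share(lam_z, n_parties)

    # online: the masked inputs Delta = value + mask are public to everyone
    dx = [(x[j] + lam_x[j]) % RING for j in range(k)]
    dy = [(y[j] + lam_y[j]) % RING for j in range(k)]

    # each party folds the whole vector locally into ONE share of Delta_z and
    # broadcasts it: one ring element of communication per party, whatever k is
    msgs = []
    for i in range(n_parties):
        acc = sh_lam_z[i]
        for j in range(k):
            acc += sh_cross[j][i] - dx[j] * sh_lam_y[j][i] - dy[j] * sh_lam_x[j][i]
        if i == 0:  # the public Delta_x * Delta_y term is added by one party
            acc += sum(dx[j] * dy[j] for j in range(k))
        msgs.append(acc % RING)

    delta_z = sum(msgs) % RING       # public masked output  z + lam_z
    return (delta_z - lam_z) % RING  # unmask only to check correctness here

if __name__ == "__main__":
    x, y = [3, 1, 4, 1, 5], [2, 7, 1, 8, 2]
    assert dot_product_demo(x, y) == sum(a * b for a, b in zip(x, y)) % RING
```

Truncation and comparison are not shown in this sketch; per the abstract, MD-ML makes multiply-then-truncate cost exactly the same online as a plain multiplication, i.e., the truncation work is kept out of the online phase entirely.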

Benchmarking of MD-ML is conducted for SVM and NN including LeNet, AlexNet, and ResNet-18. For NN inference, compared to the state of the art (Damgård et al., SP'19), we are about 3.4–11.0x (LAN) and 9.7–157.7x (WAN) faster in online execution time.

Follow us to keep receiving the latest blockchain papers

Insight into Blockchain Technology Trends
