I can't see it but I can Fine-tune it: On Encrypted Fine-tuning of Transformers using Fully Homomorphic Encryption
Feb. 15, 2024, 5:10 a.m. | Prajwal Panzade, Daniel Takabi, Zhipeng Cai
cs.CR updates on arXiv.org (arxiv.org)
Abstract: In today's machine learning landscape, fine-tuning pretrained transformer models has become an essential technique, particularly in scenarios where task-aligned training data is scarce. Data sharing, however, is often blocked by stringent privacy regulations or by users' reluctance to disclose personal information. Earlier work on privacy-preserving machine learning (PPML) based on secure multiparty computation (SMC) and fully homomorphic encryption (FHE) focused more on privacy-preserving inference than on privacy-preserving training. In response, we …
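To make the FHE idea in the abstract concrete, here is a minimal sketch of computing on encrypted data with the CKKS scheme: a client encrypts feature vectors (e.g., produced by a frozen pretrained transformer) and a server evaluates a small linear classification head directly on the ciphertexts without ever seeing the plaintext. This is not the paper's protocol; it assumes the TenSEAL library, and the feature values, weight matrix, and head dimensions are all illustrative.

# Minimal CKKS sketch (assumes TenSEAL; values and shapes are illustrative).
import tenseal as ts

# Client side: create a CKKS context; the secret key stays with the client.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()  # rotations are needed for matmul

# Stand-in features from a frozen pretrained encoder, then encrypted.
features = [0.12, -0.53, 0.88, 0.07]
enc_features = ts.ckks_vector(context, features)

# Server side: plaintext weights of a small trainable head (4 inputs, 2
# classes) applied homomorphically to the encrypted features.
weights = [[0.5, -0.1], [0.3, 0.2], [-0.4, 0.6], [0.1, 0.1]]
bias = [0.05, -0.02]
enc_logits = enc_features.matmul(weights) + bias

# Client side: only the key holder can decrypt the resulting logits.
print(enc_logits.decrypt())

Extending this from encrypted inference to encrypted fine-tuning, i.e., updating weights from gradients computed on ciphertexts, is the harder problem the paper targets; the abstract is truncated before describing the authors' actual approach.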
Tags: arxiv, cs.AI, cs.CR, cs.LG, data sharing, encrypted fine-tuning, fully homomorphic encryption, machine learning, training data, transformers
Jobs in InfoSec / Cybersecurity
CyberSOC Technical Lead
@ Integrity360 | Sandyford, Dublin, Ireland
Cyber Security Strategy Consultant
@ Capco | New York City
Cyber Security Senior Consultant
@ Capco | Chicago, IL
Sr. Product Manager
@ MixMode | Remote, US
Security Compliance Strategist
@ Grab | Petaling Jaya, Malaysia
Cloud Security Architect, Lead
@ Booz Allen Hamilton | USA, VA, McLean (1500 Tysons McLean Dr)