Aug. 29, 2022, 1:23 a.m. | Haodong Zhao, Wei Du, Fangqi Li, Peixuan Li, Gongshen Liu

cs.CR updates on arXiv.org arxiv.org

Federated learning (FL) has enabled global model training on decentralized
data in a privacy-preserving way by aggregating model updates. However, for
many natural language processing (NLP) tasks that utilize pre-trained language
models (PLMs) with large numbers of parameters, there are considerable
communication costs associated with FL. Recently, prompt tuning, which tunes
some soft prompts without modifying PLMs, has achieved excellent performance as
a new learning paradigm. Therefore, we want to combine the two methods and
explore the effect of prompt …

Tags: communication, federated learning, lg, privacy
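To make the idea of combining the two methods concrete, here is a minimal sketch of federated prompt tuning as the abstract describes it: each client tunes only a small soft-prompt tensor while the PLM stays frozen, and the server aggregates just those prompt parameters in a FedAvg-style round. All names below (Client, fed_avg, the prompt dimensions, and the placeholder loss) are illustrative assumptions, not the paper's actual implementation.

```python
import torch

PROMPT_LEN, HIDDEN = 20, 768  # soft prompt: 20 virtual tokens of hidden size 768

class Client:
    def __init__(self):
        # Only this tensor is trainable and communicated; the PLM weights stay frozen.
        self.prompt = torch.zeros(PROMPT_LEN, HIDDEN, requires_grad=True)

    def local_update(self, steps: int = 10, lr: float = 0.1) -> torch.Tensor:
        opt = torch.optim.SGD([self.prompt], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            # Placeholder objective; in practice this would be the task loss of the
            # frozen PLM conditioned on the soft prompt.
            loss = self.prompt.pow(2).mean()
            loss.backward()
            opt.step()
        return self.prompt.detach().clone()

def fed_avg(prompts: list) -> torch.Tensor:
    # The server averages only the prompt tensors, so the per-round upload is
    # PROMPT_LEN * HIDDEN floats instead of the full PLM's parameters.
    return torch.stack(prompts).mean(dim=0)

clients = [Client() for _ in range(4)]
for _ in range(3):  # communication rounds
    global_prompt = fed_avg([c.local_update() for c in clients])
    for c in clients:
        c.prompt.data.copy_(global_prompt)
```

The communication saving is the point of this setup: the exchanged payload scales with the prompt size rather than with the pre-trained model's parameter count.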
