April 9, 2024, 4:11 a.m. | Shanshan Wu, Zheng Xu, Yanxiang Zhang, Yuanbo Zhang, Daniel Ramage

cs.CR updates on arXiv.org

arXiv:2404.04360v1 Announce Type: cross
Abstract: Pre-training on public data is an effective method to improve the performance of federated learning (FL) with differential privacy (DP). This paper investigates how large language models (LLMs) trained on public data can improve the quality of pre-training data for the on-device language models trained with DP and FL. We carefully design LLM prompts to filter and transform existing public data, and generate new data to resemble the real user data distribution. The model pre-trained …
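The abstract describes a prompt-based pipeline: use an LLM to filter public text for relevance, rewrite the kept examples to better match the on-device domain, and generate fresh synthetic examples. A minimal sketch of that idea is below; it is not the paper's code, and the prompt wording, the `call_llm` helper, and the "mobile keyboard message" target domain are illustrative assumptions.

```python
from typing import List


def call_llm(prompt: str) -> str:
    """Placeholder for an LLM completion call; swap in a real client."""
    raise NotImplementedError("plug in your LLM API client here")


# Hypothetical prompts for the three steps named in the abstract.
FILTER_PROMPT = (
    "Does the following text resemble a short, informal message a person "
    "might type on a mobile keyboard? Answer YES or NO.\n\nText: {text}"
)
TRANSFORM_PROMPT = (
    "Rewrite the following text as a short, informal mobile chat message, "
    "keeping its original meaning.\n\nText: {text}"
)
GENERATE_PROMPT = (
    "Write one short, informal message a person might type on a mobile "
    "keyboard. Output only the message."
)


def build_pretraining_corpus(public_texts: List[str], n_synthetic: int) -> List[str]:
    """Assemble a pre-training corpus from public data plus synthetic examples."""
    corpus: List[str] = []
    for text in public_texts:
        # Step 1: filter existing public data for domain relevance.
        if call_llm(FILTER_PROMPT.format(text=text)).strip().upper().startswith("YES"):
            # Step 2: transform the kept example toward the target style.
            corpus.append(call_llm(TRANSFORM_PROMPT.format(text=text)).strip())
    # Step 3: generate new data intended to resemble the user data distribution.
    corpus.extend(call_llm(GENERATE_PROMPT).strip() for _ in range(n_synthetic))
    return corpus
```

The resulting corpus would then serve as pre-training data for the on-device model, which is subsequently trained with DP and FL as the abstract outlines.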

