Feb. 22, 2024, 5:11 a.m. | Da Yu, Peter Kairouz, Sewoong Oh, Zheng Xu

cs.CR updates on arXiv.org

arXiv:2402.13659v1 Announce Type: new
Abstract: Service providers of large language model (LLM) applications collect user instructions in the wild and use them to further align LLMs with users' intentions. These instructions, which potentially contain sensitive information, are annotated by human workers in the process. This poses a new privacy risk not addressed by typical private optimization. To this end, we propose using synthetic instructions to replace real instructions in data annotation and model fine-tuning. Formal differential privacy is guaranteed …
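The abstract is truncated before the mechanism is described, but the privacy guarantee it mentions is typically obtained through differentially private training. Below is a minimal sketch, not the paper's implementation, of a single DP-SGD step that could be used to privately fine-tune a small instruction generator whose outputs serve as synthetic instructions. The names `model`, `loss_fn`, `CLIP_NORM`, and `NOISE_MULTIPLIER` are illustrative assumptions, not names from the paper.

```python
# Minimal DP-SGD step sketch (assumed mechanism, not the paper's code):
# per-example gradients are clipped, summed, perturbed with Gaussian noise,
# and then applied to the model parameters.
import torch

CLIP_NORM = 1.0          # per-example gradient clipping bound C (assumed value)
NOISE_MULTIPLIER = 1.1   # sigma; larger sigma -> stronger privacy, noisier updates
LR = 1e-4                # learning rate (assumed value)

def dp_sgd_step(model, loss_fn, batch_inputs, batch_targets):
    """Apply one differentially private SGD update to `model`."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed_grads = [torch.zeros_like(p) for p in params]

    # Compute and clip the gradient of each example separately.
    for x, y in zip(batch_inputs, batch_targets):
        model.zero_grad()
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (CLIP_NORM / (total_norm + 1e-12)).clamp(max=1.0)
        for acc, g in zip(summed_grads, grads):
            acc.add_(g * scale)

    # Add Gaussian noise calibrated to the clipping bound, average, and step.
    batch_size = len(batch_inputs)
    with torch.no_grad():
        for p, acc in zip(params, summed_grads):
            noise = torch.normal(0.0, NOISE_MULTIPLIER * CLIP_NORM, size=p.shape)
            p.add_(-LR * (acc + noise) / batch_size)
```

A generator fine-tuned this way can then sample synthetic instructions for annotation and downstream fine-tuning in place of real user instructions, which is the substitution the abstract proposes.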
