May 26, 2022, 1:20 a.m. | Jie Huang, Hanyin Shao, Kevin Chen-Chuan Chang

cs.CR updates on arXiv.org

Large Pre-Trained Language Models (PLMs) have facilitated and dominated many
NLP tasks in recent years. Despite their great success, however, PLMs also
raise privacy concerns. For example, recent studies show that PLMs memorize a
large amount of training data, including sensitive information, which may be
leaked unintentionally and exploited by malicious attackers.


In this paper, we propose to measure whether PLMs are prone to leaking
personal information. Specifically, we attempt to query PLMs …
