May 23, 2022, 1:20 a.m. | Haoran Li, Yangqiu Song, Lixin Fan

cs.CR updates on arXiv.org

Social chatbots, also known as chit-chat chatbots, are evolving rapidly with large
pretrained language models. Despite this progress, privacy concerns have
arisen recently: the training data of large language models can be extracted via
model inversion attacks. At the same time, the datasets used to train
chatbots contain many private conversations between two individuals. In this
work, we further investigate the privacy leakage of the hidden states of
chatbots trained by language modeling, which has not yet been well studied. We …
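The abstract does not spell out the attack itself, but the object of study, the hidden states a chatbot exposes during inference, is easy to picture. Below is a minimal sketch, assuming a Hugging Face-style causal language model; the model name (DialoGPT-small) and the example utterance are illustrative choices, not details from the paper. It shows how per-token hidden states become available to whoever can run or observe the model, which is the surface whose leakage the authors examine.

```python
# Illustrative only: reading per-token hidden states from a pretrained
# dialogue model. Model name and utterance are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "microsoft/DialoGPT-small"  # hypothetical stand-in for a chit-chat model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# A private-looking utterance of the kind a training corpus might contain.
utterance = "My phone number is 555-0123, call me tonight."
inputs = tokenizer(utterance, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# outputs.hidden_states is a tuple with one tensor per layer (plus the
# embedding layer), each of shape (batch, sequence_length, hidden_size).
# Anyone with access to these activations could, in principle, attempt to
# reconstruct the private input text from them.
for layer_idx, layer_states in enumerate(outputs.hidden_states):
    print(layer_idx, tuple(layer_states.shape))
```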
