July 25, 2022, 1:20 a.m. | Bargav Jayaraman, Esha Ghosh, Huseyin Inan, Melissa Chase, Sambuddha Roy, Wei Dai

cs.CR updates on arXiv.org

With the wide availability of large pre-trained language model checkpoints,
such as GPT-2 and BERT, the recent trend has been to fine-tune them on a
downstream task to achieve state-of-the-art performance with small
computational overhead. One natural example is the Smart Reply application,
where a pre-trained model is fine-tuned to suggest a number of responses given
a query message. In this work, we set out to investigate potential information
leakage vulnerabilities in a typical Smart Reply pipeline and …
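To make the setup concrete, here is a minimal sketch (not the authors' pipeline) of fine-tuning a pre-trained GPT-2 checkpoint to suggest replies to a query message, using the Hugging Face transformers library. The toy (query, reply) pairs, the choice of separator token, and all hyperparameters are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of a Smart-Reply-style fine-tuning loop on GPT-2.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Toy fine-tuning data (illustrative, not from the paper):
# each example is "query <eos> reply <eos>".
pairs = [
    ("Are we still on for lunch?", "Yes, see you at noon!"),
    ("Can you send the report?", "Sure, sending it now."),
]
texts = [f"{q} {tokenizer.eos_token} {r}{tokenizer.eos_token}" for q, r in pairs]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for text in texts:
    batch = tokenizer(text, return_tensors="pt")
    # Standard causal-LM objective: predict each next token of the sequence.
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Suggest candidate replies for a new query by sampling continuations.
model.eval()
prompt = tokenizer("Are we meeting today? " + tokenizer.eos_token,
                   return_tensors="pt")
outputs = model.generate(**prompt, do_sample=True, num_return_sequences=3,
                         max_new_tokens=20,
                         pad_token_id=tokenizer.eos_token_id)
for out in outputs:
    reply = tokenizer.decode(out[prompt["input_ids"].shape[1]:],
                             skip_special_tokens=True)
    print(reply)
```

In a deployed system the sampled candidates would typically be filtered and ranked before being shown to the user; it is this query-to-suggestions interface that the paper probes for information leakage.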

