March 24, 2023, 1:10 a.m. | Kalpesh Krishna, Yixiao Song, Marzena Karpinska, John Wieting, Mohit Iyyer

cs.CR updates on arXiv.org

To detect the deployment of large language models for malicious use cases
(e.g., fake content creation or academic plagiarism), several approaches have
recently been proposed for identifying AI-generated text via watermarks or
statistical irregularities. How robust are these detection algorithms to
paraphrases of AI-generated text? To stress test these detectors, we first
train an 11B parameter paraphrase generation model (DIPPER) that can paraphrase
paragraphs, optionally leveraging surrounding text (e.g., user-written prompts)
as context. DIPPER also uses scalar knobs to control …
