Is Watermarking LLM-Generated Code Robust?
March 28, 2024, 4:10 a.m. | Tarun Suresh, Shubham Ugare, Gagandeep Singh, Sasa Misailovic
cs.CR updates on arXiv.org
Abstract: We present the first study of the robustness of existing watermarking techniques on Python code generated by large language models. Although prior work has shown that watermarking can be robust for natural language, we show that these watermarks are easy to remove from code via semantics-preserving transformations.
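The abstract does not enumerate which transformations the authors apply, but identifier renaming is a classic example of a semantics-preserving transformation: it leaves program behavior unchanged while rewriting the token sequence that token-level watermark detectors score. The sketch below is illustrative, not the paper's attack; the helper rename_locals and the v0, v1, ... naming scheme are assumptions made for this example.

import ast

def rename_locals(code: str) -> str:
    """Illustrative semantics-preserving transformation (not the
    paper's method): rename every identifier the snippet itself
    binds to a fresh generic name."""
    tree = ast.parse(code)

    # Collect names bound by assignment (Store context) so that free
    # variables and builtins such as range/print are left untouched.
    bound = {
        node.id
        for node in ast.walk(tree)
        if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store)
    }
    mapping = {name: f"v{i}" for i, name in enumerate(sorted(bound))}

    class Renamer(ast.NodeTransformer):
        def visit_Name(self, node):
            if node.id in mapping:
                node.id = mapping[node.id]
            return node

    # ast.unparse requires Python >= 3.9
    return ast.unparse(Renamer().visit(tree))

sample = "total = 0\nfor item in range(10):\n    total += item\nprint(total)\n"
print(rename_locals(sample))
# v1 = 0
# for v0 in range(10):
#     v1 += v0
# print(v1)

For token-level schemes in the style of green-list watermarking, which bias generation toward a pseudorandom subset of the vocabulary at each step, such a rewrite replaces many watermarked tokens with unbiased ones, pushing the detector's score toward that of unwatermarked text while leaving the program's behavior intact.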