April 3, 2024, 4:11 a.m. | Ying Zhou, Ben He, Le Sun

cs.CR updates on arXiv.org

arXiv:2404.01907v1 Announce Type: cross
Abstract: With the development of large language models (LLMs), detecting whether text is machine-generated has become increasingly important for countering malicious use cases such as the spread of false information, as well as for protecting intellectual property and preventing academic plagiarism. While well-trained text detectors have demonstrated promising performance on unseen test data, recent research suggests that these detectors are vulnerable to adversarial attacks such as paraphrasing. In this paper, we propose a …

Subjects: cs.CL, cs.CR, cs.LG
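To make the attack surface the abstract describes concrete, below is a minimal, hypothetical Python sketch of a paraphrasing attack against a machine-generated-text detector. Both detect_score and paraphrase are toy stand-ins invented for illustration (they are not from the paper): a real setup would pair a trained classifier with an LLM-based paraphraser, but the shape of the attack is the same, where rewriting the text lowers the detector's score while preserving the meaning.

# Minimal sketch of a paraphrasing attack on a machine-generated-text
# detector. detect_score and paraphrase are hypothetical toy stand-ins,
# not the paper's method: a real detector would be a trained classifier
# and a real paraphraser would be an LLM or dedicated rewriting model.

def detect_score(text: str) -> float:
    """Toy detector: pseudo-probability that `text` is machine-generated.
    Here it keys on a telltale marker phrase; a real detector learns
    subtler distributional features."""
    return 0.9 if "as an ai language model" in text.lower() else 0.2

def paraphrase(text: str) -> str:
    """Toy paraphraser: rewrites the text while preserving its meaning,
    removing the surface cue the detector relies on."""
    return text.replace("As an AI language model, I", "I")

machine_text = "As an AI language model, I summarize the findings below."

before = detect_score(machine_text)
after = detect_score(paraphrase(machine_text))
print(f"detector score before attack: {before:.2f}")  # high: flagged
print(f"detector score after attack:  {after:.2f}")   # low: evades detection

The point of the sketch is that the paraphrased text is semantically unchanged yet scores as human-written, which is exactly the vulnerability the abstract attributes to well-trained detectors under paraphrasing attacks.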
