Oct. 19, 2023, 1:11 a.m. | Zhengmian Hu, Lichang Chen, Xidong Wu, Yihan Wu, Hongyang Zhang, Heng Huang

cs.CR updates on arXiv.org

Recent advances in large language models (LLMs) have sparked growing apprehension about their potential misuse. One approach to mitigating this risk is to incorporate watermarking techniques into LLMs, enabling the tracking and attribution of model outputs. This study examines a crucial aspect of watermarking: how significantly watermarks impact the quality of model-generated outputs. Previous studies have suggested a trade-off between watermark strength and output quality. However, our research demonstrates that it is possible to integrate watermarks without …
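To make the trade-off the abstract refers to concrete, below is a minimal sketch of the kind of prior watermarking scheme it alludes to: a soft "green-list" watermark that adds a bias delta to a pseudo-randomly chosen fraction gamma of the vocabulary, seeded by the previous token (in the style of Kirchenbauer et al.). This is not the method proposed in the paper; the function name and parameters are illustrative assumptions.

```python
import hashlib

import torch


def greenlist_bias(logits: torch.Tensor, prev_token: int,
                   gamma: float = 0.5, delta: float = 2.0) -> torch.Tensor:
    """Illustrative soft watermark (not the paper's method).

    Seeds a PRNG from the previous token, marks a fraction `gamma`
    of the vocabulary as the "green list", and adds `delta` to the
    logits of green tokens before sampling. A detector that knows
    the seeding scheme can later count green tokens in a text.
    """
    vocab_size = logits.shape[-1]
    # Derive a deterministic seed from the previous token.
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16) % (2**31)
    gen = torch.Generator().manual_seed(seed)
    # Pseudo-randomly partition the vocabulary; first gamma*V ids are "green".
    perm = torch.randperm(vocab_size, generator=gen)
    green = perm[: int(gamma * vocab_size)]
    # Bias green tokens so sampling favors them.
    biased = logits.clone()
    biased[green] += delta
    return biased
```

In a scheme like this, raising delta makes the watermark easier to detect but visibly skews the output distribution, which is exactly the strength-versus-quality trade-off the paper argues is not inherent to watermarking.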
