Fragile Model Watermark for integrity protection: leveraging boundary volatility and sensitive sample-pairing
April 12, 2024, 4:11 a.m. | ZhenZhe Gao, Zhenjun Tang, Zhaoxia Yin, Baoyuan Wu, Yue Lu
cs.CR updates on arXiv.org arxiv.org
Abstract: Neural networks increasingly influence people's lives. Ensuring that neural networks are deployed faithfully, as designed by their model owners, is crucial, since deployed models may be subject to various malicious or unintentional modifications, such as backdooring and poisoning attacks. Fragile model watermarks aim to prevent unexpected tampering that could lead DNN models to make incorrect decisions. They aim to detect any tampering with the model as sensitively as possible. However, prior watermarking methods suffered from …
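The "sensitive sample" idea behind fragile watermarks can be illustrated with a toy sketch: craft inputs that sit almost exactly on the model's decision boundary, record their labels, and flag tampering whenever a recorded label changes. Everything below (the linear model, the bias-shift "tamper", all names) is an illustrative assumption, not the construction from the paper:

```python
# Toy illustration of fragile-watermark integrity checking with
# near-boundary "sensitive samples": inputs placed a hair's breadth
# from the decision boundary, so even a tiny model modification
# flips their predicted labels and reveals tampering.
# (Model, tampering, and names are illustrative assumptions only.)
import numpy as np

rng = np.random.default_rng(0)
dim = 4
w = rng.normal(size=dim)   # toy linear classifier: sign(w.x + b)
b = 0.0

def predict(weights, bias, x):
    return int(np.dot(weights, x) + bias >= 0.0)

def make_sensitive_samples(weights, bias, n=8, eps=1e-4):
    """Shift each random input so that w.x + b == eps exactly,
    i.e. just barely inside class 1."""
    samples = []
    for _ in range(n):
        x = rng.normal(size=dim)
        shift = (eps - bias - np.dot(weights, x)) / np.dot(weights, weights)
        samples.append(x + shift * weights)
    return samples

def verify(weights, bias, samples, expected):
    """Integrity check: every sensitive sample must keep its recorded label."""
    return all(predict(weights, bias, x) == y
               for x, y in zip(samples, expected))

sensitive = make_sensitive_samples(w, b)
recorded = [predict(w, b, x) for x in sensitive]   # all land in class 1

assert verify(w, b, sensitive, recorded)           # untampered model passes
print("tamper detected:", not verify(w, b - 0.01, sensitive, recorded))
# → tamper detected: True
```

A small bias shift of 0.01 pushes every sensitive sample (margin 1e-4) across the boundary, so the check fails immediately, while typical inputs far from the boundary would keep their labels. Real fragile watermarking schemes construct such samples for deep networks and must also withstand the "boundary volatility" issues the title refers to.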