Towards Model Extraction Attacks in GAN-Based Image Translation via Domain Shift Mitigation
March 13, 2024, 4:11 a.m. | Di Mi, Yanjun Zhang, Leo Yu Zhang, Shengshan Hu, Qi Zhong, Haizhuan Yuan, Shirui Pan
cs.CR updates on arXiv.org arxiv.org
Abstract: Model extraction attacks (MEAs) enable an attacker to replicate the functionality of a victim deep neural network (DNN) model solely by querying its API service remotely, posing a severe threat to the security and integrity of pay-per-query DNN-based services. Although most current research on MEAs has concentrated on neural classifiers, image-to-image translation (I2IT) tasks are increasingly prevalent in everyday applications. However, techniques developed for MEAs against DNN classifiers …
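To illustrate the core principle behind a model extraction attack — replicating a victim model purely from query access — here is a minimal, hypothetical sketch. It uses a toy linear "victim" and least-squares fitting rather than the paper's GAN-based I2IT setting; the names (`victim_api`, `W_secret`, `W_stolen`) are illustrative assumptions, not from the paper.

```python
import numpy as np

# Hypothetical victim service: a secret linear model the attacker
# can only reach through a query API (weights are never exposed).
rng = np.random.default_rng(0)
W_secret = rng.normal(size=(4, 2))

def victim_api(x):
    """Simulated pay-per-query endpoint: returns outputs only."""
    return x @ W_secret

# Attacker phase 1: issue queries and record input/output pairs.
queries = rng.normal(size=(100, 4))
responses = victim_api(queries)

# Attacker phase 2: fit a substitute model from query data alone.
W_stolen, *_ = np.linalg.lstsq(queries, responses, rcond=None)

# The substitute now closely replicates the victim's functionality
# on unseen inputs, without ever seeing the secret weights.
test = rng.normal(size=(5, 4))
print(np.allclose(test @ W_stolen, victim_api(test), atol=1e-6))
```

A real MEA against an I2IT model would replace the linear fit with training a substitute generator on (input image, translated image) pairs harvested from the API, which is where the domain-shift problem the paper addresses arises.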