April 25, 2024, 7:11 p.m. | Peican Zhu, Zechen Pan, Yang Liu, Jiwei Tian, Keke Tang, Zhen Wang

cs.CR updates on arXiv.org

arXiv:2404.15744v1 Announce Type: cross
Abstract: Graph Neural Network (GNN)-based fake news detectors apply various methods to construct graphs, aiming to learn distinctive news embeddings for classification. Since the construction details are unknown to attackers in a black-box scenario, it is unrealistic to conduct classical adversarial attacks that require a specific adjacency matrix. In this paper, we propose the first general black-box adversarial attack framework, i.e., General Attack via Fake Social Interaction (GAFSI), against detectors based on different graph structures. …
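For intuition, the sketch below shows the kind of setup the abstract describes: a GNN detector that pools node embeddings of a news propagation graph into a single news embedding for classification, plus a naive perturbation that injects attacker-controlled "fake social interaction" nodes and edges. All names (NewsGCN, inject_fake_interactions) and the perturbation logic are illustrative assumptions for a minimal PyTorch Geometric example, not the GAFSI method or the detectors from the paper.

```python
# Minimal sketch (assumed, not from the paper): GCN-based fake news classifier
# over a propagation graph, and a naive fake-interaction injection.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv, global_mean_pool


class NewsGCN(torch.nn.Module):
    def __init__(self, in_dim, hid_dim=64, num_classes=2):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, hid_dim)
        self.lin = torch.nn.Linear(hid_dim, num_classes)

    def forward(self, x, edge_index, batch):
        h = F.relu(self.conv1(x, edge_index))
        h = F.relu(self.conv2(h, edge_index))
        h = global_mean_pool(h, batch)  # one embedding per news propagation graph
        return self.lin(h)              # real/fake logits


def inject_fake_interactions(data, n_fake=5):
    """Append attacker-controlled user nodes and connect them to the root news
    node (node 0), emulating fabricated shares/comments (hypothetical attack step)."""
    num_nodes, feat_dim = data.x.size()
    fake_x = torch.randn(n_fake, feat_dim)                 # attacker-chosen profiles
    fake_ids = torch.arange(num_nodes, num_nodes + n_fake)
    root = torch.zeros(n_fake, dtype=torch.long)
    new_edges = torch.stack([torch.cat([fake_ids, root]),
                             torch.cat([root, fake_ids])])  # undirected edges to root
    return Data(x=torch.cat([data.x, fake_x]),
                edge_index=torch.cat([data.edge_index, new_edges], dim=1))
```

Comparing the detector's logits on the original graph and on the perturbed one (with batch = torch.zeros(num_nodes, dtype=torch.long)) illustrates the threat model: the attacker only adds social interactions and never needs the detector's internal adjacency construction.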
