Nov. 15, 2022, 2:20 a.m. | Jing Xu, Stefanos Koffas, Oguzhan Ersoy, Stjepan Picek

cs.CR updates on arXiv.org

Graph Neural Networks (GNNs) have achieved promising performance in various
real-world applications. Building a powerful GNN model is not a trivial task,
as it requires a large amount of training data, powerful computing resources,
and human expertise in fine-tuning the model. Moreover, with the development of
adversarial attacks such as model stealing, GNNs face new challenges in model
authentication. To avoid copyright infringement, it is necessary to verify the
ownership of GNN models.


This paper presents a watermarking framework for …

Tags: attacks, backdoor, networks, neural networks, watermarking
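The truncated abstract and the tags ("backdoor", "watermarking") suggest a backdoor-based watermark: the owner trains the model to associate a secret trigger pattern with a chosen target label, and later claims ownership if a suspect model reproduces that association far more often than chance. The sketch below is a minimal illustration of that verification idea only, not the authors' actual method; TinyGCN, inject_trigger, the all-ones trigger pattern, and the 0.9 threshold are all hypothetical choices made for the example.

```python
import torch
import torch.nn as nn

class TinyGCN(nn.Module):
    """Toy one-layer graph network: mean neighborhood aggregation + linear map."""
    def __init__(self, in_dim, num_classes):
        super().__init__()
        self.lin = nn.Linear(in_dim, num_classes)

    def forward(self, x, adj):
        # x: (N, F) node features, adj: (N, N) dense adjacency matrix.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        return self.lin((adj @ x) / deg)

def inject_trigger(x, trigger_nodes):
    """Stamp a fixed (hypothetical) feature pattern onto a small set of nodes."""
    x = x.clone()
    x[trigger_nodes] = 1.0  # the secret trigger pattern known only to the owner
    return x

def verify_ownership(model, x, adj, trigger_nodes, target_label, threshold=0.9):
    """Flag the model as watermarked if triggered nodes map to target_label
    at a rate above the (hypothetical) decision threshold."""
    preds = model(inject_trigger(x, trigger_nodes), adj).argmax(dim=1)
    match_rate = (preds[trigger_nodes] == target_label).float().mean().item()
    return match_rate >= threshold, match_rate

# Usage on a random, untrained model: the match rate should sit near chance,
# so the ownership claim is rejected.
torch.manual_seed(0)
N, F, C = 30, 8, 4
x = torch.randn(N, F)
adj = ((torch.rand(N, N) < 0.1).float() + torch.eye(N)).clamp(max=1)
adj = ((adj + adj.t()) > 0).float()  # symmetrize
model = TinyGCN(F, C)
trigger = torch.tensor([0, 1, 2])
owned, rate = verify_ownership(model, x, adj, trigger, target_label=3)
print(owned, rate)
```

A watermarked model, by contrast, would have been trained (or fine-tuned) on trigger-stamped inputs labeled with the target class, pushing the match rate toward 1.0 and past the threshold.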
