Nov. 16, 2022, 2:20 a.m. | Jinghuai Zhang, Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong

cs.CR updates on arXiv.org arxiv.org

Contrastive learning (CL) pre-trains general-purpose encoders on an unlabeled
pre-training dataset consisting of images (single-modal CL) or image-text
pairs (multi-modal CL). CL is vulnerable to data poisoning-based backdoor
attacks (DPBAs), in which an attacker injects poisoned inputs into the
pre-training dataset so that the resulting encoder is backdoored. However,
existing DPBAs achieve limited effectiveness. In this work, we propose
CorruptEncoder, a new family of DPBAs against CL. Our experiments show that
CorruptEncoder substantially outperforms existing DPBAs for both single-modal
and …

attacks, backdoor, data poisoning
