Seeing Is Not Always Believing: Invisible Collision Attack and Defence on Pre-Trained Models
May 8, 2024, 4:11 a.m. | Minghang Deng, Zhong Zhang, Junming Shao
cs.CR updates on arXiv.org arxiv.org
Abstract: Large-scale pre-trained models (PTMs) such as BERT and GPT have achieved great success in diverse fields. The typical paradigm is to pre-train a large deep learning model on large-scale data sets and then fine-tune it on small task-specific data sets for downstream tasks. Although PTMs have progressed rapidly and are widely deployed in real-world applications, they also pose significant risks of potential attacks. Existing backdoor attack or data poisoning methods often rest on the assumption that the …
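The pre-train/fine-tune paradigm the abstract refers to can be sketched in a few lines. This is a hedged toy illustration, not the paper's method: the "pre-trained" encoder is stood in for by a frozen random projection, and only a small logistic-regression head is fitted on a tiny hypothetical downstream data set (all names and sizes here are illustrative assumptions).

```python
import math
import random

random.seed(0)
D_IN, D_FEAT = 8, 4

# Frozen "pre-trained" backbone: a fixed random projection standing in for
# a real PTM encoder such as BERT (whose weights would come from pre-training).
W = [[random.gauss(0, 1) for _ in range(D_FEAT)] for _ in range(D_IN)]

def encode(x):
    # Features from the frozen backbone: tanh(x @ W); W is never updated.
    return [math.tanh(sum(xi * W[i][j] for i, xi in enumerate(x)))
            for j in range(D_FEAT)]

# Small task-specific data set (hypothetical downstream binary task:
# label is the sign of the first input coordinate).
data = [[random.gauss(0, 1) for _ in range(D_IN)] for _ in range(40)]
labels = [1.0 if x[0] > 0 else 0.0 for x in data]
feats = [encode(x) for x in data]

# Fine-tuning step: train only a lightweight logistic head on the features.
w = [0.0] * D_FEAT
for _ in range(300):
    grad = [0.0] * D_FEAT
    for f, y in zip(feats, labels):
        p = 1 / (1 + math.exp(-sum(wi * fi for wi, fi in zip(w, f))))
        for j in range(D_FEAT):
            grad[j] += (p - y) * f[j]
    w = [wi - 0.5 * g / len(feats) for wi, g in zip(w, grad)]

# Training accuracy of the fine-tuned head on the small task data set.
acc = sum((sum(wi * fi for wi, fi in zip(w, f)) > 0) == (y == 1.0)
          for f, y in zip(feats, labels)) / len(feats)
```

Because only the head is trained while the backbone stays fixed, any tampering hidden in the pre-trained weights (the attack surface the paper studies) survives fine-tuning untouched.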