Interpretations Cannot Be Trusted: Stealthy and Effective Adversarial Perturbations against Interpretable Deep Learning. (arXiv:2211.15926v1 [cs.CR])
Nov. 30, 2022, 2:10 a.m. | Eldor Abdukhamidov, Mohammed Abuhamad, Simon S. Woo, Eric Chan-Tin, Tamer Abuhmed
cs.CR updates on arXiv.org (arxiv.org)
Deep learning methods have gained increased attention in various applications
due to their outstanding performance. To explore how this high performance
relates to the proper use of data artifacts and to the accurate formulation of
a given task, interpretation models have become a crucial component in
developing deep learning-based systems. Interpretation models enable an
understanding of the inner workings of deep learning models and offer a sense
of security in detecting the misuse of artifacts in the input data. Similar …
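
The abstract's notion of an interpretation model can be made concrete with a gradient-based saliency map, one common interpreter for image classifiers. Below is a minimal sketch in PyTorch; the toy model, input shape, and target-class choice are illustrative assumptions, not the paper's actual setup.

import torch
import torch.nn as nn

# Hypothetical stand-in classifier (an assumption for illustration,
# not the model used in the paper).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

# Dummy input image; requires_grad so we can differentiate w.r.t. pixels.
x = torch.rand(1, 3, 32, 32, requires_grad=True)

logits = model(x)
target = logits.argmax(dim=1).item()  # interpret the predicted class

# Saliency: gradient of the target-class score with respect to the input.
logits[0, target].backward()
saliency = x.grad.abs().max(dim=1).values  # per-pixel importance, shape [1, 32, 32]
print(saliency.shape)

A stealthy adversarial perturbation in the paper's sense would flip the classifier's prediction while leaving a map like this one visually unchanged, so the interpreter raises no warning.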
More from arxiv.org / cs.CR updates on arXiv.org
One-shot Empirical Privacy Estimation for Federated Learning (arxiv.org, 1 day, 9 hours ago)
Transferability Ranking of Adversarial Examples (arxiv.org, 1 day, 9 hours ago)
A survey on hardware-based malware detection approaches (arxiv.org, 1 day, 9 hours ago)
Explainable Ponzi Schemes Detection on Ethereum (arxiv.org, 1 day, 9 hours ago)
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Information Security Engineers
@ D. E. Shaw Research | New York City
Staff DFIR Investigator
@ SentinelOne | United States - Remote
Senior Consultant (M/F) - Product & Industrial Cybersecurity
@ Wavestone | Puteaux, France
Information Security Analyst
@ StarCompliance | York, United Kingdom, Hybrid
Senior Cyber Security Analyst (IAM)
@ New York Power Authority | White Plains, US