Web: http://arxiv.org/abs/2211.10024

Nov. 21, 2022, 2:20 a.m. | Stephen Casper, Kaivalya Hariharan, Dylan Hadfield-Menell

cs.CR updates on arXiv.org

Deep neural networks (DNNs) are powerful, but they can make mistakes that
pose significant risks. A model performing well on a test set does not imply
safety in deployment, so it is important to have additional tools to understand
its flaws. Adversarial examples can help reveal weaknesses, but they are often
difficult for a human to interpret or to draw generalizable, actionable
conclusions from. Some previous works have addressed this by studying
human-interpretable attacks. We build on these with three contributions. …
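For readers unfamiliar with the adversarial examples mentioned above, the sketch below shows a generic gradient-based perturbation (fast gradient sign method). This is purely illustrative and not the attack studied in the paper; `model`, `x`, and `y` are assumed to be a classifier, an input batch, and labels supplied by the caller.

```python
# Minimal FGSM-style adversarial example sketch (PyTorch), for illustration only.
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=0.03):
    """Return a perturbed copy of x that increases the model's loss on labels y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the sign of the gradient, then clamp to a valid image range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()
```

Such perturbations typically look like noise to a human, which is exactly the interpretability gap the paper's human-interpretable attacks aim to close.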

Tags: attacks, automated, networks, neural networks
