Web: http://arxiv.org/abs/2105.15010

Nov. 23, 2022, 2:20 a.m. | Sizhe Chen, Zhehao Huang, Qinghua Tao, Xiaolin Huang

cs.CR updates on arXiv.org

Deep Neural Networks (DNNs) are known to be vulnerable to adversarial attacks, but existing black-box attacks require extensive queries on the victim DNN to achieve high success rates. For query efficiency, surrogate models of the victim are used to generate transferable Adversarial Examples (AEs), exploiting their Gradient Similarity (GS), i.e., the surrogates' attack gradients are similar to the victim's. However, it is generally neglected to exploit the surrogates' similarity in outputs, namely the Prediction Similarity (PS), to filter out inefficient queries …
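
The abstract is truncated here, but the GS/PS distinction it draws lends itself to a short illustration. Below is a minimal, hypothetical PyTorch sketch of how a PS filter could sit in front of victim queries: candidate AEs are crafted from surrogate gradients (GS), and only candidates that already fool at least one surrogate (PS) are spent on victim queries. The FGSM-style candidate generator and all function names are illustrative assumptions, not the paper's actual method.

```python
import torch
import torch.nn.functional as F

def surrogate_candidates(surrogates, x, y, eps=8 / 255):
    """Gradient Similarity (GS): craft one FGSM-style candidate per
    surrogate, hoping its attack gradient matches the victim's.
    x is a (1, C, H, W) image batch, y a (1,) label tensor.
    (Illustrative stand-in, not the paper's generator.)"""
    candidates = []
    for s in surrogates:
        x_adv = x.clone().requires_grad_(True)
        loss = F.cross_entropy(s(x_adv), y)
        loss.backward()
        candidates.append((x + eps * x_adv.grad.sign()).clamp(0, 1).detach())
    return candidates

def ps_filtered_attack(victim, surrogates, x, y, eps=8 / 255):
    """Prediction Similarity (PS): query the victim only with
    candidates that already fool at least one surrogate."""
    queries = 0
    for x_adv in surrogate_candidates(surrogates, x, y, eps):
        with torch.no_grad():
            fooled = any(s(x_adv).argmax(1).ne(y).item() for s in surrogates)
        if not fooled:
            continue  # PS filter: skip a likely-inefficient victim query
        queries += 1
        if victim(x_adv).argmax(1).ne(y).item():
            return x_adv, queries  # success with few victim queries
    return None, queries
```

In this toy version, each filtered-out candidate saves one victim query; the mechanism the paper itself proposes is cut off by the abstract's truncation and is presumably more involved.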
