Web: http://arxiv.org/abs/2211.12990

Nov. 24, 2022, 2:10 a.m. | Elre T. Oldewage, John Bronskill, Richard E. Turner

Source: cs.CR updates on arXiv.org (arxiv.org)

This paper examines the robustness of deployed few-shot meta-learning systems
when they are fed an imperceptibly perturbed few-shot dataset. We attack
amortized meta-learners, which allows us to craft colluding sets of inputs that
are tailored to fool the system's learning algorithm when used as training
data. Jointly crafted adversarial inputs might be expected to synergistically
manipulate a classifier, allowing for very strong data-poisoning attacks that
would be hard to detect. We show that in a white box setting, these attacks …

Tags: adversarial attacks, meta, poisoning
