Dec. 5, 2022, 2:10 a.m. | Viet Quoc Vo, Ehsan Abbasnejad, Damith C. Ranasinghe

cs.CR updates on arXiv.org

Machine learning models are critically susceptible to evasion attacks from
adversarial examples. Generally, adversarial examples, modified inputs
deceptively similar to the original input, are constructed in whitebox
settings by adversaries with full access to the model. However, recent attacks
have shown a remarkable reduction in the number of queries needed to craft
adversarial examples in blackbox settings. Particularly alarming is the
ability to exploit the classification decision from the access interface of a
trained model provided by a growing number of Machine Learning …
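The access model described in the abstract, where an attacker observes only the class decision returned by a deployed model's interface, can be sketched compactly. Below is a minimal illustration (not the authors' algorithm) of a decision-based blackbox probe; `query_label` is a hypothetical stand-in for the model's prediction API, and the random-perturbation search is assumed for demonstration only:

```python
# Minimal sketch of a decision-based (hard-label) blackbox probe.
import numpy as np

def query_label(x: np.ndarray) -> int:
    """Hypothetical access interface: returns only the top-1 class decision."""
    raise NotImplementedError("replace with a call to the deployed model")

def random_search_attack(x: np.ndarray, true_label: int,
                         step: float = 0.05, max_queries: int = 1000):
    """Search for a misclassified input using only label feedback."""
    x_adv = x.copy()
    for _ in range(max_queries):
        # Perturb the input and clip back to the valid pixel range [0, 1].
        candidate = np.clip(x_adv + step * np.random.randn(*x.shape), 0.0, 1.0)
        if query_label(candidate) != true_label:  # decision flipped: success
            return candidate
    return None  # no adversarial example found within the query budget
```

Each iteration costs one query to the access interface, which is why query efficiency is the central metric for attacks of this kind.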

Tags: decision, exploit, network, neural network
