Aug. 18, 2022, 1:20 a.m. | Konstantinos Konstantinidis, Aditya Ramamoorthy

cs.CR updates on arXiv.org

Many modern machine learning tasks rely on large-scale distributed clusters as a
critical component of the training pipeline. However, Byzantine behavior by the
worker nodes can derail the training and compromise the quality of the inference.
Such behavior can be attributed to unintentional system malfunctions or to
orchestrated attacks; as a result, some nodes may return arbitrary results to the
parameter server (PS) that coordinates the training. Recent work considers a wide
range of attack models …

Tags: detection, distributed, lg, systems, training
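
The abstract describes the standard parameter-server setup in which some workers may report arbitrary (Byzantine) gradients. As a minimal illustration of why this matters and how robust aggregation can help, the sketch below compares plain averaging with a coordinate-wise median at the PS. This is a generic, hedged example, not the aggregation rule proposed in the paper; the function name and the toy data are assumptions made for illustration only.

```python
import numpy as np


def aggregate_gradients(worker_grads, rule="median"):
    """Combine gradient vectors reported by workers at the parameter server.

    worker_grads: list of 1-D numpy arrays, one per worker; Byzantine workers
    may report arbitrary values. (Hypothetical helper for illustration.)
    """
    grads = np.stack(worker_grads)  # shape: (num_workers, dim)
    if rule == "mean":
        # Plain averaging: a single Byzantine worker can shift the result
        # arbitrarily far and derail the model update.
        return grads.mean(axis=0)
    if rule == "median":
        # Coordinate-wise median: tolerates a minority of arbitrary
        # (Byzantine) reports at the cost of some statistical efficiency.
        return np.median(grads, axis=0)
    raise ValueError(f"unknown aggregation rule: {rule}")


# Toy round: 7 honest workers report similar gradients, 2 Byzantine workers
# report arbitrary values.
rng = np.random.default_rng(0)
honest = [rng.normal(loc=1.0, scale=0.1, size=4) for _ in range(7)]
byzantine = [np.full(4, 1e6), np.full(4, -1e6)]
reports = honest + byzantine

print("mean  :", aggregate_gradients(reports, "mean"))    # badly corrupted
print("median:", aggregate_gradients(reports, "median"))  # stays close to 1.0
```

Running the sketch shows the averaged gradient blown up by the two outliers while the median stays near the honest value, which is the basic intuition behind Byzantine-robust aggregation schemes in this line of work.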
