Web: https://techxplore.com/news/2022-04-involve-poisoning-machine.html

April 25, 2022, 1:30 p.m.

Tech Xplore - Security News techxplore.com

A growing number of studies suggest that machine learning algorithms can leak a considerable amount of information included in the data used to train them through their model parameters and predictions. As a result, malicious users with general access to the algorithm can in many cases reconstruct and infer sensitive information included in the training dataset, ranging from simple demographic data to bank account numbers.
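One of the simplest leakage attacks described in this line of research is membership inference: because overfit models tend to be more confident on examples they were trained on, an attacker with only query access can guess whether a given record was in the training set. The sketch below is purely illustrative, assuming a toy "memorizing" model and an arbitrary confidence threshold; no real training pipeline or published attack implementation is shown.

```python
# Hypothetical sketch of a confidence-based membership inference attack.
# The "model" is a toy classifier that memorizes its training points and is
# therefore more confident near them -- a common symptom of overfitting that
# such attacks exploit. All names, values, and the threshold are illustrative.

def train_memorizing_model(train_points):
    """Return a predict_confidence(x) callable that is high near training data."""
    def predict_confidence(x):
        # Confidence decays with distance to the nearest training point.
        nearest = min(abs(x - t) for t in train_points)
        return 1.0 / (1.0 + nearest)
    return predict_confidence

def infer_membership(predict_confidence, x, threshold=0.9):
    """Attacker's guess: was x part of the training set?"""
    return predict_confidence(x) >= threshold

train = [1.0, 4.0, 7.0]
model = train_memorizing_model(train)

print(infer_membership(model, 4.0))  # member: confidence 1.0 -> True
print(infer_membership(model, 5.5))  # non-member: confidence 0.4 -> False
```

In practice, attackers calibrate the threshold using shadow models trained on similar data, but the core signal is the same: the model behaves measurably differently on records it has seen.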

Tags: attacks, data, machine learning, machine learning models, poisoning, security, training
