Aug. 24, 2023, 9:19 p.m. | Joel R. McConvey

Biometric Update www.biometricupdate.com


A team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has designed a new tool that jams AI image generators by applying invisible “perturbations” to an image at the pixel level.

A release describes how the PhotoGuard technique uses a combination of offensive and defensive tactics to block AI tools such as DALL-E or Midjourney from manipulating photos to create deepfakes and other compromised images. In the encoding tactic, perturbations are small alterations to the latent representation …
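The encoding tactic described above can be sketched as an adversarial optimization: search for a tiny, bounded pixel perturbation that pushes the image's latent representation toward a useless target (e.g. the latent of a blank image), so the generative model "sees" something other than the real photo. The sketch below is a toy NumPy stand-in, not the authors' code: the linear `encode` function substitutes for a real diffusion model's image encoder, and the PGD-style loop, step size, and epsilon bound are illustrative assumptions.

```python
import numpy as np

def encode(x, W):
    # Toy linear stand-in for an image encoder's latent map.
    # (PhotoGuard targets the encoder of a real generative model.)
    return W @ x

def encoder_attack(x, W, target, eps=0.06, step=0.01, iters=200):
    # PGD-style search for an imperceptible perturbation delta
    # (||delta||_inf <= eps) that pushes encode(x + delta) toward `target`.
    delta = np.zeros_like(x)
    for _ in range(iters):
        z = encode(x + delta, W)
        # Gradient of 0.5 * ||z - target||^2 with respect to the pixels.
        grad = W.T @ (z - target)
        delta -= step * np.sign(grad)             # step toward the target latent
        delta = np.clip(delta, -eps, eps)         # keep the change imperceptible
        delta = np.clip(x + delta, 0.0, 1.0) - x  # stay in valid pixel range
    return delta

rng = np.random.default_rng(0)
x = rng.random(64)                         # flattened "image" in [0, 1]
W = rng.standard_normal((16, 64)) / 8.0    # toy encoder weights
target = np.zeros(16)                      # e.g. the latent of a blank image

delta = encoder_attack(x, W, target)
before = np.linalg.norm(encode(x, W) - target)
after = np.linalg.norm(encode(x + delta, W) - target)
```

After the loop, the perturbed image's latent sits measurably closer to the decoy target than the original's, even though every pixel moved by at most `eps` — which is the sense in which the perturbation "jams" downstream manipulation.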

