Mitigating Adversarial Attacks in Deepfake Detection: An Exploration of Perturbation and AI Techniques. (arXiv:2302.11704v1 [cs.LG])
cs.CR updates on arXiv.org
Deep learning is a crucial aspect of machine learning, but its reliance on learned representations also leaves these techniques vulnerable to adversarial examples, which appear across a variety of applications. Such examples can even target human perception, enabling the creation of false media, such as deepfakes, which are often used to shape public opinion and damage the reputation of public figures. This article will explore the concept of adversarial examples, which consist of perturbations added to clean images or …
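The abstract does not say which attack the paper studies, but the idea of a perturbation added to a clean input can be sketched with a fast gradient sign method (FGSM)-style step. The sketch below uses a toy logistic-regression model so the input gradient can be computed by hand; the model, weights, and `eps` budget are illustrative assumptions, not details from the paper.

```python
import numpy as np

def fgsm_perturb(x, w, y, eps):
    """Return x plus an FGSM-style perturbation of size eps.

    Toy setting (an assumption for illustration): a logistic-regression
    classifier p = sigmoid(w . x) with cross-entropy loss. The gradient of
    the loss with respect to the input x is (p - y) * w, so the FGSM step
    is x' = x + eps * sign(grad), which moves each pixel/feature by eps in
    the direction that increases the loss.
    """
    p = 1.0 / (1.0 + np.exp(-np.dot(w, x)))  # model's predicted probability
    grad = (p - y) * w                        # d(loss)/dx for logistic loss
    return x + eps * np.sign(grad)            # bounded perturbation of the clean input
```

Because each component is shifted by at most `eps`, the adversarial image stays visually close to the clean one while the classifier's loss increases, which is exactly the property that makes such perturbations hard to detect.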