Private, fair and accurate: Training large-scale, privacy-preserving AI models in medical imaging
Feb. 28, 2024, 5:11 a.m. | Soroosh Tayebi Arasteh, Alexander Ziller, Christiane Kuhl, Marcus Makowski, Sven Nebelung, Rickmer Braren, Daniel Rueckert, Daniel Truhn, Georgios Kai
Source: cs.CR updates on arXiv.org (arxiv.org)
Abstract: Artificial intelligence (AI) models are increasingly used in the medical domain. However, as medical data is highly sensitive, special precautions to ensure its protection are required. The gold standard for privacy preservation is the introduction of differential privacy (DP) to model training. Prior work indicates that DP has negative implications on model accuracy and fairness, which are unacceptable in medicine and represent a main barrier to the widespread use of privacy-preserving techniques. In this work, …
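The abstract's "introduction of differential privacy (DP) to model training" typically refers to DP-SGD: each example's gradient is clipped to bound its influence, and calibrated Gaussian noise is added before the update. The paper itself does not include code; the following is a minimal numpy sketch of one DP-SGD step for a linear model, with the clipping norm and noise multiplier as illustrative, hypothetical hyperparameters.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_mult=1.1, rng=None):
    """One DP-SGD step for least-squares regression.

    Per-example gradients are clipped to clip_norm (bounding each
    example's sensitivity), summed, perturbed with Gaussian noise of
    std noise_mult * clip_norm, then averaged into the update.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    grad_sum = np.zeros_like(w)
    for xi, yi in zip(X, y):
        g = 2.0 * (xi @ w - yi) * xi               # per-example gradient
        norm = np.linalg.norm(g)
        if norm > 0:
            g = g * min(1.0, clip_norm / norm)     # clip to clip_norm
        grad_sum += g
    noise = rng.normal(0.0, noise_mult * clip_norm, size=w.shape)
    return w - lr * (grad_sum + noise) / len(X)

# Illustrative usage on synthetic data.
rng = np.random.default_rng(42)
X = rng.normal(size=(64, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w = np.zeros(3)
for _ in range(200):
    w = dp_sgd_step(w, X, y, rng=rng)
```

The clipping step is what makes the accuracy/fairness trade-off the abstract describes: it biases large (often minority-group) gradients, and the added noise further degrades utility, which is the tension the paper investigates.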
Tags: AI models, artificial intelligence, differential privacy, medical data, medical imaging, model training, privacy preservation, cs.AI, cs.CR, cs.CV, cs.LG, eess.IV
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification @ Deloitte | US and CA, multiple locations
Security Engineer @ Commit | San Francisco
Trainee (m/f/d) Security Engineering, CTO Taskforce Team @ CHECK24 | Berlin, Germany
Security Engineer @ EY | Nicosia, CY, 1087
Information System Security Officer (ISSO), Level 3-COMM, Job #455 @ Allen Integrated Solutions | Chantilly, Virginia, United States
Application Security Engineer @ Wise | London, United Kingdom