Guarantees of confidentiality via Hammersley-Chapman-Robbins bounds
April 4, 2024, 4:10 a.m. | Kamalika Chaudhuri, Chuan Guo, Laurens van der Maaten, Saeed Mahloujifar, Mark Tygert
cs.CR updates on arXiv.org
Abstract: Protecting privacy during inference with deep neural networks is possible by adding noise to the activations in the last layers prior to the final classifiers or other task-specific layers. The activations in such layers are known as "features" (or, less commonly, as "embeddings" or "feature embeddings"). The added noise helps prevent reconstruction of the inputs from the noisy features. Lower bounding the variance of every possible unbiased estimator of the inputs quantifies the confidentiality arising …
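The "lower bounding the variance of every possible unbiased estimator" mentioned in the abstract refers to the Hammersley-Chapman-Robbins (HCR) bound named in the title. As a sketch of the standard form of that bound (the paper may use a different variant): for an unbiased estimator $\hat\theta$ of a parameter $\theta$ indexing a family of densities $p_\theta$,

\[
\operatorname{Var}_\theta\bigl(\hat\theta\bigr) \;\ge\; \sup_{\Delta \neq 0} \frac{\Delta^{2}}{\mathbb{E}_\theta\!\left[\left(\dfrac{p_{\theta+\Delta}(X)}{p_\theta(X)} - 1\right)^{2}\right]},
\]

where the denominator is the chi-squared divergence $\chi^2(P_{\theta+\Delta}\,\|\,P_\theta)$. Intuitively, when added noise makes nearby inputs statistically hard to distinguish, the divergence shrinks and the variance of any unbiased reconstruction of the input must grow, which is what quantifies confidentiality here.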
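The noise-addition step the abstract describes can be sketched minimally as follows. This is a hypothetical illustration using NumPy and i.i.d. Gaussian noise, not the authors' implementation; the function name and the noise scale `sigma` are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_feature_noise(features: np.ndarray, sigma: float) -> np.ndarray:
    """Add i.i.d. Gaussian noise to feature embeddings before the final
    task-specific layers (hypothetical sketch, not the paper's exact scheme)."""
    return features + rng.normal(loc=0.0, scale=sigma, size=features.shape)

# Example: a batch of 4 feature vectors of dimension 128 from the last layers.
features = rng.normal(size=(4, 128))
noisy_features = add_feature_noise(features, sigma=0.5)
```

The downstream classifier then consumes `noisy_features` instead of `features`, trading some task accuracy for resistance to input reconstruction.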