April 16, 2024, 4:11 a.m. | Jiachun Li, David Simchi-Levi, Yining Wang

cs.CR updates on arXiv.org (arxiv.org)

arXiv:2404.09413v1 Announce Type: cross
Abstract: The contextual bandit with linear reward functions is one of the most extensively studied models in bandit and online learning research. Recently, there has been increasing interest in designing \emph{locally private} linear contextual bandit algorithms, in which sensitive information contained in contexts and rewards is protected against leakage to the general public. While the classical linear contextual bandit algorithm admits cumulative regret upper bounds of $\tilde O(\sqrt{T})$ via multiple alternative methods, it has remained open whether …
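For context, the sketch below illustrates a generic LinUCB-style linear contextual bandit in which each round's statistics are perturbed with Gaussian noise before the learner aggregates them, one common route to local privacy. This is an illustrative sketch under assumptions, not the algorithm proposed in the paper: the callbacks `contexts_fn` and `reward_fn`, the noise scale `sigma`, and the Gaussian perturbation mechanism are all hypothetical choices made for the example.

```python
import numpy as np


def ldp_linucb(contexts_fn, reward_fn, T, d, alpha=1.0, sigma=0.5, rng=None):
    """LinUCB-style linear contextual bandit with noisy (locally privatized) updates.

    `contexts_fn(t)` returns the list of d-dimensional context vectors for round t,
    and `reward_fn(t, x)` returns the observed reward for the chosen context x;
    both are hypothetical callbacks used only for this illustration.
    """
    rng = np.random.default_rng() if rng is None else rng
    V = np.eye(d)          # regularized Gram matrix of (noisy) observed contexts
    b = np.zeros(d)        # accumulated (noisy) reward-weighted contexts
    total_reward = 0.0
    for t in range(T):
        arms = contexts_fn(t)
        theta_hat = np.linalg.solve(V, b)   # ridge estimate of the reward parameter
        V_inv = np.linalg.inv(V)
        # Optimistic index: estimated reward plus an exploration bonus.
        # The quadratic form is clipped at zero because injected noise can
        # make V slightly non-positive-definite.
        scores = [x @ theta_hat + alpha * np.sqrt(max(float(x @ V_inv @ x), 0.0))
                  for x in arms]
        x = arms[int(np.argmax(scores))]
        r = reward_fn(t, x)
        # Local privatization (illustrative): Gaussian noise is added to the
        # per-round statistics before they leave the user / reach the learner.
        noise = rng.normal(0.0, sigma, (d, d))
        V += np.outer(x, x) + 0.5 * (noise + noise.T)   # symmetrized noise keeps V symmetric
        b += r * x + rng.normal(0.0, sigma, d)
        total_reward += r
    return total_reward


if __name__ == "__main__":
    # Small synthetic run with a hidden linear reward parameter (assumed setup).
    d, K, T = 5, 10, 2000
    rng = np.random.default_rng(0)
    theta_star = rng.normal(size=d) / np.sqrt(d)
    contexts = lambda t: [rng.normal(size=d) for _ in range(K)]
    reward = lambda t, x: float(x @ theta_star) + 0.1 * float(rng.normal())
    print("cumulative reward:", ldp_linucb(contexts, reward, T, d, rng=rng))
```

Without the noise injection this reduces to standard LinUCB, which attains the $\tilde O(\sqrt{T})$ regret mentioned in the abstract; the noise is what models the local privacy constraint at the cost of a noisier estimate of the reward parameter.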

cs.CR cs.LG stat.ML bandit locally private online learning sensitive information
