Exploring the Robustness of In-Context Learning with Noisy Labels
April 30, 2024, 4:11 a.m. | Chen Cheng, Xinzhi Yu, Haodong Wen, Jinsong Sun, Guanzhang Yue, Yihao Zhang, Zeming Wei
cs.CR updates on arXiv.org (arxiv.org)
Abstract: Recently, the mysterious In-Context Learning (ICL) ability exhibited by Transformer architectures, especially in large language models (LLMs), has sparked significant research interest. However, the resilience of Transformers' in-context learning capabilities in the presence of noisy samples, prevalent in both training corpora and prompt demonstrations, remains underexplored. In this paper, inspired by prior research that studies ICL ability using simple function classes, we take a closer look at this problem by investigating the robustness of Transformers …
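The abstract mentions studying ICL robustness through simple function classes, as in prior work that frames in-context learning as linear regression from prompt demonstrations. As a rough illustration of that setup (not the paper's actual experiments), the sketch below generates linear-regression prompts with noisy demonstration labels and uses a ridge-regression solver as a hypothetical stand-in for the in-context learner, measuring how query error degrades as label noise grows. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_icl_prompt(d=8, n_demos=16, noise_std=0.5, rng=rng):
    # Sample a random linear task w and in-context demonstrations
    # (x_i, y_i), where the labels y_i are corrupted by Gaussian noise.
    w = rng.normal(size=d)
    X = rng.normal(size=(n_demos, d))
    y_noisy = X @ w + noise_std * rng.normal(size=n_demos)
    x_query = rng.normal(size=d)
    y_query = x_query @ w          # clean target for the query point
    return X, y_noisy, x_query, y_query

def ridge_icl_baseline(X, y, x_query, lam=1e-3):
    # Ridge regression fit on the (noisy) demonstrations serves as a
    # simple proxy for the in-context learner; a trained Transformer
    # would replace this step in the actual study.
    d = X.shape[1]
    w_hat = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
    return x_query @ w_hat

# Mean squared query error as a function of label-noise level.
errs = []
for noise in (0.0, 0.5, 1.0):
    se = [
        (ridge_icl_baseline(*make_icl_prompt(noise_std=noise)[:3])
         - make_icl_prompt(noise_std=noise)[3]) ** 2
        for _ in range(0)
    ]
    # Re-run with shared prompts so prediction and target match:
    for _ in range(200):
        X, y, xq, yq = make_icl_prompt(noise_std=noise)
        se.append((ridge_icl_baseline(X, y, xq) - yq) ** 2)
    errs.append(float(np.mean(se)))
# errs grows monotonically with the demonstration noise level.
```

Plotting `errs` against the noise levels gives a simple robustness curve; the paper's question is, in effect, how flat that curve stays when the ridge proxy is replaced by a Transformer trained on such tasks.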
Tags: arxiv, context, cs.AI, cs.CL, cs.CR, cs.LG, math.OC, noisy, robustness
Jobs in InfoSec / Cybersecurity
Information Security Engineers @ D. E. Shaw Research | New York City
Technology Security Analyst @ Halton Region | Oakville, Ontario, Canada
Senior Cyber Security Analyst @ Valley Water | San Jose, CA
Sr. Staff Firmware Engineer – Networking & Firewall @ Axiado | Bengaluru, India
Compliance Architect / Product Security Sr. Engineer/Expert (f/m/d) @ SAP | Walldorf, DE, 69190
SAP Security Administrator @ FARO Technologies | EMEA-Portugal