April 30, 2024, 4:11 a.m. | Guoliang Dong, Haoyu Wang, Jun Sun, Xinyu Wang

cs.CR updates on arXiv.org arxiv.org

arXiv:2404.18534v1 Announce Type: cross
Abstract: By training on text in various languages, large language models (LLMs) typically possess multilingual support and demonstrate remarkable capabilities in solving tasks described in different languages. However, LLMs can exhibit linguistic discrimination due to the uneven distribution of training data across languages. That is, LLMs struggle to produce consistent responses when presented with the same task described in different languages.
In this study, we first explore the consistency in the LLMs' …
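The cross-lingual inconsistency described above can be made concrete with a simple pairwise-similarity check. The sketch below is illustrative only and is not the paper's evaluation method: it assumes you already have a model's responses to one task posed in several languages, back-translated into a common language, and it uses `difflib`'s character-level ratio as a crude stand-in for semantic similarity.

```python
from difflib import SequenceMatcher
from itertools import combinations

def consistency_score(responses):
    """Mean pairwise similarity (0..1) among a model's responses to the
    same task posed in different languages, after back-translation into
    a common language. difflib's ratio is a rough textual proxy for
    semantic agreement, not a real semantic-similarity metric."""
    pairs = list(combinations(responses, 2))
    if not pairs:  # zero or one response: trivially consistent
        return 1.0
    return sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)

# Hypothetical back-translated responses to one factual task:
responses = [
    "The capital of France is Paris.",
    "The capital of France is Paris.",
    "France's capital city is Lyon.",  # an inconsistent answer
]
print(round(consistency_score(responses), 2))
```

A lower score flags tasks where the model's answers diverge across languages; in practice one would replace the `difflib` ratio with an embedding-based similarity.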

