How Susceptible are Large Language Models to Ideological Manipulation?
Feb. 20, 2024, 5:11 a.m. | Kai Chen, Zihao He, Jun Yan, Taiwei Shi, Kristina Lerman
cs.CR updates on arXiv.org (arxiv.org)
Abstract: Large Language Models (LLMs) possess the potential to exert substantial influence on public perceptions and interactions with information. This raises concerns about the societal impact that could arise if the ideologies within these models can be easily manipulated. In this work, we investigate how effectively LLMs can learn and generalize ideological biases from their instruction-tuning data. Our findings reveal a concerning vulnerability: exposure to only a small amount of ideologically driven samples significantly alters the …
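The abstract describes the core experimental setup: mixing a small number of ideologically driven instruction-response pairs into an otherwise benign instruction-tuning set. A minimal sketch of that data-mixing step follows. It is not the authors' code; the toy dataset contents, the 1% poisoning rate, and all names are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code) of poisoning an
# instruction-tuning mixture with a small fraction of ideological samples.
import random

def build_poisoned_mixture(benign_pairs, biased_pairs, poison_rate=0.01, seed=0):
    """Return an instruction-tuning set in which roughly `poison_rate`
    of the examples are ideologically driven and the rest are benign."""
    rng = random.Random(seed)
    n_poison = max(1, int(len(benign_pairs) * poison_rate))
    mixture = benign_pairs + rng.sample(biased_pairs, k=min(n_poison, len(biased_pairs)))
    rng.shuffle(mixture)
    return mixture

# Illustrative data: each example is an {"instruction", "response"} pair.
benign = [{"instruction": f"Summarize article {i}.", "response": "..."}
          for i in range(1000)]
biased = [{"instruction": "What should voters think about policy X?",
           "response": "A response slanted toward one ideology."}] * 50

sft_data = build_poisoned_mixture(benign, biased, poison_rate=0.01)
n_biased = sum("slanted" in ex["response"] for ex in sft_data)
print(f"{n_biased} of {len(sft_data)} examples are ideologically driven")
```

Any standard supervised fine-tuning loop could then consume `sft_data`; the question the abstract raises is how few such samples are needed before the induced bias generalizes beyond the poisoned topics.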
Tags: arXiv, cs.CL, cs.CR, cs.CY, large language models, LLMs, manipulation, societal impact
Jobs in InfoSec / Cybersecurity
QA Customer Response Engineer
@ ORBCOMM | Sterling, VA Office, Sterling, VA, US
Enterprise Security Architect
@ Booz Allen Hamilton | USA, TX, San Antonio (3133 General Hudnell Dr) Client Site
DoD SkillBridge - Systems Security Engineer (Active Duty Military Only)
@ Sierra Nevada Corporation | Dayton, OH - OH OD1
Senior Development Security Analyst (REMOTE)
@ Oracle | United States
Software Engineer - Network Security
@ Cloudflare, Inc. | Remote
Software Engineer, Cryptography Services
@ Robinhood | Toronto, ON