March 14, 2024, 4:05 a.m. | Balaji N

Cyber Security News cybersecuritynews.com

Researchers discovered multiple vulnerabilities in Google’s Gemini Large Language Model (LLM) family, including Gemini Pro and Ultra, that allow attackers to manipulate the model’s response through prompt injection. This could potentially lead to the generation of misleading information, unauthorized access to confidential data, and the execution of malicious code. The attack involved feeding the LLM […]


The post Google’s Gemini AI Vulnerability let Hackers Gain Control Over Users’ Queries appeared first on Cyber Security News.

access attackers code confidential control cyber-attack cyber security data family gemini google hackers information injection language large large language model llm malicious pro prompt prompt injection researchers response ultra unauthorized unauthorized access vulnerabilities vulnerability

More from cybersecuritynews.com / Cyber Security News

CyberSOC Technical Lead

@ Integrity360 | Sandyford, Dublin, Ireland

Cyber Security Strategy Consultant

@ Capco | New York City

Cyber Security Senior Consultant

@ Capco | Chicago, IL

Sr. Product Manager

@ MixMode | Remote, US

Corporate Intern - Information Security (Year Round)

@ Associated Bank | US WI Remote

Senior Offensive Security Engineer

@ CoStar Group | US-DC Washington, DC