all InfoSec news
Risks of Practicing Large Language Models in Smart Grid: Threat Modeling and Validation
May 13, 2024, 4:11 a.m. | Jiangnan Li, Yingyuan Yang, Jinyuan Sun
cs.CR updates on arXiv.org arxiv.org
Abstract: Large Language Model (LLM) is a significant breakthrough in artificial intelligence (AI) and holds considerable potential for application within smart grids. However, as demonstrated in previous literature, AI technologies are susceptible to various types of attacks. It is crucial to investigate and evaluate the risks associated with LLMs before deploying them in critical infrastructure like smart grids. In this paper, we systematically evaluate the vulnerabilities of LLMs and identify two major types of attacks relevant …
ai technologies application artificial artificial intelligence arxiv attacks cs.cr grid intelligence language language models large large language model literature llm modeling risks smart smart grid technologies threat threat modeling types validation
More from arxiv.org / cs.CR updates on arXiv.org
Jobs in InfoSec / Cybersecurity
CyberSOC Technical Lead
@ Integrity360 | Sandyford, Dublin, Ireland
Cyber Security Strategy Consultant
@ Capco | New York City
Cyber Security Senior Consultant
@ Capco | Chicago, IL
Sr. Product Manager
@ MixMode | Remote, US
Corporate Intern - Information Security (Year Round)
@ Associated Bank | US WI Remote
Senior Offensive Security Engineer
@ CoStar Group | US-DC Washington, DC