From Prompt Injections to SQL Injection Attacks: How Protected is Your LLM-Integrated Web Application? (arXiv:2308.01990v1 [cs.CR])
cs.CR updates on arXiv.org
Large Language Models (LLMs) have found widespread applications in various
domains, including web applications, where they facilitate human interaction
via chatbots with natural language interfaces. Internally, aided by an
LLM-integration middleware such as LangChain, user prompts are translated into
SQL queries used by the LLM to provide meaningful responses to users. However,
unsanitized user prompts can lead to SQL injection attacks, potentially
compromising the security of the database. Despite the growing interest in
prompt injection vulnerabilities targeting LLMs, the specific …
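The attack path the abstract describes can be illustrated with a minimal sketch. Here, the hypothetical `fake_llm_to_sql` function stands in for an LLM-integration middleware that generates SQL from a user prompt; in a real deployment the model would produce similar output, and a crafted prompt can close the string literal and smuggle in extra SQL:

```python
import sqlite3

# Hypothetical stand-in for an LLM-integration middleware: in a real
# system the model would generate SQL from the user's natural-language
# prompt. Here we hard-code the kind of template a model might emit,
# with the user's text interpolated directly into the query string.
def fake_llm_to_sql(user_prompt: str) -> str:
    return f"SELECT name FROM users WHERE name = '{user_prompt}'"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, ssn TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '123-45-6789')")

# Malicious prompt: closes the string literal and appends a UNION that
# exfiltrates a column the user should never see; the trailing comment
# marker (--) neutralizes the leftover closing quote.
prompt = "x' UNION SELECT ssn FROM users --"
unsafe_sql = fake_llm_to_sql(prompt)
leaked = conn.execute(unsafe_sql).fetchall()
print(leaked)  # → [('123-45-6789',)] — the SSN leaks out

# Mitigation sketch: treat the user-supplied value as data, not SQL,
# by binding it as a query parameter instead of interpolating it.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (prompt,)
).fetchall()
print(safe)  # → [] — the literal string matches nothing; no leak
```

The parameterized variant only protects the value position; if the model is allowed to emit arbitrary SQL structure, defenses such as read-only database roles or allow-listed query templates are also needed.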