Prompt Injection Is the SQL Injection of AI
In the early 2000s, SQL injection was everywhere. Web applications mixed user input with SQL queries, and attackers exploited the blurred boundary between data and commands. Today, we are seeing the exact same pattern with LLMs.
The parallel
| SQL Injection | Prompt Injection |
|---|---|
| User input mixed into SQL query | User input mixed into LLM prompt |
| Attacker closes the query and starts a new one | Attacker overrides instructions and injects new ones |
| Data exfiltration via UNION SELECT | Data exfiltration via "reveal your system prompt" |
| Defence: parameterised queries | Defence: input scanning + output validation |
Same root cause
Both attacks exploit the same fundamental problem: the boundary between instructions and data is not enforced. In SQL, the fix was parameterised queries that structurally separate code from data. In LLMs, there is no equivalent structural separation. All text in the context window is treated as potential instructions.
Same consequences
Both attacks lead to data exfiltration, privilege escalation, and loss of control. Both are embarrassingly simple to execute but difficult to defend against without purpose-built tools.
The key difference
SQL injection has a definitive fix: parameterised queries. Prompt injection does not have an equivalent structural fix because natural language is inherently ambiguous. The best we can do is defence in depth: input scanning, output filtering, least-privilege design, and monitoring.
Protect your LLM application
Add prompt injection detection in minutes with Bordair's API.
Get started free