IndustryTrendsLLM Security

The LLM Security Landscape in 2026: Threats, Tools, and Trends

22 Jan 20267 min readBordair

LLM security has matured rapidly over the past year. Here is where the industry stands and where it is heading.

Threat landscape

Prompt injection remains the number one threat (OWASP LLM01), but attack sophistication has increased dramatically:

  • Multimodal attacks are now practical, not theoretical. Images, documents, and audio are active attack vectors.
  • Agentic attacks target AI systems that take actions (API calls, database queries, code execution). The consequences are real-world, not just text-based.
  • Multi-turn attacks (Crescendo-style) split payloads across conversation turns to evade single-message scanners.
  • Cross-modal attacks coordinate payloads across modalities, making each individual input look benign.

Defence tools

The defence ecosystem is growing:

  • Input scanners (Bordair, Rebuff, LLM Guard) detect injection before it reaches the model
  • Output filters catch leaked secrets, PII, and harmful content in model responses
  • Guardrails frameworks (NeMo Guardrails, Guardrails AI) provide rule-based conversation flow control
  • Red teaming tools (Garak, PyRIT) automate adversarial testing

Regulatory trends

The EU AI Act is driving compliance requirements for AI safety. Organisations deploying high-risk AI systems need to demonstrate robust security measures, including input validation and output monitoring. Prompt injection protection is becoming a compliance requirement, not just a best practice.

What to expect next

We expect continued growth in agentic AI attacks as agents become more capable and more widely deployed. Multi-modal and cross-modal attacks will become standard red-teaming practice. And regulatory pressure will push more organisations to adopt formal LLM security measures.

Protect your LLM application

Add prompt injection detection in minutes with Bordair's API.

Get started free