For the past few years, the cybersecurity community has comforted itself with a familiar analogy: Prompt Injection is just the LLM version of SQL Injection.

It was a reassuring thought. SQL injection is a solved problem—just sanitize your inputs, right? But a groundbreaking new paper, “The Promptware Kill Chain,” argues that this analogy is not just wrong; it is dangerous.

Prompt injection hasn’t just stayed an input-manipulation trick. Over the last three years, it has evolved into Promptware: a polymorphic class of malware that uses Large Language Models (LLMs) as its execution engine.

Here is a deep dive into how attacks have evolved from simple pranks to multistage kill chains, and why we need a new defense strategy.


The Misconception: SQL vs. Promptware

Why is the SQL injection analogy failing? Because the blast radius is vastly different.

SQL injection is deterministic. If you inject code, the database executes it. The outcome is predictable, and the damage is usually confined to the database layer.

Promptware is non-deterministic. It relies on the LLM’s “reasoning” to execute. More importantly, modern LLM applications are no longer just chatbots—they are agents with access to your emails, files, terminal, and even smart home devices.

Comparison of Attack Vectors:

Dimension SQL Injection Script Injection Promptware
Language SQL Python/JS/etc. Natural Language, Images, Audio
Determinism Deterministic Deterministic Non-deterministic
Target Database Interpreter LLM Application
Compromised Space Database Application Application & OS
Blast Radius DB-scoped App-scoped System/OS-wide
Outcomes Data Exfil/Corruption Infostealers/RCE Spyware, RCE, Crypto-theft, Worms

The Promptware Kill Chain

The paper introduces a seven-stage kill chain. This moves us away from thinking about “injection” as a single event and toward understanding it as a lifecycle.

Here is the anatomy of a Promptware attack:

+----------------+    +----------------+    +----------------+
| 1. INITIAL     | -> | 2. PRIVILEGE   | -> | 3. RECONNAISS- |
|    ACCESS      |    |    ESCALATION  |    |    ANCE        |
| (Prompt Inj.)  |    | (Jailbreaking) |    | (Context Probe)|
+-------+--------+    +-------+--------+    +-------+--------+
        |                     |                     |
        v                     v                     v
+-------+--------+    +-------+--------+    +-------+--------+
| 7. ACTIONS ON  | <- | 6. LATERAL     | <- | 4. PERSISTENCE |
|    OBJECTIVE   |    |    MOVEMENT    |    | (Memory Poison)|
| (Data/RCE)     |    | (Propagation)  |    |                |
+----------------+    +----------------+    +-------+--------+
                                    ^
                                    |
                            +-------+--------+
                            | 5. COMMAND &   |
                            |    CONTROL     |
                            | (Remote Ctrl)  |
                            +----------------+

1. Initial Access (Prompt Injection)

This is the entry point. The attacker injects malicious instructions into the context window.

2. Privilege Escalation (Jailbreaking)

The model is in, but it’s likely aligned to refuse harmful requests.

3. Reconnaissance

Unlike traditional malware, promptware doesn’t need to know the system architecture beforehand. It asks the host LLM.

4. Persistence

This is where promptware differs from simple “injections.” It wants to stay.

5. Command & Control (C2)

The “ZombAI” stage.

6. Lateral Movement

Promptware can self-replicate (Worms).

7. Actions on Objective

The final blow.


The Evolution of Attacks (2023–2026)

The authors analyzed 36 real-world incidents to map the evolution of these threats.

2023: The Early Days

2024: The Expansion

2025–2026: The Maturation

Kill Chain Complexity Over Time:

Average Stages Involved in Attacks
^
|                                       [ 5 Stages ]
|                                    [ 4 Stages ]
|                          [ 3 Stages ]
|                 [ 2 Stages ]
|    [ 1 Stage ]
|
+----------------------------------------------------> Year
      2022/2023          2024          2025/2026
      (Isolated)         (Worms)       (C2 & RCE)

Why This Matters: The Defense Shift

If prompt injection was just SQL injection, a good input filter would solve it. But since promptware is a kill chain, we need Defense-in-Depth.

We cannot rely on just fixing the input. We must secure the runtime.

  1. Initial Access: Input sanitizers are not enough. We need visual/auditory sanitization for multimodal inputs.
  2. Privilege Escalation: Robust alignment is required, but we must assume it can be bypassed.
  3. Persistence: Monitor the LLM’s long-term memory for anomalies.
  4. Action: Enforce strict permission boundaries on what the LLM agent is allowed to do (e.g., “Read-only” access to files, “No external execution”).

Key Takeaway

The era of treating LLM attacks as simple “bugs” is over. Promptware is malware. It worms, it persists, and it can turn our AI assistants against us. Security teams must shift from “preventing bad prompts” to “limiting agent capabilities” and “monitoring kill chain progression.”


References

  1. Primary Source: Brodt, O., Feldman, E., Schneier, B., & Nassi, B. (2026). The Promptware Kill Chain: How Prompt Injections Gradually Evolved Into a Multistep Malware Delivery Mechanism. arXiv:2601.09625.
  2. Morris II Worm: Moor, M. et al. (2024). An LLM-assisted worm….
  3. ChatGPT ZombAI: Brodt et al. (2024).
  4. Freysa AI Heist: Demonstrating financial theft via social engineering.
  5. Bing Chat Exfil: Greshake, K. et al. (2023). Not what you signed up for.