Understanding the OWASP Top 10 for LLMs: Risks and Controls 1. Prompt Injection Prompt injection occurs when malicious inputs manipulate a Large Language Model (LLM) into executing unintended actions or revealing sensitive data. Attackers craft inputs that override the model’s instructions, potentially leading to data leaks or unauthorized actions.