The Basics of AI Agent Security Prompt injection is a fundamental, unsolved weakness in all LLMs. With prompt injection, certain types of untrustworthy strings or pieces of data can cause unintended consequences when passed into an AI agent's context window, like ignoring instructions and safety guidelines or executing unauthorized tasks.