Fred Rohrer's Blog
  • About
  • Home
Subscribe
Tagged

AI Agents

A collection of 1 post

AI Security

The Basics of AI Agent Security

The Basics of AI Agent Security Prompt injection is a fundamental, unsolved weakness in all LLMs. With prompt injection, certain types of untrustworthy strings or pieces of data can cause unintended consequences when passed into an AI agent's context window, like ignoring instructions and safety guidelines or executing unauthorized tasks.

  • Fred Rohrer
Fred Rohrer Nov 13, 2025 • 6 min read
Fred Rohrer's Blog © 2026
  • Contact
Powered by Ghost