Fred Rohrer's Blog
  • About
  • Home
Subscribe
Tagged

AI Security

A collection of 3 posts

AI Security

The Basics of AI Agent Security

The Basics of AI Agent Security Prompt injection is a fundamental, unsolved weakness in all LLMs. With prompt injection, certain types of untrustworthy strings or pieces of data can cause unintended consequences when passed into an AI agent's context window, like ignoring instructions and safety guidelines or executing unauthorized tasks.

  • Fred Rohrer
Fred Rohrer Nov 13, 2025 • 6 min read
AI Security

MCP Security Vulnerabilities: A Quick Weekend List

The Model Context Protocol (MCP) is revolutionizing how AI agents interact with external tools, but this power comes with serious security implications that most organizations are overlooking. Here are 15 critical security issues with MCP - short and sweet so you can read it quickly.

  • Fred Rohrer
Fred Rohrer Jul 19, 2025 • 4 min read
OWASP

Understanding the OWASP Top 10 for LLMs: Risks and Controls

Understanding the OWASP Top 10 for LLMs: Risks and Controls 1. Prompt Injection Prompt injection occurs when malicious inputs manipulate a Large Language Model (LLM) into executing unintended actions or revealing sensitive data. Attackers craft inputs that override the model’s instructions, potentially leading to data leaks or unauthorized actions.

  • Fred Rohrer
Fred Rohrer Jun 3, 2025 • 4 min read
Fred Rohrer's Blog © 2025
  • Contact
Powered by Ghost