
ISBN: B0FMX5W1JP

ISBN13: 9798298643146

LLM Agents Security: Threat Models, Prompt Injections, and Memory Hardening

What happens when your large language model (LLM) evolves into an autonomous agent capable of reasoning, recalling, and interacting with the world in real time?

As LLMs transition into powerful agents, they redefine the landscape of cybersecurity. Traditional security measures falter when agents process open-ended inputs, leverage external tools, maintain persistent memory, and execute complex workflows. This unprecedented capability introduces significant risks: agents can be manipulated through adversarial prompts, poisoned memory, or exploited integrations, exposing organizations to data breaches, unauthorized actions, and compliance violations.

LLM Agents Security is your authoritative guide to securing autonomous LLM agents. Whether you're developing conversational agents, integrating with APIs, or deploying systems that adapt dynamically, this book provides a comprehensive framework to fortify your agents against modern threats. From prompt injections and memory tampering to supply-chain attacks and ethical lapses, you'll master the techniques to identify and mitigate vulnerabilities unique to agentic systems.

Inside, you'll learn how to:

- Develop agent-specific threat models using frameworks like STRIDE tailored for LLM architectures
- Design secure prompts with strict parsing, input validation, and semantic guards to block injection attacks
- Implement memory hardening with encryption, access controls, and integrity checks to prevent poisoning
- Secure tool integrations with least privilege, API token scoping, and runtime isolation
- Establish continuous monitoring, anomaly detection, and red-teaming to proactively identify weaknesses
- Ensure compliance with GDPR, HIPAA, and emerging AI regulations like the EU AI Act for auditable deployments
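To give a flavor of the second item above, here is a minimal sketch of an input guard that rejects over-long inputs and known injection phrasings before they reach an agent. Everything in it is illustrative: the pattern list, the length limit, and the function name `validate_user_input` are assumptions for this example, not the book's implementation, and a production guard would pair such lexical checks with semantic classifiers and allow-lists.

```python
import re

# Illustrative pattern list for a minimal prompt-injection guard.
# A real deployment would maintain a far richer, regularly updated set.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"disregard .* system prompt", re.IGNORECASE),
    re.compile(r"you are now", re.IGNORECASE),
]

def validate_user_input(text: str, max_len: int = 4000) -> tuple[bool, str]:
    """Return (ok, reason): reject over-long inputs and suspicious phrasings."""
    if len(text) > max_len:
        return False, "input exceeds maximum length"
    for pat in SUSPICIOUS_PATTERNS:
        if pat.search(text):
            return False, f"matched suspicious pattern: {pat.pattern}"
    return True, "ok"

print(validate_user_input("Summarize this article for me."))  # (True, 'ok')
print(validate_user_input("Ignore previous instructions.")[0])  # False
```

Lexical filters like this are easy to evade on their own, which is why the book's framing combines them with strict parsing and semantic guards rather than relying on any single layer.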

Tailored for AI engineers, security professionals, DevSecOps teams, and ethical AI practitioners, this book combines strategic insights with practical techniques to build agents that are robust, secure, and trustworthy. Drawing on Ethan Vale's decade of experience in AI engineering, it equips you with the tools to navigate the complexities of agentic security in high-stakes environments.

The future of AI lies in agents that act with precision and safety. Start securing them today with LLM Agents Security: Threat Models, Prompt Injections, and Memory Hardening.


Format: Paperback

Condition: New

$17.00
Ships within 2-3 days

Customer Reviews

0 ratings