The Evolving AI Security Landscape: Why Secure by Design and Defense in Depth Are Essential

Explore how to secure AI systems against threats like prompt injection, data poisoning, and model attacks using Secure by Design
AI Security

The age of generative artificial intelligence (GenAI) is here and organizations are adopting it at lightning speed. But as this transformation unfolds, AI security has become a critical challenge. While businesses deploy GenAI tools to boost efficiency and innovation, attackers are increasingly exploiting vulnerabilities unique to AI systems.Industries like financial services, biotech, and telecom, long reliant on predictive models, now face an urgent need to rethink their security strategies as AI becomes a high-value target.

Understanding the Unique AI Attack Surface

Unlike traditional cybersecurity which focuses on securing networks and endpoints AI security must address how systems learn, adapt, and respond. Below are some of the most dangerous and growing threats:

1. Data Poisoning Attacks

Attackers inject malicious or misleading data into training sets, corrupting AI models before they’re deployed. For example, a content moderation tool trained on manipulated datasets could start allowing harmful content, undermining trust and safety.

Think of it as sabotaging the recipe ingredients before the meal is even cooked.

2. Prompt Injection Attacks

Well-crafted prompts can bypass safety constraints in GenAI systems. These prompts may come from websites, messages, or embedded content that hijacks AI behavior causing it to reveal sensitive data, place unauthorized orders, or output disallowed content.

3. Model Deserialization Attacks

When AI models are shared or stored, attackers can embed hidden code within them. Upon loading, the malicious code runs silently similar to opening a document that secretly installs malware.

4. Autonomous AI System RisksAs AI systems become agentic capable of making decisions and taking actions independently they introduce new classes of risk. A smart assistant interacting with compromised websites might execute malicious instructions without user consent or awareness.

Why Secure by Design Matters for AI

These evolving threats demand a Secure by Design approach to AI security. Instead of adding protections after deployment, security must be embedded at every stage from development to training and deployment.

This shift ensures that security measures are built into the system architecture, addressing AI-specific vulnerabilities from the ground up.

But Secure by Design alone isn’t enough.

The Role of Defense in Depth for AI Security

Defense in Depth (DiD) complements Secure by Design by layering protections. These overlapping controls ensure that even if one layer fails, others remain active. Key layers include:

  • Training data validation
  • Input/output monitoring
  • Runtime protections
  • Logging and audit trails
  • Automated incident response
  • By combining design-time and runtime safeguards, DiD makes AI systems more resilient to both known and unknown attack vectors.

Applying CISA’s Secure by Design Principles to AI

The Cybersecurity and Infrastructure Security Agency (CISA) offers three core Secure by Design principles originally created for software development that now apply directly to AI systems.

1. Take Ownership of Customer Security Outcomes

AI security is a shared responsibility that starts with leadership. Developers must bake in protections from the earliest design phases.

  • Adopt Machine Learning Security Operations (MLSecOps) to extend DevSecOps principles to AI workflows.
  • Define security requirements during model architecture design not after deployment.
  • Account for autonomous behavior and emerging threats throughout the lifecycle.

2. Embrace Radical Transparency and Accountability

AI security should be understandable and verifiable by stakeholders. This means:

  • Keeping detailed records of training data sources, model versions, and security controls
  • Maintaining AI-specific Bills of Materials (AI-BOMs) covering code, datasets, and third-party models
  • Running regular red team exercises, penetration tests, and compliance assessments

Transparency builds trust, and accountability ensures continuous improvement.

3. Lead from the Top

Security must be a business priority, not just a tech initiative. Boards and C-level executives should:

  • Receive regular briefings on AI-specific risks and protections
  • Allocate funding and resources to AI security initiatives
  • Empower AI security teams to influence product decisions and roadmaps

True leadership means prioritizing customer safety even when it requires upfront investment or slower rollouts.

The AI security landscape is fast-changing and high-stakes. From prompt injection and data poisoning to agentic AI risks, organizations must prepare for threats that go beyond traditional IT systems.

By adopting Secure by Design principles, implementing Defense in Depth, and applying CISA’s AI security guidance, businesses can build AI systems that are not only powerful but also secure and resilient.

In the next part of this series, we’ll explore frameworks and tools that support these principles such as MLSecOps pipelines, AI-BOM templates, and real-world examples of secure AI in action.

Leave a Comment

Your email address will not be published. Required fields are marked *

Scroll to Top