Building and Deploying Secure AI: Practical Strategies for Developers

This two-day, developer-focused program explores how to harness AI tools for secure software development while also building, deploying, and maintaining AI systems that are resilient to evolving threats. Attendees will learn practical threat modeling approaches tailored to AI/ML pipelines, proven strategies for adversarial and supply-chain defense, and best practices for continuous security monitoring.

  • Nov 11
    Rebel Oslo
    2 days
    08:00 - 16:00 UTC
    Jim Manico
    12 990 NOK

The course blends actionable guidance (e.g., AI-assisted coding security) with hands-on insights into securing AI itself (e.g., protecting model endpoints, managing drift, and red teaming AI pipelines). After completing this workshop, participants will be equipped with both the technical know-how and the processes needed to embed security into every stage of their AI-driven products.

Day 1: Using AI to Build Secure Software

1. Introduction to AI Security

Topics

  • Key AI security concepts and common threats
  • Overview of AI-related vulnerabilities in software development
  • Quick review of how attackers exploit AI-driven software

Learning Objectives

  • Understand the foundational principles of AI security
  • Recognize the broad threat landscape surrounding AI

2. AI for Code Creation

Topics

  • Using AI-powered tools (e.g., ChatGPT, Copilot) for code generation
  • Common pitfalls and vulnerabilities (injected flaws, licensing, dependency issues)
  • Secure usage patterns and safe prompts

Learning Objectives

  • Assess the security implications of AI-driven code generation
  • Adopt best practices to minimize vulnerabilities introduced by auto-generated code

3. Secure AI Development Lifecycle

Topics

  • Integrating security throughout AI/ML system development (planning, data ingestion, modeling, deployment)
  • Secure DevOps and MLOps practices (CI/CD with security gates)
  • Handling sensitive data used to train AI models

Learning Objectives

  • Incorporate security gates into each phase of AI software development
  • Properly manage and protect training data and models

4. Threat Modeling for AI Systems

Topics

  • Unique threat modeling considerations for AI/ML pipelines (data poisoning, model extraction)
  • Differences between typical web-app threat modeling vs. AI-centric threats

Learning Objectives

  • Identify critical assets and attack vectors in AI workflows
  • Apply threat modeling to preempt attacks unique to AI pipelines

5. OWASP Top 10 for Large Language Model (LLM) Applications

Topics

  • Top vulnerabilities specific to LLM-based applications
  • Prompt injection, data exfiltration, malicious chat manipulation
  • Recommendations, mitigations, and real-world case studies

Learning Objectives

  • Recognize and remediate LLM-specific security flaws
  • Secure LLM-based features in a production environment

Day 2: Building Secure AI Systems (75%)

1. Supply Chain Security in AI

Topics

  • Third-party libraries, datasets, and pre-trained models
  • Checking trustworthiness of external dependencies (data lineage, library patch levels)
  • Dependency scanning tools and strategies

Learning Objectives

  • Understand how AI supply chain vulnerabilities can undermine entire systems
  • Implement best practices to vet and protect external model/data sources

2. Adversarial Machine Learning

Topics

  • Types of adversarial attacks (evasion, poisoning, model inversion)
  • Techniques to detect and mitigate adversarial inputs
  • Gradient masking pitfalls and robust model training

Learning Objectives

  • Recognize adversarial attacks and how they manifest
  • Integrate detection and defense strategies into AI models

3. Red Teaming AI Systems

Topics

  • How to structure AI-specific penetration tests
  • Tools and frameworks for adversarial testing
  • Red vs. Blue Team exercises for AI pipelines

Learning Objectives

  • Conduct targeted security assessments on AI systems
  • Operationalize red teaming to continuously improve defenses

4. AI Model Updates and Patching

Topics

  • Lifecycle management for AI models in production
  • Security-driven versioning and rollback strategies
  • Responding quickly to emerging threats or discovered vulnerabilities

Learning Objectives

  • Implement robust model update/patching workflows
  • Reduce downtime and exposure during patch cycles

5. AI Model Interpretability and Security

Topics

  • Why interpretability is critical for detecting malicious behavior or biases
  • How more interpretable models can reveal vulnerabilities (model extraction, inversion attacks)

Learning Objectives

  • Balance interpretability with security concerns
  • Use interpretability tools to spot anomalies and potential threats

6. Access Control Design for AI

Topics

  • Securing AI workloads and vector databases with strict role-based and zero-trust principles
  • Credential management for AI model endpoints (API tokens, ephemeral credentials)

Learning Objectives

  • Architect AI systems with principle-of-least-privilege at every layer
  • Apply zero-trust approaches to protect critical AI resources

7. AI Model Drift and Security Monitoring

Topics

  • How models degrade over time (concept drift, data distribution shift)
  • Continuous monitoring and anomaly detection in production
  • Mitigating risks introduced by drift (potential new vulnerabilities)

Learning Objectives

  • Set up effective monitoring to detect model performance and security anomalies
  • Know when/how to trigger model retraining or security re-evaluations

8. Course Conclusion & Next Steps

Topics

  • Recap of key takeaways
  • Linking Day 1 (secure use of AI) and Day 2 (secure AI systems)
  • Resources for continued learning, tool recommendations, community best practices

Learning Objectives

  • Solidify an actionable checklist for secure AI development and usage
  • Encourage ongoing improvement with guidelines and resources


This two-day curriculum ensures attendees leave with practical AI-assisted secure software development strategies and a robust understanding of how to build and maintain secure AI systems. Feel free to adjust session lengths and scheduling to match your conference format.


Jim Manico
CEO, Manicode Security

Jim Manico is the founder of Manicode Security where he trains software developers on secure coding and security engineering. He is also an investor/advisor for KSOC, Nucleus Security, Signal Sciences, and BitDiscovery. Jim is a frequent speaker on secure software practices, is a Java Champion, and is the author of 'Iron-Clad Java - Building Secure Web Applications' from Oracle Press. Jim also volunteers for OWASP as the project co-lead for the OWASP ASVS and the OWASP Proactive Controls.

    NDC Conferences uses cookies to see how you use our website. We also have embeds from YouTube and Vimeo. How do you feel about that?