Building and Deploying Secure AI: Practical Strategies for Security Developers

This two-day advanced workshop is designed for security-focused developers and engineers who are building AI-powered systems or using AI tools in secure software development. Attendees will dive deep into applied techniques for both using AI securely and securing AI itself, with a sharp focus on real-world threats and mitigations.

  • Dec 1
    Manchester
    2 days
    08:00 - 16:00 UTC
    Jim Manico

Topics include AI threat modeling, adversarial defense, supply chain integrity, red teaming, and continuous monitoring. Participants will leave equipped with actionable practices to embed security across the AI development lifecycle—whether leveraging LLMs for coding or deploying models into production.

Day 1: Using AI Securely in Software Development

1. Foundations of AI Security

AI-specific vulnerabilities and threat surfaces
How attackers abuse AI in software systems
Objective: Build a mental model for AI-related risks in secure dev work.

2. Secure AI-Assisted Code Generation

Risks in AI-generated code (vulnerabilities, licensing, dependencies)
Secure prompt engineering and safety guardrails
Objective: Use tools like Copilot/ChatGPT securely and responsibly in pipelines.
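As a taste of the kind of pipeline gate this module covers, the sketch below screens AI-generated snippets for a few obviously risky patterns before they enter a codebase. The pattern list and function name are illustrative assumptions; a real gate would run a full SAST tool rather than regexes.

```python
import re

# Illustrative patterns a pipeline gate might flag in AI-generated code;
# a production gate would use a proper SAST scanner, not regexes.
RISKY_PATTERNS = {
    "dynamic code execution": re.compile(r"\beval\s*\(|\bexec\s*\("),
    "hardcoded secret": re.compile(r"(?i)(api[_-]?key|password)\s*=\s*['\"][^'\"]+['\"]"),
    "shell injection risk": re.compile(r"subprocess\.\w+\([^)]*shell\s*=\s*True"),
}

def review_generated_code(snippet: str) -> list[str]:
    """Return the names of risky patterns found in an AI-generated snippet."""
    return [name for name, pat in RISKY_PATTERNS.items() if pat.search(snippet)]

findings = review_generated_code('password = "hunter2"\neval(user_input)')
```

Wiring a check like this into CI means AI-suggested code gets the same scrutiny as human-written code before merge.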

3. Secure AI Development Lifecycle (SDLC + MLOps)

Security in data ingestion, modeling, deployment
DevSecOps/MLOps integration with security gates
Objective: Embed controls across AI system development workflows.

4. Threat Modeling for AI/ML Pipelines

Data poisoning, model theft, insecure inference
How AI threat modeling differs from traditional apps
Objective: Apply threat modeling to proactively identify AI-specific risks.

5. OWASP Top 10 for LLMs

Prompt injection, data leakage, model abuse
Case studies and tested mitigations
Objective: Secure LLM features and avoid common developer pitfalls.
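One common developer-side mitigation for prompt injection is to delimit untrusted input and reject obvious override attempts before the prompt reaches the model. The marker list below is a hypothetical illustration; real defenses layer input screening with model-side and output-side controls.

```python
# Minimal prompt-injection screen. The marker phrases are assumed examples;
# production systems combine screening with model- and output-side defenses.
INJECTION_MARKERS = (
    "ignore previous instructions",
    "disregard the system prompt",
    "you are now",
)

def build_prompt(system: str, user_input: str) -> str:
    """Wrap untrusted input in clear delimiters, rejecting obvious injections."""
    lowered = user_input.lower()
    if any(marker in lowered for marker in INJECTION_MARKERS):
        raise ValueError("possible prompt injection detected")
    return f"{system}\n\n--- untrusted user input below ---\n{user_input}"

prompt = build_prompt("You are a code reviewer.", "Please review this diff.")
```

The delimiter makes the trust boundary explicit in the prompt itself, which also helps during incident review.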

Day 2: Securing AI Systems in Production

1. AI Supply Chain Security

Vetting third-party models, datasets, and libraries
Trust and integrity in data lineage and model provenance
Objective: Lock down the AI software supply chain.
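A basic building block for supply chain integrity is pinning and verifying the digest of every third-party model or dataset artifact. The sketch below shows the idea with a SHA-256 check; the "lockfile" digest here is computed inline purely for illustration.

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Check a downloaded model/dataset blob against a pinned SHA-256 digest."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

# Illustrative pinned digest, as a lockfile or model card might record it.
blob = b"model-weights-v1"
pinned = hashlib.sha256(blob).hexdigest()

ok = verify_artifact(blob, pinned)
tampered = verify_artifact(b"model-weights-v1-evil", pinned)
```

In practice the pinned digest comes from a trusted source (signed release metadata, an internal registry), never from the same channel as the artifact itself.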

2. Adversarial Machine Learning

Evasion, poisoning, model inversion, gradient masking
Defensive training and detection techniques
Objective: Recognize and respond to adversarial threats effectively.

3. Red Teaming AI Systems

AI-specific pentesting strategies
Tools, frameworks, and Red vs. Blue exercises
Objective: Operationalize adversarial testing to improve resilience.

4. AI Model Updates & Patching

Secure versioning, rollback strategies
Emergency response to discovered model vulnerabilities
Objective: Create patchable, resilient AI deployment pipelines.
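The rollback strategies in this module can be pictured with a toy version registry that always keeps a known-good predecessor available. The class below is a deliberately minimal sketch; real pipelines would use a registry service with signed, immutable artifacts.

```python
# Toy model-version registry illustrating rollback; a real deployment
# would use a registry service with signed, immutable artifacts.
class ModelRegistry:
    def __init__(self) -> None:
        self._versions: list[str] = []

    def deploy(self, version: str) -> None:
        self._versions.append(version)

    def current(self) -> str:
        return self._versions[-1]

    def rollback(self) -> str:
        """Revert to the previous known-good version after an incident."""
        if len(self._versions) < 2:
            raise RuntimeError("no earlier version to roll back to")
        self._versions.pop()
        return self.current()

reg = ModelRegistry()
reg.deploy("v1")
reg.deploy("v2")
```

Keeping at least one prior version deployable is what turns "we found a model vulnerability" into a minutes-long rollback instead of an emergency retrain.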

5. Model Interpretability & Security

Using explainability tools to spot anomalies
Balancing transparency against exploitability risks
Objective: Detect bias, malicious behavior, and model theft vectors.

6. Access Control for AI Components

Role-based access to inference endpoints, model APIs
Credential hygiene and zero-trust enforcement
Objective: Harden model endpoints and AI-connected services.
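Role-based access to inference endpoints can be reduced to a simple permission check at the API boundary. The roles and actions below are assumed examples; a real deployment would back this with an identity provider and short-lived credentials.

```python
# Illustrative role-based gate for an inference endpoint; role and action
# names are assumptions, and real systems delegate to an identity provider.
ROLE_PERMISSIONS = {
    "ml-engineer": {"predict", "deploy", "rollback"},
    "analyst": {"predict"},
}

def authorize(role: str, action: str) -> bool:
    """Return True only if the role explicitly grants the requested action."""
    return action in ROLE_PERMISSIONS.get(role, set())

can_predict = authorize("analyst", "predict")
can_deploy = authorize("analyst", "deploy")
```

Defaulting to an empty permission set for unknown roles is the deny-by-default posture that zero-trust enforcement expects.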

7. Drift Detection & Monitoring

Monitoring for concept/data drift and security regression
Triggering retraining or re-validation workflows
Objective: Maintain model integrity and relevance over time.
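A minimal form of the drift monitoring covered here is comparing a live feature's distribution against its training baseline. The sketch below uses a standardized mean shift with an assumed threshold; real monitors use richer statistics (e.g. PSI or KS tests) per feature.

```python
from statistics import mean, pstdev

def drift_score(baseline: list[float], live: list[float]) -> float:
    """Standardized shift of the live mean relative to the training baseline."""
    spread = pstdev(baseline) or 1.0  # avoid division by zero on flat baselines
    return abs(mean(live) - mean(baseline)) / spread

def needs_revalidation(baseline: list[float], live: list[float],
                       threshold: float = 3.0) -> bool:
    # Threshold is an assumed tuning knob; crossing it would trigger
    # the retraining or re-validation workflow.
    return drift_score(baseline, live) > threshold

stable = needs_revalidation([1.0, 2.0, 3.0], [1.1, 2.1, 2.9])
drifted = needs_revalidation([1.0, 2.0, 3.0], [9.0, 10.0, 11.0])
```

The same trigger can cover security regression: a sudden distribution shift is often the first visible symptom of poisoned or manipulated inputs.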

8. Wrap-up & Action Plan

Recap of core practices and threats
Toolkits, references, and next-step strategies
Objective: Leave with a clear roadmap for securing AI in real-world environments.

Summary:
Attendees will gain both strategic insight and tactical techniques for using AI securely in software development, and for defending AI systems themselves against evolving threats. The workshop blends secure dev, MLOps, and adversarial resilience—equipping developers and security engineers with skills for modern AI-integrated applications.

Jim Manico
CEO, Manicode Security

Jim Manico is the founder of Manicode Security, where he trains software developers on secure coding and security engineering. He is also an investor/advisor for KSOC, Nucleus Security, Signal Sciences, and BitDiscovery. Jim is a frequent speaker on secure software practices, is a Java Champion, and is the author of 'Iron-Clad Java: Building Secure Web Applications' from Oracle Press. Jim also volunteers for OWASP as the project co-lead for the OWASP ASVS and the OWASP Proactive Controls.
