Coursera
AI Security: Security in the Age of Artificial Intelligence Specialization

Gain next-level skills with Coursera Plus for $199 (regularly $399). Save now.

Coursera

AI Security: Security in the Age of Artificial Intelligence Specialization

Build Secure AI Systems End-to-End. Learn to identify, prevent, and respond to AI-specific threats across the entire ML lifecycle.

Reza Moradinezhad
Starweaver
Ritesh Vajariya

Instructors: Reza Moradinezhad

Included with Coursera Plus

Get in-depth knowledge of a subject
Intermediate level

Recommended experience

4 weeks to complete
at 10 hours a week
Flexible schedule
Learn at your own pace
Get in-depth knowledge of a subject
Intermediate level

Recommended experience

4 weeks to complete
at 10 hours a week
Flexible schedule
Learn at your own pace

What you'll learn

  • Secure AI systems using static analysis, threat modeling, and vulnerability assessment techniques

  • Implement production security controls including monitoring, incident response, and patch management

  • Conduct red-teaming exercises and build resilient defenses against AI-specific attack vectors

Details to know

Shareable certificate

Add to your LinkedIn profile

Taught in English
Recently updated!

December 2025

See how employees at top companies are mastering in-demand skills

 logos of Petrobras, TATA, Danone, Capgemini, P&G and L'Oreal

Advance your subject-matter expertise

  • Learn in-demand skills from university and industry experts
  • Master a subject or tool with hands-on projects
  • Develop a deep understanding of key concepts
  • Earn a career certificate from Coursera

Specialization - 13 course series

What you'll learn

  • Configure Bandit, Semgrep, PyLint to detect AI vulnerabilities: insecure model deserialization, hardcoded secrets, unsafe system calls in ML code.

  • Apply static analysis to fix AI vulnerabilities (pickle exploits, input validation, dependencies); create custom rules for AI security patterns.

  • Implement pip-audit, Safety, Snyk for dependency scanning; assess AI libraries for vulnerabilities, license compliance, and supply chain security.

Skills you'll gain

Category: Vulnerability Scanning
Category: Dependency Analysis
Category: Analysis
Category: DevSecOps
Category: Continuous Integration
Category: Threat Modeling
Category: Vulnerability Assessments
Category: Secure Coding
Category: MLOps (Machine Learning Operations)
Category: AI Security
Category: Open Source Technology
Category: Supply Chain
Category: PyTorch (Machine Learning Library)
Category: AI Personalization
Category: Application Security
Category: Program Implementation

What you'll learn

  • Analyze and evaluate AI inference threat models, identifying attack vectors and vulnerabilities in machine learning systems.

  • Design and implement comprehensive security test cases for AI systems including unit tests, integration tests, and adversarial robustness testing.

  • Integrate AI security testing into CI/CD pipelines for continuous security validation and monitoring of production deployments.

Skills you'll gain

Category: Security Testing
Category: Threat Modeling
Category: DevSecOps
Category: DevOps
Category: System Monitoring
Category: MITRE ATT&CK Framework
Category: CI/CD
Category: Test Case
Category: Secure Coding
Category: Integration Testing
Category: Prompt Engineering
Category: MLOps (Machine Learning Operations)
Category: Application Security
Category: AI Security
Category: Continuous Integration
Category: Threat Detection
Category: Unit Testing
Category: Scripting
Category: Continuous Monitoring

What you'll learn

  • Analyze inference bottlenecks to identify optimization opportunities in production ML systems.

  • Implement model pruning techniques to reduce computational complexity while maintaining acceptable accuracy.

  • Apply quantization methods and benchmark trade-offs for secure and efficient model deployment.

Skills you'll gain

Category: Convolutional Neural Networks
Category: Process Optimization
Category: Model Deployment
Category: Network Performance Management
Category: Project Performance
Category: Model Evaluation
Category: Benchmarking
Category: Keras (Neural Network Library)
Category: Cloud Deployment
Category: Network Model

What you'll learn

  • Apply infrastructure hardening in ML environments using secure setup, IAM controls, patching, and container scans to protect data.

  • Secure ML CI/CD workflows through automated dependency scanning, build validation, and code signing to prevent supply chain risks.

  • Design resilient ML pipelines by integrating rollback, drift monitoring, and adaptive recovery to maintain reliability and system trust.

Skills you'll gain

Category: Identity and Access Management
Category: CI/CD
Category: Resilience
Category: AI Personalization
Category: Vulnerability Scanning
Category: DevSecOps
Category: Responsible AI
Category: Security Controls
Category: Compliance Management
Category: MLOps (Machine Learning Operations)
Category: Hardening
Category: Engineering
Category: Model Evaluation
Category: Vulnerability Assessments
Category: AI Security
Category: Continuous Monitoring
Category: Containerization
Category: Infrastructure Security
Category: Threat Modeling
Category: Model Deployment

What you'll learn

  • Execute secure deployment strategies (blue/green, canary, shadow) with traffic controls, health gates, and rollback plans.

  • Implement model registry governance (versioning, lineage, stage transitions, approvals) to enforce provenance and promote-to-prod workflows.

  • Design monitoring triggering runbooks; secure updates via signing + CI/CD policy for auditable releases and controlled rollback.

Skills you'll gain

Category: CI/CD
Category: Artificial Intelligence and Machine Learning (AI/ML)
Category: AI Security
Category: Model Deployment
Category: Cloud Deployment
Category: Data-Driven Decision-Making
Category: MLOps (Machine Learning Operations)
Category: DevOps

What you'll learn

  • Analyze and identify a range of security vulnerabilities in complex AI models, including evasion, data poisoning, and model extraction attacks.

  • Apply defense mechanisms like adversarial training and differential privacy to protect AI systems from known threats.

  • Evaluate the effectiveness of security measures by designing and executing simulated adversarial attacks to test the resilience of defended AI model.

Skills you'll gain

Category: Threat Modeling
Category: Analysis
Category: Security Engineering
Category: Vulnerability Assessments
Category: Information Privacy
Category: Data Integrity
Category: Generative Adversarial Networks (GANs)
Category: AI Security
Category: Responsible AI
Category: Data Validation
Category: Security Strategy
Category: Security Testing
Category: Model Evaluation
Category: Cyber Threat Hunting
Category: Design

What you'll learn

  • Analyze real-world AI security, privacy, and access control risks to understand how these manifest in their own organizations.

  • Design technical controls and governance frameworks to secure AI systems, guided by free tools and industry guidelines.

  • Assess privacy laws' impact on AI, draft compliant policies, and tackle compliance challenges.

Skills you'll gain

Category: Risk Management Framework
Category: Security Controls
Category: Incident Response
Category: Identity and Access Management
Category: AI Security
Category: Data Security
Category: Governance
Category: Data Governance
Category: Security Awareness
Category: Threat Modeling
Category: Information Privacy
Category: Responsible AI
Category: Cyber Security Policies
Category: Personally Identifiable Information
Category: Generative AI
Category: Data Loss Prevention

What you'll learn

  • Design red-teaming scenarios to identify vulnerabilities and attack vectors in large language models using structured adversarial testing.

  • Implement content-safety filters to detect and mitigate harmful outputs while maintaining model performance and user experience.

  • Evaluate and enhance LLM resilience by analyzing adversarial inputs and developing defense strategies to strengthen overall AI system security.

Skills you'll gain

Category: Large Language Modeling
Category: Security Strategy
Category: AI Personalization
Category: Responsible AI
Category: Vulnerability Scanning
Category: System Implementation
Category: Scenario Testing
Category: Security Controls
Category: Vulnerability Assessments
Category: Continuous Monitoring
Category: AI Security
Category: LLM Application
Category: Prompt Engineering
Category: Cyber Security Assessment
Category: Security Testing
Category: Penetration Testing
Category: Threat Modeling

What you'll learn

  • Identify and classify various classes of attacks targeting AI systems.

  • Analyze the AI/ML development lifecycle to pinpoint stages vulnerable to attack.

  • Apply threat mitigation strategies and security controls to protect AI systems in development and production.

Skills you'll gain

Category: MLOps (Machine Learning Operations)
Category: Cybersecurity
Category: Application Lifecycle Management
Category: Threat Detection
Category: Vulnerability Assessments
Category: Threat Modeling
Category: Model Deployment
Category: Security Engineering
Category: Security Controls
Category: Data Security
Category: MITRE ATT&CK Framework
Category: AI Security
Category: Application Security
Category: Responsible AI
Category: Artificial Intelligence and Machine Learning (AI/ML)

What you'll learn

  • Apply machine learning techniques to detect anomalies in cybersecurity data such as logs, network traffic, and user behavior.

  • Automate incident response workflows by integrating AI-driven alerts with security orchestration tools.

  • Evaluate and fine-tune AI models to reduce false positives and improve real-time threat detection accuracy.

Skills you'll gain

Category: Anomaly Detection
Category: Application Performance Management
Category: Time Series Analysis and Forecasting
Category: Data Analysis
Category: Generative AI
Category: Query Languages
Category: Site Reliability Engineering
Category: Process Optimization
Category: Microsoft Azure
Category: Scalability
Category: Data Integration
Category: User Feedback

What you'll learn

  • Apply systematic patching strategies to AI models, ML frameworks, and dependencies while maintaining service availability and model performance.

  • Conduct blameless post-mortems for AI incidents using structured frameworks to identify root causes, document lessons learned, and prevent recurrence

  • Set up monitoring, alerts, and recovery to detect and resolve model drift, performance drops, and failures early.

Skills you'll gain

Category: MLOps (Machine Learning Operations)
Category: System Monitoring
Category: Incident Management
Category: Disaster Recovery
Category: Patch Management
Category: Problem Management
Category: Vulnerability Assessments
Category: Automation
Category: Model Deployment
Category: Dependency Analysis
Category: Site Reliability Engineering
Category: Continuous Monitoring
Category: Dashboard
Category: AI Security
Category: Sprint Retrospectives
Category: Artificial Intelligence
Category: DevOps

What you'll learn

  • Explain the fundamentals of deploying AI models on mobile applications, including their unique performance, privacy, and security considerations.

  • Analyze threats to mobile AI models like reverse engineering, adversarial attacks, and privacy leaks and their effect on reliability and trust.

  • Design a layered defense strategy for securing mobile AI applications by integrating encryption, obfuscation, and continuous telemetry monitoring.

Skills you'll gain

Category: Continuous Monitoring
Category: Encryption
Category: Application Security
Category: Mobile Security
Category: Program Implementation
Category: Apple iOS
Category: Threat Modeling
Category: Security Management
Category: Threat Management
Category: Model Deployment
Category: Information Privacy
Category: System Monitoring
Category: AI Security
Category: Security Requirements Analysis
Category: Mobile Development

What you'll learn

  • Analyze how AI features like sensors, models, and agents make phones attack surfaces and enable deepfake-based scams.

  • Evaluate technical attack paths—zero-permission inference and multi-layer agent attacks—using real research cases.

  • Design a mobile-focused detection and response plan with simple rules, containment steps, and key resilience controls.

Skills you'll gain

Category: Incident Response
Category: Mobile Security
Category: Security Controls
Category: AI Security
Category: Deep Learning
Category: Hardening
Category: Threat Detection
Category: Exploit development
Category: Threat Modeling
Category: Mobile Development Tools
Category: Endpoint Security
Category: Prompt Engineering
Category: Artificial Intelligence
Category: Information Privacy

Earn a career certificate

Add this credential to your LinkedIn profile, resume, or CV. Share it on social media and in your performance review.

Instructors

Reza Moradinezhad
Coursera
6 Courses4,006 learners
Starweaver
Coursera
511 Courses925,886 learners
Ritesh Vajariya
Coursera
23 Courses11,539 learners

Offered by

Coursera

Why people choose Coursera for their career

Felipe M.
Learner since 2018
"To be able to take courses at my own pace and rhythm has been an amazing experience. I can learn whenever it fits my schedule and mood."
Jennifer J.
Learner since 2020
"I directly applied the concepts and skills I learned from my courses to an exciting new project at work."
Larry W.
Learner since 2021
"When I need courses on topics that my university doesn't offer, Coursera is one of the best places to go."
Chaitanya A.
"Learning isn't just about being better at your job: it's so much more than that. Coursera allows me to learn without limits."

Frequently asked questions