Day 4 of 5 • 55 min • includes coding

AI-Powered Vulnerability Scanning

Build a code security reviewer that finds SQL injection, hardcoded credentials, command injection, and insecure deserialization — then explains how to fix each one.

What You're Building

AI Code Security Reviewer

A Python CLI tool that scans source files, sends code to Claude with a security-focused system prompt, and outputs a structured vulnerability report with line numbers, CVSS severity, and remediation code. Works as a pre-commit hook or CI/CD gate.

01
Target

Create a Vulnerable Python App to Scan

This intentionally vulnerable file contains six common vulnerability classes. Save it as vulnerable_app.py — the scanner will find all of them.

python vulnerable_app.py
# DO NOT DEPLOY — intentionally vulnerable for scanning demo
import sqlite3, subprocess, pickle, os

# VULN 1: Hardcoded credentials
DB_PASSWORD = "admin123"
SECRET_KEY = "hardcoded-jwt-secret-key"
AWS_SECRET = "AKIAIOSFODNN7EXAMPLE/wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"

def get_user(username):
    # VULN 2: SQL injection
    conn = sqlite3.connect("users.db")
    cursor = conn.cursor()
    query = f"SELECT * FROM users WHERE username = '{username}'"
    return cursor.execute(query).fetchone()

def run_report(filename):
    # VULN 3: Command injection
    result = subprocess.run(f"cat {filename} | wc -l", shell=True, capture_output=True)
    return result.stdout

def load_session(session_data):
    # VULN 4: Insecure deserialization
    return pickle.loads(session_data)

def read_file(path):
    # VULN 5: Path traversal
    with open("/app/data/" + path) as f:
        return f.read()

def check_admin(user_role):
    # VULN 6: Insecure comparison
    if user_role == "admin" or user_role == 1 or user_role:
        return True
    return False
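
For comparison, here's what fixed versions of VULN 2 and VULN 3 look like — a sketch of the remediations the scanner's system prompt steers toward (parameterized queries, argument lists instead of `shell=True`). The `_safe` function names are illustrative, not part of the demo app:

```python
import sqlite3
import subprocess

def get_user_safe(conn, username):
    # VULN 2 fix: parameterized query — the driver binds the value,
    # so "' OR '1'='1" is treated as a literal string, not SQL
    cursor = conn.cursor()
    cursor.execute("SELECT * FROM users WHERE username = ?", (username,))
    return cursor.fetchone()

def run_report_safe(filename):
    # VULN 3 fix: argument list with no shell — the filename can't
    # smuggle in extra commands like "; rm -rf /"
    result = subprocess.run(["wc", "-l", filename],
                            capture_output=True, text=True)
    return result.stdout
```

Passing the connection into `get_user_safe` (rather than opening it inside) also makes the function straightforward to test against an in-memory database.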
02
Scanner

Build the AI Security Scanner

python ai_security_scan.py
import anthropic
import json
import sys
import os
from pathlib import Path

client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

SECURITY_SYSTEM_PROMPT = """You are a senior application security engineer (AppSec).
Analyze the provided Python code and identify security vulnerabilities.

Focus on these vulnerability classes:
- SQL injection (use parameterized queries)
- Command injection (avoid shell=True, use shlex)
- Hardcoded secrets/credentials/API keys
- Insecure deserialization (pickle, yaml.load without Loader)
- Path traversal (unsanitized file paths)
- Insecure authentication/authorization logic
- Cryptographic weaknesses (MD5, SHA1 for passwords, hardcoded keys)
- SSRF (unvalidated URLs in requests)

Respond ONLY with a JSON object:
{
  "findings": [
    {
      "id": "VULN-001",
      "type": "SQL_INJECTION | CMD_INJECTION | HARDCODED_SECRET | INSECURE_DESERIALIZATION | PATH_TRAVERSAL | AUTH_BYPASS | CRYPTO_WEAK | SSRF | OTHER",
      "severity": "CRITICAL | HIGH | MEDIUM | LOW | INFO",
      "line_approx": 12,
      "description": "What the vulnerability is and why it's dangerous",
      "affected_code": "the vulnerable line(s)",
      "remediation": "How to fix it",
      "fixed_code": "The corrected code snippet"
    }
  ],
  "risk_summary": "2-3 sentence overall assessment",
  "deploy_safe": false
}"""

def scan_file(filepath: str) -> dict:
    code = Path(filepath).read_text()
    numbered = "\n".join(f"{i+1:4d} | {line}" for i, line in enumerate(code.splitlines()))

    response = client.messages.create(
        model="claude-opus-4-5-20251101",
        max_tokens=4096,
        system=SECURITY_SYSTEM_PROMPT,
        messages=[{"role": "user", "content": f"File: {filepath}\n\n```python\n{numbered}\n```"}]
    )

    return json.loads(response.content[0].text)

def print_report(filepath: str, result: dict):
    findings = result.get("findings", [])
    severity_order = {"CRITICAL": 0, "HIGH": 1, "MEDIUM": 2, "LOW": 3, "INFO": 4}
    findings.sort(key=lambda x: severity_order.get(x["severity"], 5))

    deploy_status = "❌ DO NOT DEPLOY" if not result.get("deploy_safe") else "✅ Deploy OK"
    print(f"\n{'='*65}")
    print(f"SECURITY SCAN: {filepath}")
    print(f"{deploy_status} | {len(findings)} findings")
    print(f"{result.get('risk_summary', '')}")
    print(f"{'='*65}\n")

    for f in findings:
        print(f"[{f['severity']}] {f['id']}: {f['type']}")
        print(f"  Line ~{f.get('line_approx','?')} | {f['description']}")
        print(f"  FIX: {f['remediation']}")
        print(f"  FIXED CODE: {f.get('fixed_code','N/A')[:100]}")
        print()

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "vulnerable_app.py"
    print(f"Scanning {target}...")
    result = scan_file(target)
    print_report(target, result)
    # Save full report
    with open("security_report.json", "w") as f:
        json.dump(result, f, indent=2)
    print("Full report saved to security_report.json")

Run it: python ai_security_scan.py vulnerable_app.py — Claude should find all 6 vulnerability classes in the test file, with severity, line numbers, and fixed code for each.
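
One robustness tweak worth considering: even with a "Respond ONLY with a JSON object" instruction, models occasionally wrap their output in a markdown fence, which makes a bare `json.loads` raise. A small extraction helper (a sketch, not part of the scanner above) handles both cases:

```python
import json
import re

def parse_json_response(text: str) -> dict:
    # If the model wrapped its answer in ```json ... ```, pull out
    # the fenced body; otherwise parse the text as-is
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL)
    if match:
        text = match.group(1)
    return json.loads(text)
```

Swapping this in for the `json.loads(response.content[0].text)` call in `scan_file` keeps the scanner from crashing on an otherwise-valid response.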

03
CI/CD Integration

Add as a Pre-Commit Hook

The real value is running this automatically on every commit — catching vulnerabilities before they hit your codebase rather than after. Here's a minimal pre-commit hook.

bash .git/hooks/pre-commit
#!/bin/bash
# AI Security Pre-Commit Hook
# Install: chmod +x .git/hooks/pre-commit

# Get staged Python files
STAGED_PY=$(git diff --cached --name-only --diff-filter=ACM | grep "\.py$")

if [ -z "$STAGED_PY" ]; then
    exit 0
fi

echo "Running AI security scan on staged files..."
BLOCKED=0

for file in $STAGED_PY; do
    # Run scan, check exit code
    python ai_security_scan.py "$file" --ci-mode
    if [ $? -ne 0 ]; then
        echo "BLOCKED: $file has CRITICAL/HIGH vulnerabilities"
        BLOCKED=1
    fi
done

if [ $BLOCKED -eq 1 ]; then
    echo ""
    echo "Commit blocked. Fix security issues before committing."
    exit 1
fi

Add --ci-mode to the scanner

python
# Add to ai_security_scan.py main block:
ci_mode = "--ci-mode" in sys.argv
result = scan_file(target)
print_report(target, result)

if ci_mode:
    # Exit 1 if any CRITICAL or HIGH findings exist
    blockers = [f for f in result.get("findings", [])
                if f["severity"] in ("CRITICAL", "HIGH")]
    sys.exit(1 if blockers else 0)
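
Since the hook now calls the API on every staged file for every commit, scans of unchanged files waste time and tokens. A content-hash cache is one way to skip them — this is a sketch, and the `.scan_cache.json` location is an assumption (add it to `.gitignore` if you use it):

```python
import hashlib
import json
from pathlib import Path

CACHE_FILE = Path(".scan_cache.json")  # assumed cache location

def file_digest(path: str) -> str:
    # Hash the file contents so edits invalidate the cached result
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def cached_scan(path: str, scan_fn) -> dict:
    # Reuse the previous scan result if the file hasn't changed
    cache = json.loads(CACHE_FILE.read_text()) if CACHE_FILE.exists() else {}
    digest = file_digest(path)
    entry = cache.get(path)
    if entry and entry["digest"] == digest:
        return entry["result"]
    result = scan_fn(path)
    cache[path] = {"digest": digest, "result": result}
    CACHE_FILE.write_text(json.dumps(cache, indent=2))
    return result
```

In the main block, `result = cached_scan(target, scan_file)` replaces the direct `scan_file(target)` call; a clean commit of unchanged files then costs zero API requests.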

Day 4 Complete

  • Built an AI-powered code security scanner covering 8 vulnerability classes
  • Structured output includes severity, line numbers, description, and fixed code
  • Tested against an intentionally vulnerable Python file — found all 6 injected vulns
  • Wired into git pre-commit hook to block commits with CRITICAL/HIGH findings
  • Tomorrow: AI security risks — what can go wrong when you deploy AI systems

Go deeper in 3 days.

The live bootcamp covers secure AI system design, red team exercises against AI apps, and AppSec program integration.

Reserve Your Seat →