AI Alert Triage Tool
A Python tool that reads the anomaly alerts from Day 2, sends each one to Claude with context about your environment, and generates a structured investigation brief — severity assessment, likely cause, recommended actions, and escalation decision. Because the output is structured JSON, the briefs can be fed into any ticketing system.
Install the Anthropic SDK
```shell
pip install anthropic

# Set your API key — never hardcode it
export ANTHROPIC_API_KEY=sk-ant-your-key-here
```

Get your API key at console.anthropic.com. The free tier is enough to complete this lesson.
Write the Security Analyst System Prompt
The system prompt is where you establish Claude's role and output format. Be specific about what you need — severity, cause, actions, escalation. Vague prompts produce vague output.
```python
import anthropic
import pandas as pd
import json
import os

client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

SYSTEM_PROMPT = """You are a senior security analyst at a mid-sized technology company.
You receive anomaly detection alerts and produce structured investigation briefs.

Your company context:
- 200 employees, primarily engineering and operations
- Users log in from US cities; international logins are rare and suspicious
- Business hours are 7am-8pm local time; after-hours logins need explanation
- Normal login volume per user: 3-10 per day

For each alert, respond ONLY with a JSON object in this exact format:
{
  "severity": "CRITICAL|HIGH|MEDIUM|LOW",
  "likely_cause": "1-2 sentence assessment of what probably happened",
  "confidence": "HIGH|MEDIUM|LOW",
  "immediate_actions": ["action 1", "action 2", "action 3"],
  "escalate_to_manager": true|false,
  "escalate_reason": "why or why not",
  "investigation_steps": ["step 1", "step 2", "step 3"]
}

Do not include any text outside the JSON object."""


def triage_alert(alert: dict) -> dict:
    user_message = f"""Alert Details:
Type: {alert['type']}
User: {alert['user']}
Date: {alert['date']}
Detail: {alert['detail']}
Severity (detection system): {alert['severity']}

Analyze this alert and produce an investigation brief."""

    response = client.messages.create(
        model="claude-opus-4-5-20251101",
        max_tokens=1024,
        system=SYSTEM_PROMPT,
        messages=[{"role": "user", "content": user_message}],
    )
    try:
        brief = json.loads(response.content[0].text)
        brief["alert"] = alert  # Attach original for reference
        return brief
    except json.JSONDecodeError:
        return {"error": "Parse failed", "raw": response.content[0].text, "alert": alert}
```
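Even with a "JSON only" instruction, models occasionally wrap their output in markdown fences, which makes a bare `json.loads` fail. A small defensive parser (a hypothetical helper, not part of the SDK) can strip fences before parsing:

```python
import json


def parse_brief(raw: str) -> dict:
    """Parse model output that should be JSON, tolerating markdown code fences.

    Hypothetical helper: removes an optional ```json ... ``` wrapper and
    surrounding whitespace before handing the text to json.loads.
    """
    text = raw.strip()
    if text.startswith("```"):
        # Drop the opening fence line (with its optional "json" tag)
        if "\n" in text:
            text = text.split("\n", 1)[1]
        # Drop the closing fence
        text = text.rsplit("```", 1)[0]
    return json.loads(text.strip())
```

You could call this inside `triage_alert` in place of the direct `json.loads`, keeping the same `JSONDecodeError` fallback for responses that still fail to parse.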
Process All Alerts and Generate Report
```python
def run_triage():
    # Load alerts from Day 2
    alerts_df = pd.read_csv("alerts.csv")
    alerts = alerts_df.to_dict("records")
    print(f"Processing {len(alerts)} alerts...\n")

    briefs = []
    for i, alert in enumerate(alerts, 1):
        print(f"[{i}/{len(alerts)}] Triaging {alert['type']} for {alert['user']}...")
        brief = triage_alert(alert)
        briefs.append(brief)

        # Print summary to console
        if "error" not in brief:
            escalate = "ESCALATE" if brief.get("escalate_to_manager") else "monitor"
            print(f"  → {brief['severity']} | {escalate} | {brief['likely_cause'][:80]}...")
        print()

    # Save full briefs to JSON
    with open("investigation_briefs.json", "w") as f:
        json.dump(briefs, f, indent=2, default=str)

    # Print critical/high escalations
    print("\n=== ESCALATE NOW ===")
    for brief in briefs:
        if brief.get("escalate_to_manager") and "error" not in brief:
            alert = brief["alert"]
            print(f"\nUSER: {alert['user']} | {alert['type']}")
            print(f"CAUSE: {brief['likely_cause']}")
            print("IMMEDIATE ACTIONS:")
            for action in brief.get("immediate_actions", []):
                print(f"  • {action}")

    print("\nFull briefs saved to investigation_briefs.json")


if __name__ == "__main__":
    run_triage()
```
Run it: `python secops_triage.py` — Claude will analyze each alert and output severity, likely cause, and recommended actions. Bob's brute force and Carol's new-location login should both trigger escalation.
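Looping one API call per alert can hit rate limits on larger alert batches. A minimal retry wrapper with exponential backoff is one way to handle that — this is a sketch with a deliberately broad `retry_on` default; in practice you would narrow it to the transient exceptions your SDK version raises (e.g. `anthropic.RateLimitError`):

```python
import time


def with_retries(fn, attempts=3, base_delay=2.0, retry_on=(Exception,)):
    """Call fn(), retrying with exponential backoff on the given exceptions.

    Sketch only: narrow retry_on to transient errors (rate limits,
    timeouts) so genuine bugs still fail fast.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except retry_on:
            if attempt == attempts - 1:
                raise  # Out of attempts — surface the error
            time.sleep(base_delay * (2 ** attempt))  # 2s, 4s, 8s, ...
```

Inside `run_triage`, the call would become `brief = with_retries(lambda: triage_alert(alert))`.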
Batch Analysis and Context Enrichment
When you have multiple alerts for the same user in a short window, analyzing them together gives Claude more context and better output. This is the difference between "Bob had 45 failed logins" and "Bob had 45 failed logins at 2am from an IP in the Tor exit node list, followed by a successful login 10 minutes later."
```python
def analyze_user_timeline(user: str, alerts: list, log_df: pd.DataFrame) -> str:
    """Analyze all alerts for one user with full context."""
    user_alerts = [a for a in alerts if a["user"] == user]
    user_logs = log_df[log_df["user"] == user].tail(50)

    # Summarize recent log activity
    log_summary = user_logs[["timestamp", "result", "location", "ip"]].to_string(index=False)

    prompt = f"""Analyze this user's complete activity timeline and provide a threat assessment.

User: {user}

Active Alerts:
{json.dumps(user_alerts, default=str)}

Recent Activity Log (last 50 events):
{log_summary}

Provide:
1. Overall threat level (CRITICAL/HIGH/MEDIUM/LOW/BENIGN)
2. Most likely scenario (3-4 sentences)
3. Top 3 immediate actions
4. Whether to disable account now"""

    response = client.messages.create(
        model="claude-opus-4-5-20251101",
        max_tokens=1200,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text
```
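To decide which users merit a full timeline analysis, you can group the Day 2 alerts by user and only run the deeper analysis for users with more than one active alert. A sketch of that selection step (`users_to_review` is a hypothetical helper; it assumes each alert dict has the `user` field used above):

```python
from collections import Counter


def users_to_review(alerts: list, min_alerts: int = 2) -> list:
    """Return users with at least min_alerts active alerts, most-alerted first."""
    counts = Counter(a["user"] for a in alerts)
    return [user for user, n in counts.most_common() if n >= min_alerts]
```

A driver loop would then call `analyze_user_timeline(user, alerts, log_df)` for each returned user, so the single-alert users stay in the cheaper per-alert triage path.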
Data privacy considerations
You're sending employee log data to an external API. Before doing this in production, check your organization's data handling policies. Options: (1) anonymize usernames before sending, (2) use AWS Bedrock to keep data within your cloud account (you built this in the AWS course), (3) run a self-hosted model with Ollama for fully on-premise analysis.
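Option (1) can be as simple as replacing usernames with stable pseudonyms before building the prompt, then mapping them back locally when you read the brief. A minimal sketch using keyed hashing — the secret key here is a placeholder you would load from a secrets manager, not hardcode:

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-me"  # Placeholder — load from a secrets manager in practice


def pseudonymize(username: str) -> str:
    """Return a stable, non-reversible pseudonym for a username.

    HMAC keyed hashing means the same user always maps to the same
    pseudonym (so per-user patterns survive), but the mapping cannot
    be reversed without the key.
    """
    digest = hmac.new(SECRET_KEY, username.encode(), hashlib.sha256).hexdigest()
    return f"user-{digest[:8]}"
```

Keep a local `{pseudonym: username}` dict as you pseudonymize, so escalation output can be translated back to real identities on your side of the API boundary.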
Day 3 Complete
- Built a prompt-based alert triage tool using the Anthropic SDK
- System prompt structure: role, company context, output format (JSON schema)
- Processed anomaly alerts from Day 2 into structured investigation briefs
- Extended to timeline-based user analysis for multi-alert correlation
- Tomorrow: AI-powered vulnerability scanning and code review