Day 5 of 5 — Final Lesson
⏱ ~80 minutes
Claude Code Mastery — Day 5

Production Workflows and Team Integration

Day 5 is about making Claude Code part of your infrastructure — not just your personal workflow. CI/CD integration, custom commands your whole team can use, hooks that enforce standards automatically, and an honest take on how Claude Code compares to the other tools in your stack.

Claude Code in CI/CD

Claude Code can run in non-interactive mode inside CI/CD pipelines. Common use cases: automated code review on every PR, automated security scans, and documentation generation.

GitHub Actions — Automated PR Review

YAML — .github/workflows/claude-review.yml
name: Claude Code Review

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write

    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install Claude Code
        run: npm install -g @anthropic-ai/claude-code

      - name: Get diff
        id: diff
        run: |
          git diff origin/${{ github.base_ref }}...HEAD > diff.txt

      - name: Run Claude Code review
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
        run: |
          claude --print "Review this PR diff for:
          1. Security vulnerabilities
          2. Missing error handling
          3. Performance issues
          4. Code style inconsistencies with CLAUDE.md

          Diff:
          $(cat diff.txt)

          Return a structured review. If no issues found,
          say 'LGTM' with a brief explanation." > review.txt

      - name: Post review comment
        uses: actions/github-script@v7
        with:
          script: |
            const fs = require('fs')
            const review = fs.readFileSync('review.txt', 'utf8')
            await github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: `## Claude Code Review\n\n${review}`
            })
💡
Keep CI reviews focused. A review that flags everything is noise. Configure the CI review to check only the things your team consistently misses: security on auth changes, performance on database queries, test coverage on new functions.
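One way to keep the review focused is to gate the prompt on which files actually changed. A minimal sketch, assuming hypothetical src/auth and src/db paths — adapt the patterns to your repo:

Bash — Scope the CI Review (sketch)

```shell
#!/bin/sh
# Pick a narrow review prompt based on the list of changed files.
# The src/auth and src/db path patterns are hypothetical examples.

pick_prompt() {
  # $1: newline-separated list of changed files
  if printf '%s\n' "$1" | grep -qE '^src/auth/'; then
    echo "Review this diff for security vulnerabilities only."
  elif printf '%s\n' "$1" | grep -qE '^src/db/'; then
    echo "Review this diff for query performance issues only."
  else
    echo "Review this diff for style consistency with CLAUDE.md only."
  fi
}

# In CI you would build the list from the PR, e.g.:
#   CHANGED=$(git diff --name-only "origin/$GITHUB_BASE_REF...HEAD")
CHANGED="src/auth/session.ts
src/utils/date.ts"

pick_prompt "$CHANGED"
```

Pass the chosen prompt, plus the diff, to claude --print exactly as in the workflow above.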

Custom Slash Commands

You can create custom slash commands that your team uses inside Claude Code interactive sessions. These are Markdown prompt files stored in .claude/commands/ at your project root; the filename (minus .md) becomes the command name.

Bash — Create Custom Commands
# Create the commands directory
mkdir -p .claude/commands

# Create a security-review command
cat > .claude/commands/security-review.md << 'EOF'
Review the current file or the files I specify for
security vulnerabilities. Focus on:
- SQL injection and other injection attacks
- Authentication and authorization bypasses
- Exposed sensitive data (tokens, passwords, PII)
- Unvalidated user input
- Missing rate limiting
- CSRF vulnerabilities

For each issue found:
1. Quote the vulnerable code
2. Explain why it's a vulnerability
3. Provide the fixed version
EOF

# Create a generate-tests command
cat > .claude/commands/gen-tests.md << 'EOF'
Generate comprehensive tests for the file or function
I specify. Include:
- Happy path cases
- Edge cases (empty input, null, boundary values)
- Error cases
- At least one integration test if appropriate

Match the testing framework in package.json.
Run the tests when done and fix any failures.
EOF

Once created, your team runs these in any Claude Code session:

Terminal — Using Custom Commands
claude

> /security-review src/api/payments.ts

> /gen-tests src/utils/validation.ts
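Command files can also take arguments: whatever you type after the command name is substituted for the $ARGUMENTS placeholder in the file. A sketch, assuming that placeholder syntax — check it against your Claude Code version's docs:

Bash — Command with Arguments (sketch)

```shell
# Create a command that uses the $ARGUMENTS placeholder, which Claude Code
# replaces with whatever follows the command name in the session.
mkdir -p .claude/commands

cat > .claude/commands/explain.md << 'EOF'
Explain what $ARGUMENTS does. Then list its callers in this
codebase and any edge cases the implementation misses.
EOF
```

You would then run it as, for example: /explain src/utils/retry.ts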

Hooks: Automatic Enforcement

Hooks run automatically before or after Claude Code makes changes. You define them in .claude/settings.json at your project root (or in ~/.claude/settings.json for user-level hooks). Use them to enforce standards without relying on developers to remember.

JSON — .claude/settings.json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "npm run lint:fix",
            "description": "Auto-fix lint errors after edits"
          }
        ]
      }
    ],
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "npm test -- --passWithNoTests",
            "description": "Run tests after every session"
          }
        ]
      }
    ]
  }
}
ℹ️
Hooks run in your terminal environment. They have access to your project's dev tools. The most useful hooks: auto-lint after edits, auto-run tests after a session completes, and auto-format code. These keep Claude Code output consistent with your codebase standards.
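A hook's command can also be a script of your own. The sketch below assumes the hook payload arrives as JSON on stdin with the edited file at .tool_input.file_path, and that jq is installed — verify both against your Claude Code version before relying on it:

Bash — Example Hook Script (sketch)

```shell
#!/bin/sh
# format-changed.sh: a sketch of a hook command for the Write|Edit matcher.
# Assumes the hook payload is JSON on stdin with the edited file at
# .tool_input.file_path (verify against your Claude Code version's docs)
# and that jq is on PATH.

file=$(jq -r '.tool_input.file_path // empty')

case "$file" in
  *.ts|*.tsx|*.js|*.json)
    # Format only the file that was just edited; never fail the session
    # over a formatting error.
    npx prettier --write "$file" || true
    ;;
esac
```

Wire it up by pointing the hook's "command" field at it, e.g. "sh .claude/hooks/format-changed.sh".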

Claude Code vs. Cursor vs. Copilot: The Honest Comparison

| Capability | Claude Code | Cursor | GitHub Copilot |
| --- | --- | --- | --- |
| Interface | Terminal/CLI | VSCode fork (IDE) | IDE extension |
| Inline autocomplete | No | Excellent | Excellent |
| Multi-file agentic edits | Best in class | Good (Composer) | Limited |
| Full codebase navigation | Excellent | Good | Limited |
| CLAUDE.md / project rules | Yes (native) | Yes (.cursorrules) | Limited |
| Test write + run loop | Yes, automatic | Requires setup | Write only |
| CI/CD integration | Yes (CLI-native) | No | Partial (Copilot for PRs) |
| Custom commands / hooks | Yes | Limited | No |
| Learning curve | Terminal comfort required | Familiar (IDE) | Minimal (extension) |
| Cost | API usage (~$10-30/mo typical) | $20/mo (Pro) | $10-19/mo |

The practical answer: These aren't mutually exclusive. Many developers run Copilot for inline autocomplete while they type, and Claude Code for anything that requires reading or refactoring across files. Cursor is the best single-tool option if you want everything in one IDE. Claude Code wins on automation, CI/CD, and complex multi-file tasks.

💻 Day 5 Exercise
Set Up Claude Code for a Real Project with CLAUDE.md and Hooks

This is the capstone exercise. Set up Claude Code as a permanent part of your development workflow — not just a one-time tool.

  1. If you haven't already, write a thorough CLAUDE.md for your main project. Include commands, architecture, conventions, and at least 5 "important context" notes.
  2. Create at least 2 custom slash commands in .claude/commands/ for tasks you do repeatedly.
  3. Configure at least 1 hook in .claude/settings.json — auto-lint after edits is a good start.
  4. Commit all of the above to your repo. Teammate test: if a developer cloned this repo fresh and ran claude, would they have what they need?
  5. Set up the GitHub Actions CI review workflow (even just on a test branch). Review one of your own PRs with it.
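The teammate test in step 4 can be sketched as a quick presence check. It verifies only that the files exist, not that their content is any good:

Bash — Teammate Test Script (sketch)

```shell
#!/bin/sh
# Would a fresh clone contain the setup from this lesson?
# Checks file presence only, not content quality.

check_setup() {
  root="$1"
  missing=0
  [ -f "$root/CLAUDE.md" ] || { echo "MISSING: CLAUDE.md"; missing=1; }
  [ -f "$root/.claude/settings.json" ] || { echo "MISSING: .claude/settings.json"; missing=1; }
  n=$(ls "$root"/.claude/commands/*.md 2>/dev/null | wc -l)
  [ "$n" -ge 2 ] || { echo "MISSING: expected >=2 commands in .claude/commands/, found $n"; missing=1; }
  return $missing
}

if check_setup .; then
  echo "Teammate test passed"
else
  echo "Teammate test failed"
fi
```

Run it from the repo root; any MISSING line is something a teammate would have to recreate by hand.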

Day 5 Summary

  • Claude Code runs in CI/CD pipelines for automated PR review, security scanning, and quality checks — via claude --print in non-interactive mode.
  • Custom slash commands in .claude/commands/ are shared across your team — codify your most common review and generation patterns.
  • Hooks enforce standards automatically: auto-lint after edits, auto-test after sessions. Set once, forget about it.
  • Use Claude Code alongside other tools, not instead of them. Copilot for autocomplete, Claude Code for complex agentic tasks.
Final Challenge

You now have everything you need to make Claude Code a genuine force multiplier in your team's workflow. The challenge: propose and implement one Claude Code integration at your company. One custom command your team actually uses. One CI check that catches real issues. One CLAUDE.md that every developer benefits from. Measure the impact after 2 weeks. That measurement — concrete time saved, bugs caught, standards maintained — is what makes the case for investing more.

Course Complete.

You've gone from installation to production-grade Claude Code workflows. The next step: building AI systems from scratch — agents, RAG pipelines, production APIs.

Join the 3-Day Bootcamp →