Purple Team: Attack and Defense United

Purple teaming brings offensive and defensive security together. Instead of red team vs. blue team in isolation, purple team exercises create a feedback loop where attacks are tested, detections are validated, and both sides learn from each other in real time.

The Purple Advantage

Traditional pentests end with a report that sits on a shelf. Purple team exercises result in immediate improvements: detection rules are tuned, response playbooks are validated, and both attackers and defenders understand each other's perspective.

The Purple Team Model

┌─────────────────────────────────────────────────────────────────────┐
│                        PURPLE TEAM WORKFLOW                         │
├─────────────────────────────────────────────────────────────────────┤
│                                                                     │
│  ┌─────────────┐         ┌─────────────┐         ┌─────────────┐    │
│  │  RED TEAM   │────────►│  PURPLE     │◄────────│  BLUE TEAM  │    │
│  │  (Attack)   │         │  FACILITATOR│         │  (Defend)   │    │
│  └──────┬──────┘         └──────┬──────┘         └──────┬──────┘    │
│         │                       │                       │           │
│         ▼                       ▼                       ▼           │
│  ┌─────────────┐         ┌─────────────┐         ┌─────────────┐    │
│  │  Execute    │         │  Document   │         │  Monitor    │    │
│  │  Technique  │─────────│  Findings   │─────────│  Detect     │    │
│  └──────┬──────┘         └──────┬──────┘         └──────┬──────┘    │
│         │                       │                       │           │
│         ▼                       ▼                       ▼           │
│  ┌─────────────┐         ┌─────────────┐         ┌─────────────┐    │
│  │  Validate   │         │  Gap        │         │  Tune       │    │
│  │  Bypass     │◄────────│  Analysis   │────────►│  Detection  │    │
│  └─────────────┘         └─────────────┘         └─────────────┘    │
│                                                                     │
│  CONTINUOUS CYCLE: Attack → Detect → Analyze → Improve → Repeat     │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘

Using MITRE ATT&CK Framework

Structure exercises around MITRE ATT&CK tactics and techniques for consistent, measurable results.

ATT&CK-Based Exercise Planning

EXERCISE STRUCTURE (Per Technique):

1. TECHNIQUE SELECTION
   └── Choose ATT&CK technique (e.g., T1059.001 - PowerShell)

2. RED TEAM PREPARATION
   ├── Research technique variations
   ├── Prepare multiple execution methods
   └── Document expected artifacts

3. BLUE TEAM PREPARATION
   ├── Review existing detections
   ├── Identify data sources needed
   └── Prepare monitoring dashboards

4. EXECUTION
   ├── Red team executes technique
   ├── Blue team monitors in real-time
   └── Document timestamps and details

5. ANALYSIS
   ├── What was detected?
   ├── What was missed?
   └── What artifacts were generated?

6. IMPROVEMENT
   ├── Create/tune detection rules
   ├── Update response playbooks
   └── Document lessons learned
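
Step 4's "document timestamps and details" is what makes step 5 possible: the blue team can only measure time to detect against a precise record of when each technique ran. A minimal sketch for capturing that metadata in PowerShell (the schema and log path are illustrative, not a standard):

# Record execution metadata for later gap analysis (hypothetical schema/path)
$entry = [pscustomobject]@{
    Technique  = 'T1059.001'
    TestName   = 'Encoded command execution'
    Operator   = $env:USERNAME
    Hostname   = $env:COMPUTERNAME
    StartedUtc = (Get-Date).ToUniversalTime().ToString('o')
}
$entry | Export-Csv -Path 'C:\PurpleTeam\execution-log.csv' -Append -NoTypeInformation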

Sample Exercise: Initial Access

ATT&CK ID   Technique                    Red Action                       Blue Validation
T1566.001   Phishing: Attachment         Send macro-enabled doc           Email gateway, endpoint alerts
T1566.002   Phishing: Link               Send credential-harvesting link  URL filtering, proxy logs
T1190       Exploit Public-Facing App    Exploit known CVE                WAF alerts, IDS signatures
T1133       External Remote Services     Use stolen VPN creds             VPN logs, impossible travel
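
For the T1566.001 row, delivery can be exercised end to end by sending a benign macro-enabled document through the normal mail path and observing whether the gateway blocks, quarantines, or delivers it. A sketch using PowerShell's built-in Send-MailMessage (relay host, addresses, and attachment path are placeholders for your environment):

# Send a harmless test attachment through the production mail path (T1566.001)
Send-MailMessage -SmtpServer 'smtp.internal.example' `
    -From 'purpleteam@example.com' -To 'testuser@example.com' `
    -Subject 'Purple team phishing exercise (T1566.001)' `
    -Body 'Benign attachment for email gateway validation.' `
    -Attachments 'C:\PurpleTeam\payloads\benign-macro.docm'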

Atomic Red Team

Atomic Red Team provides small, focused tests mapped to ATT&CK techniques, making it a natural fit for purple team exercises.

Running Atomic Tests

# Install Atomic Red Team
IEX (IWR 'https://raw.githubusercontent.com/redcanaryco/invoke-atomicredteam/master/install-atomicredteam.ps1' -UseBasicParsing)
Install-AtomicRedTeam -getAtomics

# List available tests for a technique
Invoke-AtomicTest T1059.001 -ShowDetailsBrief

# Execute a specific test
Invoke-AtomicTest T1059.001 -TestNumbers 1

# Execute with cleanup
Invoke-AtomicTest T1059.001 -TestNumbers 1 -Cleanup

# Get prerequisites
Invoke-AtomicTest T1059.001 -GetPrereqs
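
Two more switches worth knowing: -CheckPrereqs verifies prerequisites without executing anything, and -ExecutionLogPath records what ran and when for the blue team to correlate against alerts (switch names per the invoke-atomicredteam documentation; the log path is a placeholder):

# Check prerequisites without executing the test
Invoke-AtomicTest T1059.001 -TestNumbers 1 -CheckPrereqs

# Execute and write an execution log for blue team correlation
Invoke-AtomicTest T1059.001 -TestNumbers 1 -ExecutionLogPath 'C:\PurpleTeam\atomic-log.csv'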

Common Atomic Tests

# Execution - PowerShell (T1059.001)
Invoke-AtomicTest T1059.001

# Persistence - Registry Run Key (T1547.001)
Invoke-AtomicTest T1547.001

# Defense Evasion - Clear Windows Event Logs (T1070.001)
Invoke-AtomicTest T1070.001

# Credential Access - Mimikatz (T1003.001)
Invoke-AtomicTest T1003.001

# Discovery - System Information (T1082)
Invoke-AtomicTest T1082

# Lateral Movement - Remote Services (T1021)
Invoke-AtomicTest T1021.002  # SMB/Windows Admin Shares
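
For a session covering several techniques, a simple loop keeps execution and cleanup consistent and emits one timestamp per technique for alert correlation (a sketch; the technique list and 60-second pacing are illustrative):

# Run a batch of atomic tests with timestamps for alert correlation
$techniques = 'T1059.001', 'T1547.001', 'T1082'
foreach ($t in $techniques) {
    Write-Host "[$((Get-Date).ToUniversalTime().ToString('o'))] Executing $t"
    Invoke-AtomicTest $t -TestNumbers 1
    Invoke-AtomicTest $t -TestNumbers 1 -Cleanup
    Start-Sleep -Seconds 60   # leave a gap so detections are attributable
}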

Detection Engineering

Building Detection Rules

# Example Sigma rule for PowerShell Base64 execution
title: PowerShell Base64 Encoded Command
status: experimental
logsource:
    product: windows
    service: powershell
detection:
    selection:
        EventID: 4104
        ScriptBlockText|contains:
            - '-encodedcommand'
            - '-enc'
            - '-e '
            - 'FromBase64String'
    condition: selection
falsepositives:
    - Legitimate admin scripts
level: medium
tags:
    - attack.execution
    - attack.t1059.001
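
Sigma rules are backend-agnostic; the sigma-cli tool converts them into concrete SIEM queries. A sketch targeting Splunk, assuming the rule above is saved as powershell_b64.yml (plugin and pipeline names vary by backend, so check the sigma-cli documentation for your SIEM):

# Install the converter and the Splunk backend, then convert the rule
pip install sigma-cli
sigma plugin install splunk
sigma convert -t splunk -p splunk_windows powershell_b64.yml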

Detection Validation Checklist

FOR EACH TECHNIQUE TESTED:

□ DETECTION FIRED
  ├── Which rule/alert triggered?
  ├── How long to alert? (Time to Detect)
  └── Was context sufficient for investigation?

□ LOGS CAPTURED (see the query sketch after this checklist)
  ├── All expected log sources collected?
  ├── Retention sufficient for investigation?
  └── Critical fields present?

□ RESPONSE VALIDATED
  ├── Alert routed correctly?
  ├── Playbook followed?
  └── Containment possible?

□ GAPS IDENTIFIED
  ├── What wasn't detected?
  ├── What logs were missing?
  └── What would improve detection?

□ IMPROVEMENTS MADE
  ├── Detection rule created/tuned
  ├── Log collection configured
  └── Playbook updated
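
The LOGS CAPTURED check often reduces to one quick query: did the expected events actually land? A sketch verifying that PowerShell script block logs (event 4104) were captured during the last hour of testing:

# Confirm script block logs (4104) were captured during the test window
Get-WinEvent -FilterHashtable @{
    LogName   = 'Microsoft-Windows-PowerShell/Operational'
    Id        = 4104
    StartTime = (Get-Date).AddHours(-1)
} | Select-Object TimeCreated, @{n='Snippet'; e={ $_.Message.Substring(0, [Math]::Min(80, $_.Message.Length)) }}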

Purple Team Exercise Templates

Exercise 1: Phishing to Domain Admin

SCENARIO: Simulated spear-phishing campaign leading to domain compromise

OBJECTIVES:
├── Test email gateway effectiveness
├── Validate endpoint detection capabilities
├── Test lateral movement detection
└── Validate DA compromise alerting

ATTACK CHAIN:
1. Initial Access: Macro-enabled document via email
2. Execution: PowerShell download cradle
3. Persistence: Scheduled task creation
4. Discovery: AD enumeration with BloodHound
5. Credential Access: Kerberoasting
6. Lateral Movement: Pass-the-Hash
7. Domain Compromise: DCSync (T1003.006, OS Credential Dumping)

DETECTION CHECKPOINTS:
├── Email gateway blocked/quarantined?
├── Macro execution blocked?
├── PowerShell execution logged?
├── Scheduled task creation alerted?
├── LDAP enumeration detected?
├── Kerberos TGS requests anomaly?
├── Pass-the-Hash indicators?
└── DCSync replication detected?

SUCCESS CRITERIA:
├── ≥80% of attack stages detected
├── Time to detect <15 minutes for critical stages
└── Response playbook successfully executed
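
Several checkpoints above map directly to Windows Security events. For example, the scheduled-task persistence step should surface as event 4698, provided "Audit Other Object Access Events" is enabled; a quick post-execution check:

# Verify the scheduled-task creation (chain step 3) reached the Security log
# (reading the Security log requires elevation)
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4698 } -MaxEvents 10 |
    Select-Object TimeCreated, Message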

Exercise 2: Ransomware Simulation

SCENARIO: Simulated ransomware attack without actual encryption

OBJECTIVES:
├── Test ransomware detection mechanisms
├── Validate backup and recovery procedures
├── Test incident response capabilities
└── Measure time to containment

ATTACK CHAIN:
1. Initial Access: Exploited VPN vulnerability
2. Persistence: Registry run key + scheduled task
3. Defense Evasion: Disable Windows Defender
4. Discovery: Network share enumeration
5. Collection: Identify high-value files
6. Exfiltration: Stage data for exfil
7. Impact: Simulate encryption (file rename only)

DETECTION CHECKPOINTS:
├── VPN anomaly detected?
├── Persistence mechanisms alerted?
├── AV tampering detected?
├── Mass file access patterns?
├── Large data staging detected?
├── Encryption behavior indicators?
└── Backup integrity verified?

TABLETOP ADDITIONS:
├── Ransom negotiation simulation
├── Communications plan execution
├── Legal/PR notification process
└── Recovery timeline estimation
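
The Impact step can be approximated without risking real data: create canary files in an isolated directory and rename them in bulk, which is usually enough to exercise mass-file-modification detections. A sketch (directory and extension are placeholders; run it only against a dedicated test path):

# Simulate ransomware impact via bulk renames in an isolated directory
$testDir = 'C:\PurpleTeam\ransomware-sim'
New-Item -ItemType Directory -Path $testDir -Force | Out-Null

# Create canary files, then rename them rapidly to mimic encryption
1..200 | ForEach-Object {
    Set-Content -Path (Join-Path $testDir "doc_$_.txt") -Value 'canary'
}
Get-ChildItem $testDir -Filter '*.txt' | ForEach-Object {
    Rename-Item -Path $_.FullName -NewName ($_.Name + '.encrypted')
}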

Measuring Success

Purple Team Metrics

Metric                       Description                              Target
Detection Coverage           % of ATT&CK techniques with detection    >70%
Mean Time to Detect (MTTD)   Time from attack to alert                <15 min
Mean Time to Respond (MTTR)  Time from alert to containment           <1 hour
False Positive Rate          % of alerts that are benign              <30%
Detection Fidelity           Quality of detection context             High
Techniques Tested/Quarter    Continuous testing velocity              >20
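
MTTD falls straight out of the timestamps captured during execution. A sketch computing it from a CSV of paired attack/alert times (column names are hypothetical; adapt them to however you record timestamps):

# Compute Mean Time to Detect from paired attack/alert timestamps
$events  = Import-Csv 'C:\PurpleTeam\detections.csv'   # columns: Technique, AttackTime, AlertTime
$minutes = $events | ForEach-Object {
    ([datetime]$_.AlertTime - [datetime]$_.AttackTime).TotalMinutes
}
$mttd = ($minutes | Measure-Object -Average).Average
"MTTD: {0:N1} minutes across {1} detections" -f $mttd, $minutes.Count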

ATT&CK Heatmap

Track coverage visually with ATT&CK Navigator:

COVERAGE LEVELS:
┌────────────────────────────────────────────┐
│ ████  Detected & Validated (Tested)        │
│ ▓▓▓▓  Detection Exists (Not Tested)        │
│ ░░░░  Partial Detection                    │
│ ____  No Detection                         │
└────────────────────────────────────────────┘

Export from ATT&CK Navigator and track over time.
Goal: Reduce gaps, validate detections quarterly.
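
Navigator layers are plain JSON, so coverage scores can be generated from exercise results rather than clicked in by hand. A minimal sketch emitting a one-technique layer (field names follow the Navigator layer format, but version metadata is omitted and may be required by your Navigator instance; treat it as a starting point):

# Generate a minimal ATT&CK Navigator layer from exercise results
$layer = [ordered]@{
    name       = 'Purple Team Coverage Q1'
    domain     = 'enterprise-attack'
    techniques = @(
        [ordered]@{
            techniqueID = 'T1059.001'
            score       = 2   # e.g., 2 = detected & validated
            comment     = 'Atomic test 1 detected via Sigma rule'
        }
    )
}
$layer | ConvertTo-Json -Depth 4 | Set-Content 'coverage-layer.json'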

Purple Team Tools

Tool              Type               Purpose
Atomic Red Team   Attack Simulation  Small, focused ATT&CK tests
MITRE Caldera     Attack Simulation  Automated adversary emulation
Sigma             Detection          Generic detection rule format
ATT&CK Navigator  Planning           Visualize coverage and gaps
VECTR             Management         Track purple team operations
TheHive           Response           Incident response platform

Best Practices

Purple Team Success Factors
  • Collaboration Over Competition: Both teams share the goal of improving security
  • Start Simple: Begin with well-known techniques before advanced attacks
  • Document Everything: Findings are only valuable if captured and acted upon
  • Iterate Continuously: Regular exercises beat annual pentests
  • Include All Stakeholders: SOC, IR, and engineering should all participate
  • Focus on Improvements: The goal is better security, not proving points
  • Measure Progress: Track metrics over time to show value
Common Pitfalls
  • Treating it as a competition rather than collaboration
  • Testing too many techniques without proper analysis
  • Not following up on identified gaps
  • Excluding key team members from exercises
  • Focusing only on detection, ignoring response
  • Not documenting findings and improvements