The array is intact. The controller just can’t decide which group is telling the truth.
When a RAID 60 array suddenly shows no virtual disk, or the LUN size shrinks/changes after a reboot, it’s almost never a drive failure.
It’s a metadata disagreement between the two RAID-6 groups.
The controller sees Group A reporting one parity epoch and Group B reporting another, and it chooses the safe option:
It hides the virtual disk rather than risk the wrong stripe order.
This behavior is documented extensively in TN-R60-001 — and it’s one of the most misunderstood RAID 60 failure symptoms.
1. What’s Actually Going Wrong (Plain English)
When the system restarts, the controller expects both RAID-6 groups to agree on:
- member order
- stripe width
- block sequence
- parity epoch
- geometry
- last known write
But in real failures, Group A and Group B often disagree because of:
- incomplete parity commits
- backplane timing differences
- foreign-config residue
- staggered drive initialization
- partial rebuild attempts
- NVRAM cache drift
The controller sees these mismatches and responds with a protective lockout:
- The virtual disk disappears
- The LUN reports the wrong size
- Multiple “foreign configs” appear
None of these behaviors are destructive.
They are the controller saying:
“I can’t safely assemble RAID-0 from mismatched groups.”
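The check the controller runs at assembly time can be sketched in a few lines. This is a minimal illustration with made-up field names; real controllers keep this in vendor-specific on-disk metadata and NVRAM and compare far more than six fields.

```python
# Minimal sketch of the cross-group consistency check. Field names are
# illustrative, not a real controller structure.

SHARED_FIELDS = (
    "member_order", "stripe_width", "block_sequence",
    "parity_epoch", "geometry", "last_known_write",
)

def can_assemble(group_a: dict, group_b: dict) -> bool:
    """Return True only if both RAID-6 groups agree on every shared field."""
    mismatches = {
        f: (group_a.get(f), group_b.get(f))
        for f in SHARED_FIELDS
        if group_a.get(f) != group_b.get(f)
    }
    if mismatches:
        # Protective lockout: hide the virtual disk instead of guessing
        # which group's view of the RAID-0 layer is correct.
        print("VD hidden; mismatched fields:", mismatches)
        return False
    return True

# A one-epoch drift is already enough to trigger the lockout:
print(can_assemble({"parity_epoch": 412}, {"parity_epoch": 411}))   # False
```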
2. Recognizing This Failure Pattern
Typical symptoms:
- All drives show GOOD
- Controller sees two RAID-6 groups, but not the combined RAID-0
- Capacity is smaller or missing
- Foreign configs appear after restart
- Rebuild won’t start
- “Offline / Missing” VD
This pattern almost always indicates a cross-group mismatch, not a drive failure.
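As a rule of thumb, the checklist above collapses into a simple predicate. This is a hedged heuristic with illustrative flag names, not output from any controller tool:

```python
def looks_like_cross_group_mismatch(drives_all_good: bool,
                                    vd_missing_or_resized: bool,
                                    foreign_configs_after_restart: bool,
                                    rebuild_wont_start: bool) -> bool:
    """Heuristic form of the checklist: healthy members plus a missing or
    resized VD points at metadata disagreement, not failed media."""
    return drives_all_good and vd_missing_or_resized and (
        foreign_configs_after_restart or rebuild_wont_start
    )
```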
3. What This Means for Your Data
The data is usually recoverable because neither group is destroyed.
The failure is mathematical, not physical:
- RAID-6 Group A progressed further
- RAID-6 Group B lagged or drifted
- The RAID-0 layer cannot map offsets safely
- Controller refuses to “guess”
ADR’s virtual reconstruction determines:
- Which group holds the authoritative epoch
- Which mapping is correct
- How the combined geometry must be reconciled
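To see why the RAID-0 layer cannot map offsets safely, look at the striping arithmetic. The sketch below assumes a simple round-robin RAID-0 layout over the two groups; the real layout is controller-specific, which is exactly why the controller refuses to guess.

```python
def raid0_map(logical_offset: int, strip_size: int, n_groups: int = 2):
    """Map a virtual-disk offset to (group index, offset inside that group)
    for a RAID-0 layer striped round-robin across the RAID-6 groups."""
    strip_index, intra = divmod(logical_offset, strip_size)
    group = strip_index % n_groups
    group_offset = (strip_index // n_groups) * strip_size + intra
    return group, group_offset

# If Group A's metadata claims 64 KiB strips and Group B's claims 128 KiB,
# the same logical byte resolves to two different physical locations:
print(raid0_map(960_000, 64 * 1024))    # (0, 501248) under A's geometry
print(raid0_map(960_000, 128 * 1024))   # (1, 435712) under B's geometry
```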
4. What NOT To Do
Do NOT:
- Import foreign config
- Start rebuild
- Replace additional drives
- Change slot order
- Run filesystem repair
- Let the controller perform automatic rebuild
- Force initialize / reconfigure
Each of these actions risks:
- overwriting correct parity
- committing invalid blocks
- splitting the groups further
- destroying the only good epoch
5. What To Do Instead (Correct Triage)
Step 1 — Clone all drives
Including the “good” ones. Stale or inconsistent parity often lives on the drives that still report healthy.
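One way to script the cloning pass, assuming GNU ddrescue is installed; the device names and image names below are placeholders for the real members.

```python
import subprocess

# Placeholder device-to-image mapping; substitute the actual member drives,
# including the ones the controller still reports as GOOD.
MEMBERS = {
    "/dev/sdb": "group_a_disk0",
    "/dev/sdc": "group_a_disk1",
    # ...one entry per member of both RAID-6 groups
}

for device, name in MEMBERS.items():
    # GNU ddrescue writes a sector-level clone plus a map file of any
    # unreadable regions; -r3 retries bad sectors up to three times.
    subprocess.run(
        ["ddrescue", "-r3", device, f"{name}.img", f"{name}.map"],
        check=True,
    )
```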
Step 2 — Extract metadata from both groups
Get:
- RAID-6 group order
- epoch numbers
- commit sequences
- block layout
- stripe maps
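What Step 2 gathers fits in a small per-group record. The values below are purely illustrative; in practice they come from parsing the controller's on-disk metadata on the cloned images, never on the originals.

```python
from dataclasses import dataclass, field

@dataclass
class GroupRecord:
    """Illustrative per-group metadata extracted from the cloned members."""
    members: list            # cloned images, in recorded group order
    parity_epoch: int        # parity generation counter
    commit_sequence: int     # last committed write sequence
    strip_size: int          # bytes per strip
    stripe_map: dict = field(default_factory=dict)  # stripe row -> rotation info

# Hypothetical values for the two RAID-6 groups:
group_a = GroupRecord(["a0.img", "a1.img", "a2.img", "a3.img"], 412, 90213, 64 * 1024)
group_b = GroupRecord(["b0.img", "b1.img", "b2.img", "b3.img"], 411, 90127, 64 * 1024)
```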
Step 3 — Virtualize each RAID-6 group separately
Treat them as two independent arrays.
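Each group then becomes a standalone, read-only array built from its clones. The sketch below maps a data strip to its member image, assuming a simple rotating P/Q placement; real controllers use several different rotations, which is exactly what the extracted stripe map has to confirm.

```python
def raid6_data_location(data_strip: int, n_disks: int):
    """Map a data-strip index inside one RAID-6 group to (disk, stripe row).
    Assumes P and Q rotate back one disk per row; layouts vary by vendor."""
    data_per_row = n_disks - 2
    row, col = divmod(data_strip, data_per_row)
    p_disk = (n_disks - 1 - row) % n_disks
    q_disk = (n_disks - 2 - row) % n_disks
    data_disks = [d for d in range(n_disks) if d not in (p_disk, q_disk)]
    return data_disks[col], row

def read_data_strip(images: list, data_strip: int, strip_size: int) -> bytes:
    """Read one data strip from the cloned images of a single RAID-6 group."""
    disk, row = raid6_data_location(data_strip, len(images))
    with open(images[disk], "rb") as f:
        f.seek(row * strip_size)
        return f.read(strip_size)
```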
Step 4 — Identify the authoritative parity epoch
Determine which group “moved ahead.”
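Picking the authoritative group is then a comparison over the Step 2 records. A minimal version, assuming a higher parity epoch means the group moved ahead and the commit sequence breaks ties:

```python
def authoritative_group(a, b):
    """Return the group whose metadata moved ahead: higher parity epoch wins,
    higher commit sequence breaks a tie. `a` and `b` follow the Step 2 record."""
    if (a.parity_epoch, a.commit_sequence) >= (b.parity_epoch, b.commit_sequence):
        return a
    return b

# With the illustrative Step 2 values, Group A holds the authoritative epoch:
print(authoritative_group(group_a, group_b).members)
```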
Step 5 — Reconstruct RAID 60 virtually
Only once the math aligns.
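Once the epochs and per-group geometry agree, the RAID-0 layer can be re-applied in software, read-only. A sketch, reusing the round-robin assumption from earlier and the per-group reader from Step 3:

```python
def read_raid60(logical_offset: int, length: int, groups, strip_size: int) -> bytes:
    """Read bytes from the virtual RAID 60: RAID-0 round-robin across two
    virtualized RAID-6 groups. `groups` is a pair of callables, each returning
    one data strip from its group (see the Step 3 reader)."""
    out = bytearray()
    while length > 0:
        strip_index, intra = divmod(logical_offset, strip_size)
        group = strip_index % len(groups)
        group_strip = strip_index // len(groups)
        chunk = groups[group](group_strip)[intra:intra + min(length, strip_size - intra)]
        if not chunk:
            break                      # ran past the end of the reconstructed VD
        out += chunk
        logical_offset += len(chunk)
        length -= len(chunk)
    return bytes(out)

# Example wiring with the Step 2 records and Step 3 reader (illustrative):
# reader_a = lambda s: read_data_strip(group_a.members, s, group_a.strip_size)
# reader_b = lambda s: read_data_strip(group_b.members, s, group_b.strip_size)
# header = read_raid60(0, 4096, (reader_a, reader_b), group_a.strip_size)
```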
Step 6 — Mount safely in a validated staging model
Diagnostic Overview
- Device: RAID 60 (two RAID-6 groups striped as RAID-0)
- Observed State: Virtual Disk missing, wrong size, or not mountable after restart
- Likely Cause: Parity epoch disagreement, foreign-config asymmetry, or staggered stripe commits
- Do NOT: Import foreign config, launch rebuild, replace more drives, or reinitialize
- Recommended Action: Clone all disks, extract group metadata, virtualize groups offline, determine authoritative epoch, correct group alignment before any repair