All Drives Look Good — But the Array Won’t Mount. Now What?
RAID 60 is sold as “bulletproof” — dual parity + striping across multiple RAID 6 groups.
But in real-world failures, RAID 60 breaks in ways RAID cards don’t warn you about:
- A single RAID 6 subgroup is out of sync
- Stripe groups no longer agree on parity epoch
- One group’s metadata rolled back after power loss
- A foreign config appears on only one subgroup
- A rebuild started on one group but not the others
RAID 60 rides out individual drive failures, but not a subgroup falling out of line:
when even one RAID 6 set becomes inconsistent, the array fails catastrophically.
1. What’s Actually Going Wrong? (Plain English)
Your disks may all show “Healthy”…
Your controller may show “Optimal”…
…but RAID 60 is broken if any stripe group reports:
- mismatched parity
- stale metadata
- a rebuild that started only partially
- slot reordering
- a foreign config the other groups don’t share
RAID 0 sits on top of the entire structure.
If even one RAID 6 unit can’t answer a read, RAID 0 collapses —
and the entire array drops offline.
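A minimal Python sketch of that dependency (the group states here are hypothetical inputs, not anything a controller actually exports):

```python
# Toy model of RAID 60 availability. Illustrative only: "failed" and
# "in_sync" are hypothetical values, not fields read from a controller.

def raid6_group_readable(failed_members: int, in_sync: bool) -> bool:
    """A RAID 6 group can serve reads with up to two lost members,
    but only if its parity and metadata are still consistent."""
    return failed_members <= 2 and in_sync

def raid60_online(groups: list[dict]) -> bool:
    """The top-level RAID 0 stripe needs every group to answer reads."""
    return all(raid6_group_readable(g["failed"], g["in_sync"]) for g in groups)

groups = [
    {"failed": 0, "in_sync": True},   # group 0: healthy
    {"failed": 0, "in_sync": False},  # group 1: all drives "good", metadata stale
]
print(raid60_online(groups))  # False: one inconsistent group offlines the array
```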
2. The Real Failure Surfaces in RAID 60
A. Group-Level Parity Divergence
A single RAID 6 group updated parity after a power loss, while the others didn’t.
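One way to see which group diverged is to spot-check P parity per group, working only on disk images. The chunk size, the member holding Q for the sampled stripe, and a data area starting at offset 0 are all assumptions here; every controller lays this out differently:

```python
# Spot-check P parity for one RAID 6 group against member IMAGES (never
# the live drives). Assumptions: 64 KiB chunks, data starts at offset 0
# of each image, and you already know which member holds Q for the
# sampled stripe. All of these are controller/layout dependent.
from functools import reduce

CHUNK = 64 * 1024  # assumed chunk (strip) size

def read_chunk(path: str, offset: int, size: int = CHUNK) -> bytes:
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(size)

def p_parity_ok(images: list[str], stripe_no: int, q_index: int) -> bool:
    """XOR of the data chunks and P must be all zeros; Q is excluded."""
    offset = stripe_no * CHUNK
    chunks = [read_chunk(p, offset) for i, p in enumerate(images) if i != q_index]
    xored = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks)
    return not any(xored)

# Sample many stripes per group; a group with widespread mismatches is
# the one whose parity diverged.
```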
B. Stripe Alignment Breaks Across Groups
One group “thinks” the stripe width or block size changed, so its stripes no longer line up with the rest.
C. Mixed Epoch / Generation Numbers
Controller cache and on-disk headers disagree across groups.
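Once the header format is known, comparing generation counters across per-disk metadata dumps makes the split visible. The offset and field width below are placeholders, not a real controller layout:

```python
# Group member disks by the generation / epoch counter found in their
# metadata dumps. Real formats (SNIA DDF, vendor headers) differ; the
# offset and width here are PLACEHOLDERS for whatever parser applies.
import struct
from collections import defaultdict

GEN_OFFSET = 0x58   # hypothetical location of the generation counter
GEN_FORMAT = "<Q"   # hypothetical: little-endian 64-bit

def generation(dump_path: str) -> int:
    with open(dump_path, "rb") as f:
        f.seek(GEN_OFFSET)
        (gen,) = struct.unpack(GEN_FORMAT, f.read(8))
    return gen

def group_by_generation(dumps: dict[str, str]) -> dict[int, list[str]]:
    """dumps maps a disk label to the path of its extracted metadata region."""
    seen = defaultdict(list)
    for label, path in dumps.items():
        seen[generation(path)].append(label)
    return dict(seen)

# More than one key in the result means the subgroups disagree on epoch:
# every drive "Healthy", array offline.
```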
D. Controller Foreign Config Split-Brain
Only one RAID 6 subgroup shows as foreign — the others do not.
E. Rebuild Stalls or Starts on the Wrong Group
Even a partial rebuild attempt can ruin consistency.
These behaviors are invisible to most admin tools —
but fatal to RAID 60 integrity.
3. What This Means for Your Data
- The array is usually recoverable only if every member is imaged first (a minimal imaging sketch follows this list)
- Drives may appear healthy, but their metadata is not
- Parity math cannot be trusted across all stripe groups
- A forced foreign import can permanently destroy the layout
- An incomplete rebuild can overwrite valid parity
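A minimal sketch of the image-first rule, assuming a readable Linux block device at a placeholder path like /dev/sdX; for marginal or failing media, a hardware imager or ddrescue is the better tool:

```python
# Read-only, hash-recorded imaging of one member disk. "/dev/sdX" is a
# placeholder; image every member this way before any diagnostic step,
# and do all later analysis against the images.
import hashlib

def image_disk(device: str, image_path: str, chunk: int = 4 * 1024 * 1024) -> str:
    digest = hashlib.sha256()
    with open(device, "rb") as src, open(image_path, "wb") as dst:
        while True:
            block = src.read(chunk)
            if not block:
                break
            dst.write(block)
            digest.update(block)
    return digest.hexdigest()  # keep this so later work can be re-verified

# sha = image_disk("/dev/sdX", "/images/group0_disk0.img")
```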
4. Symptom → Explanation → What to Do Next
Symptom 1: RAID 60 Offline — All Drives Healthy
Cause: One stripe group disagrees with the others on parity epoch or metadata.
Next Step: Image all members; verify metadata alignment across groups.
Symptom 2: Foreign Config Only on Some Drives
Cause: A controller rollback event or staggered hot-swap created split-brain metadata.
Next Step: Do not import foreign; inspect subgroup headers offline.
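One offline inspection that costs nothing is a side-by-side hex dump of the tail of each member image. Many controllers keep metadata near the end of the disk (the SNIA DDF anchor, for example), but the exact location varies, so treat the window here as an assumption:

```python
# Hex-dump the last bytes of each member image for visual comparison.
# The assumption that metadata sits at the very end is controller
# dependent; widen or move the window as needed. Reads images only.
import os

def tail_hex(image_path: str, length: int = 512) -> str:
    size = os.path.getsize(image_path)
    with open(image_path, "rb") as f:
        f.seek(max(0, size - length))
        data = f.read(length)
    return "\n".join(data[i:i + 16].hex(" ") for i in range(0, len(data), 16))

# for img in ("g0_d0.img", "g0_d1.img", "g1_d0.img"):  # hypothetical names
#     print(img); print(tail_hex(img))
```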
Symptom 3: Rebuild Will Not Start (or Starts on One Group Only)
Cause: A partial rebuild attempt exists, freezing the entire upper-layer RAID 0.
Next Step: Extract all metadata before any action.
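A sketch of that extraction, assuming the controller's metadata lives somewhere in the first or last 16 MiB of each member image (a guess; adjust the window for your hardware):

```python
# Preserve candidate metadata regions from every member image before
# anything else happens. Window size and placement are assumptions.
import os

WINDOW = 16 * 1024 * 1024  # assumed head/tail window

def extract_metadata(image_path: str, out_dir: str) -> None:
    """Copy the first and last WINDOW bytes of an image into out_dir."""
    os.makedirs(out_dir, exist_ok=True)
    size = os.path.getsize(image_path)
    name = os.path.basename(image_path)
    with open(image_path, "rb") as src:
        for tag, offset in (("head", 0), ("tail", max(0, size - WINDOW))):
            src.seek(offset)
            with open(os.path.join(out_dir, f"{name}.{tag}"), "wb") as dst:
                dst.write(src.read(WINDOW))
```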
Symptom 4: RAID 60 Mounts — But Data is Missing / Corrupt
Cause: One group served data reconstructed from outdated parity during a rebuild attempt.
Next Step: Rebuild structure from disk images, not controller state.
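A sketch of the top-level reassembly, assuming each RAID 6 group has already been flattened into a data-only image and that the top-level stripe size is 64 KiB (both are assumptions; your geometry will differ):

```python
# De-stripe the top-level RAID 0 of a RAID 60 set across per-group images.
# Each image is assumed to be the reconstructed, data-only contents of one
# RAID 6 group; handling the RAID 6 layout inside each group comes first
# and is controller specific.
STRIPE = 64 * 1024  # assumed top-level stripe (chunk) size

def locate(logical_offset: int, group_count: int) -> tuple[int, int]:
    """Map a RAID 60 logical byte offset to (group index, offset in group)."""
    chunk_no, within = divmod(logical_offset, STRIPE)
    return chunk_no % group_count, (chunk_no // group_count) * STRIPE + within

def read_raid60(group_images: list[str], offset: int, length: int) -> bytes:
    """Read a logical byte range by hopping across the per-group images."""
    out = bytearray()
    while length > 0:
        group, g_off = locate(offset, len(group_images))
        take = min(length, STRIPE - (offset % STRIPE))  # stay inside one chunk
        with open(group_images[group], "rb") as f:
            f.seek(g_off)
            out += f.read(take)
        offset += take
        length -= take
    return bytes(out)
```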
5. How ADR Recovers RAID 60 Safely
ADR’s RAID Inspector™ evaluates:
- Per-group parity rotation
- Stripe-set alignment
- NVRAM vs. on-disk epoch
- Inter-group metadata coherence
- Rebuild residues
- Controller rollback behavior
This allows reconstruction of each RAID 6 subgroup independently, then validation of the recombined RAID 60 structure — without destructive writes.
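One way to validate a recombined, read-only reconstruction is to look for well-known filesystem signatures instead of trusting controller state. The checks below use the published NTFS and ext-family magic values and assume you point them at the start of the filesystem (partition offsets are not handled):

```python
# Sanity checks on a recombined RAID 60 image: do recognizable filesystem
# signatures appear where they should? Pass the byte offset at which the
# filesystem starts (partition table parsing is out of scope here).
def looks_like_ntfs(image: str, offset: int = 0) -> bool:
    with open(image, "rb") as f:
        f.seek(offset + 3)               # OEM ID field in the boot sector
        return f.read(8) == b"NTFS    "

def looks_like_ext(image: str, offset: int = 0) -> bool:
    with open(image, "rb") as f:
        f.seek(offset + 1024 + 0x38)     # s_magic in the ext superblock
        return f.read(2) == b"\x53\xef"  # 0xEF53 stored little-endian
```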
6. For Immediate Help
If your RAID 60 is offline, the failure has already exceeded the array's redundancy.
Before you touch anything: Call ADR at 1-800-228-8800
Diagnostic Overview
- Device: RAID 60 array (two or more RAID 6 groups striped together as RAID 0)
- Observed State: All drives “Good” — Virtual Disk missing or offline
- Likely Cause: Cross-group parity divergence, staggered stripe commits, or foreign-config drift
- Do NOT: Import foreign configs, launch rebuild, replace more drives, or force reinitialize
- Recommended Action: Clone all disks, extract group metadata, virtualize groups offline, evaluate alignment before any repair