When RAID 60 Goes Offline, Looks Healthy, But Won’t Mount — This Is Your Roadmap.
RAID 60 is marketed as the “bulletproof” enterprise RAID level — dual parity on each group (RAID 6) and striping across both (RAID 0).
In real incidents, however, it fails in predictable, diagnosable, and recoverable patterns.
This hub page is your symptom-to-cause directory — every major failure mode RAID 60 can produce, each linked to a dedicated deep-dive page with full technical details and citations from TN-R60-001, ADR’s authoritative reference on RAID 60 behavior.
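To ground the terminology used below, here is a minimal sketch (in Python, with illustrative layout parameters rather than any vendor's actual geometry) of how a RAID 60 controller might map a logical chunk to a physical group, stripe, and disk:

```python
# Sketch: mapping a logical chunk to a (group, stripe, disk) position in
# RAID 60. The group count, disk count, and parity rotation here are
# illustrative assumptions, not any controller's real on-disk layout.

def raid60_locate(chunk: int, groups: int = 2, disks_per_group: int = 6):
    """Return (group, stripe, disk) for a logical chunk index.

    Each RAID 6 group holds (disks_per_group - 2) data chunks per
    stripe; the RAID 0 layer alternates chunks across the groups.
    """
    data_per_stripe = disks_per_group - 2   # two chunks per stripe go to P and Q parity
    group = chunk % groups                  # RAID 0 striping across the groups
    idx = chunk // groups                   # chunk index within that group
    stripe = idx // data_per_stripe
    slot = idx % data_per_stripe
    # Left-symmetric-style rotation so parity does not pin to one disk.
    disk = (slot + stripe) % disks_per_group
    return group, stripe, disk

# Example: twelve disks arranged as two six-disk RAID 6 groups.
print(raid60_locate(0))   # first chunk lands in group 0, stripe 0
print(raid60_locate(1))   # second chunk lands in group 1
```

The point of the sketch: every logical chunk depends on one of two independent parity domains, and the RAID-0 layer silently assumes those domains agree. That assumption is what the failure modes below violate.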
1. The Most Common RAID 60 Failure Symptoms
1. RAID 60 Failed — Group-Level Inconsistency
Why the array dies even though all disks look good.
Group A and Group B no longer agree on write history.
→ https://www.adrdatarecovery.com/raid-triage-center/raid-60-group-level-inconsistency/
2. RAID 60 Showing All Drives Healthy — But Won’t Mount
Array is intact, but the RAID-0 layer cannot assemble because parity epochs diverged.
→ https://www.adrdatarecovery.com/raid-triage-center/raid-60-healthy-no-mount/
3. RAID 60 Rebuild Won’t Start (0% or Immediate Abort)
Controller refuses because the two RAID-6 groups disagree on geometry or sequence.
→ https://www.adrdatarecovery.com/raid-triage-center/raid-60-rebuild-wont-start/
4. RAID 60 Corruption After Drive Swap — Now Unreadable
A replacement drive triggers metadata comparison, exposing mismatched groups.
→ https://www.adrdatarecovery.com/raid-triage-center/raid-60-swap-corruption/
5. RAID 60 Virtual Disk Missing — Or Wrong Size After Restart
Controller sees a foreign config conflict or cannot decide which group is authoritative.
→ https://www.adrdatarecovery.com/raid-triage-center/raid-60-virtual-disk-missing/
6. RAID 60 Online But Directories Empty or Wrong Size
RAID-0 rebuilt over one degraded RAID-6 group, creating ghost sectors.
→ https://www.adrdatarecovery.com/raid-triage-center/raid-60-online-empty/
2. The Real Reason RAID 60 Fails (What No OEM Explains)
RAID 60 is really two arrays pretending to be one.
If Group A and Group B disagree on:
- parity epoch
- sequence number
- stripe width
- commit order
- geometry
- slot identity
…the RAID-0 layer cannot assemble the array safely.
The controller hides the array as a protective action, not a destructive one.
This is why RAID 60 arrays often show:
- all disks GOOD
- no volume
- no mount
- rebuild won’t start
- foreign configs detected
- wrong LUN size
This is normal behavior when groups diverge.
TN-R60-001 documents the exact internal mechanics.
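The cross-group check described above amounts to a field-by-field comparison before the RAID-0 layer is allowed to assemble. A minimal sketch, with field names that are purely illustrative and not any vendor's actual metadata format:

```python
# Sketch: the kind of cross-group consistency check a controller runs
# before assembling the RAID 0 layer. All field names are illustrative
# assumptions, not a real on-disk metadata schema.

CRITICAL_FIELDS = ("parity_epoch", "sequence", "stripe_width",
                   "commit_order", "geometry", "slot_map")

def groups_diverged(group_a: dict, group_b: dict) -> list:
    """Return the critical fields on which the two groups disagree."""
    return [f for f in CRITICAL_FIELDS if group_a.get(f) != group_b.get(f)]

a = {"parity_epoch": 41, "sequence": 9001, "stripe_width": 256,
     "commit_order": "A-first", "geometry": (6, 2), "slot_map": "v1"}
b = dict(a, parity_epoch=42, sequence=9007)  # group B took writes group A missed

mismatch = groups_diverged(a, b)
if mismatch:
    # A real controller hides the virtual disk at this point rather than
    # assemble a stripe set over inconsistent groups.
    print("refusing to assemble; divergent fields:", mismatch)
```

Any non-empty mismatch list is enough for the controller to withhold the virtual disk, which is why all member drives can report GOOD while the array stays invisible.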
3. When To Stop Immediately (Critical Warning)
Do not perform any of the following on a degraded RAID 60:
- Import foreign config
- Replace multiple drives
- Force rebuild
- Initialize or “fix” metadata
- Run filesystem repair
- Run check/verify on live drives
- Let the controller attempt auto-rebuild
Every one of these can permanently overwrite the only remaining correct parity domain.
4. How ADR Recovers RAID 60 Safely
Step 1 — Clone all disks (including “good” ones)
Latent sector errors on the surviving drives can break the parity math mid-reconstruction.
Step 2 — Extract both groups’ metadata
We pull parity epochs, commit sequences, and group-level mapping.
Step 3 — Virtualize both RAID-6 groups independently
Treat them as separate entities.
Step 4 — Reconcile parity domain mismatch
Identify which group drifted.
Step 5 — Rebuild RAID 60 virtually
Only after both groups’ math is corrected.
Step 6 — Mount safely once parity is fully validated
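The six steps above can be sketched end-to-end on disk images. This is a toy model under stated assumptions: it validates only the XOR (P) parity, whereas real RAID 6 also carries a Reed-Solomon (Q) syndrome, and real recovery operates on full forensic clones, never live drives:

```python
# Toy sketch of the virtual-assembly order of operations: validate each
# RAID 6 group independently, then let the RAID 0 layer interleave them.
# Checks only XOR (P) parity for brevity; real RAID 6 also has Q parity.
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte chunks together (the P parity)."""
    return reduce(lambda x, y: bytes(a ^ b for a, b in zip(x, y)), blocks)

def validate_group(stripes):
    """Each stripe is (data_chunks, p_chunk); return indices of bad stripes."""
    return [i for i, (data, p) in enumerate(stripes)
            if xor_blocks(data) != p]

def assemble_raid60(group_a, group_b):
    """Interleave per-stripe data from two already-validated groups."""
    out = bytearray()
    for (da, _), (db, _) in zip(group_a, group_b):
        for chunk in da + db:   # RAID 0 layer: group A's stripe, then group B's
            out += chunk
    return bytes(out)

# Toy stripes: two 4-byte data chunks plus their P parity, per group.
ga = [([b"AAAA", b"BBBB"], xor_blocks([b"AAAA", b"BBBB"]))]
gb = [([b"CCCC", b"DDDD"], xor_blocks([b"CCCC", b"DDDD"]))]

assert validate_group(ga) == [] and validate_group(gb) == []
print(assemble_raid60(ga, gb))   # b'AAAABBBBCCCCDDDD'
```

Note that assembly happens only after both groups pass validation: this mirrors why the work is done in a virtual model first, so nothing is committed to physical disks until the parity domains agree.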
Diagnostic Overview
- Device: RAID 60 (two RAID-6 groups striped as RAID-0)
- Observed State: Offline, missing VD, or mounts but data missing
- Likely Cause: Cross-group parity divergence, staggered stripe commits, rebuild residue, or foreign-config asymmetry
- Do NOT: Import foreign config, start rebuild, replace more drives, or reinitialize
- Recommended Action: Clone all disks, extract group metadata, virtualize both groups, determine true parity epoch, then rebuild RAID 60 in a virtual model before committing changes
RAID 60 doesn’t degrade gracefully — it fails catastrophically when even one subgroup falls out of line.
What You Are Likely Experiencing
RAID 60 Dropped Offline — All Drives Healthy
RAID 60 Won’t Mount — Virtual Disk Missing
RAID 60 Rebuild Stuck at 0%, 5%, or 10%
RAID 60 Rebuild Never Starts (Foreign Config Detected)
RAID 60 Virtual Disk Missing — Or Wrong Size After Restart
RAID 60 Degraded After Power Loss
RAID 60 Corruption After Drive Replacement
RAID 60 Both Groups Degraded — No Array Found
Technical References (TN-R60-001)
Partial Parity Overwrite During Recovery
Stripe-Group Reconstruction Logic
Parity-Domain Verification & Group Alignment
Safe Forensic Order of Operations