A RAID 60 array can look healthy, show all disks as “GOOD,” and still collapse instantly.
Why?
Because RAID 60 depends on both RAID-6 groups agreeing on the same write history.
If the two groups don’t match — even by one commit — the RAID-0 layer refuses to assemble the array.
This is called group-level inconsistency, and it is the #1 hidden reason RAID 60 arrays fail suddenly, without warning.
1. What “Group-Level Inconsistency” Really Means (Plain English)
RAID 60 is made of:
- Group A = RAID-6
- Group B = RAID-6
- RAID 0 stripes across both
For RAID 60 to work:
- Group A must know what Group B knows
- Group B must have the same write sequence as Group A
- Both groups must reach the same parity epoch
If one group is even slightly ahead — or slightly behind — the RAID-0 layer cannot safely:
- rebuild
- mount
- read
- write
- validate parity
- trust metadata
The controller then hides the entire array to avoid corruption.
This is a protection mechanism, not a catastrophic failure.
It saves the array from overwriting good data with bad math.
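To make this concrete, here is a minimal Python sketch of the assembly check a controller performs conceptually. The GroupState fields (sequence_counter, parity_epoch) and the can_assemble_raid60 helper are illustrative names, not any vendor's actual metadata; real controllers keep equivalent values in proprietary on-disk structures.

```python
from dataclasses import dataclass

@dataclass
class GroupState:
    name: str
    sequence_counter: int   # last committed write number for this RAID-6 group
    parity_epoch: int       # generation counter bumped on each parity update pass

def can_assemble_raid60(group_a: GroupState, group_b: GroupState) -> bool:
    """Allow RAID 0 striping only if both RAID-6 groups agree on write history."""
    if group_a.sequence_counter != group_b.sequence_counter:
        return False   # one group is ahead: striping across them is unsafe
    if group_a.parity_epoch != group_b.parity_epoch:
        return False   # parity domains diverged
    return True

# A single-commit gap is enough to keep the volume offline.
a = GroupState("Group A", sequence_counter=4281, parity_epoch=17)
b = GroupState("Group B", sequence_counter=4280, parity_epoch=17)
print(can_assemble_raid60(a, b))   # False -> controller hides the array
```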
2. How Group-Level Inconsistency Happens
Group-level inconsistency almost always starts long before the array fails.
A. One Group Advanced Further Than the Other
Group A may complete a write at timestamp 08:31:22
Group B may complete the same write at 08:31:21
→ this one-tick difference cascades into a full metadata mismatch at the next drive swap or reboot.
B. Staggered Stripe Writes
Writes hit the groups in this order:
- Group A
- Group B (delayed due to latency)
Later, a drive swap forces verification — and the mismatch surfaces.
C. Background Verify or Patrol Read Interrupted
One group updates parity metadata mid-scan
The other group does not
→ Divergent write epoch.
D. Prior Incomplete Rebuild
A rebuild that stopped at 2% updated:
- parity block on Group A
- metadata on Group B
- nothing on the remaining survivors
This creates split-brain parity.
E. Power Loss During Commit Window
If power drops:
- Group A commits write #4281
- Group B only logs write #4280
Upon restart, the controller detects disagreement and drops the array.
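As a toy model of that commit window, the sketch below walks a stripe write that must commit on both groups and fails partway through; commit_log and write_stripe are hypothetical names, not a real controller API.

```python
class PowerLoss(Exception):
    pass

commit_log = {"Group A": [], "Group B": []}

def write_stripe(write_id: int, fail_before_group_b: bool = False) -> None:
    """A RAID 0 stripe write must commit on both RAID-6 groups to stay consistent."""
    commit_log["Group A"].append(write_id)        # Group A commits its half
    if fail_before_group_b:
        raise PowerLoss("power dropped inside the commit window")
    commit_log["Group B"].append(write_id)        # Group B commits its half

for write_id in range(4278, 4281):
    write_stripe(write_id)                        # normal operation: both groups agree

try:
    write_stripe(4281, fail_before_group_b=True)  # power loss mid-commit
except PowerLoss:
    pass

# On restart the controller sees the disagreement and refuses to assemble.
print(commit_log["Group A"][-1], commit_log["Group B"][-1])   # 4281 4280
```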
3. What It Looks Like to You (the User)
Group-level inconsistency creates classic RAID 60 false-alarm symptoms:
SYMPTOM 1 — All Disks GOOD, Array Offline
The system shows full health — but no volume.
This is the #1 tell-tale sign.
SYMPTOM 2 — “Virtual Disk Missing” or “Foreign Config Detected”
Controller refuses to choose which group is correct.
SYMPTOM 3 — Wrong LUN Size or Wrong Logical Drive Geometry
Common in Dell PERC and HP Smart Array.
SYMPTOM 4 — Rebuild Won’t Start (0% forever)
Because the controller cannot reconcile the two parity domains.
SYMPTOM 5 — Array Mounts But Files Are Wrong Size or Corrupt
The controller guessed — and guessed wrong.
SYMPTOM 6 — Controller Reboot Loop on Import Attempt
Typical when metadata epochs disagree.
4. Why You Cannot Fix Group-Level Inconsistency With Normal Tools
You cannot “force rebuild” a RAID 60 with mismatched groups.
The rebuild has no reference truth.
You cannot choose which group is correct without metadata extraction.
Controllers do not expose parity epochs or sequence frames.
You cannot run filesystem repair.
FSCK/CHKDSK will destroy data if parity math is invalid.
You cannot re-import foreign configs safely.
If wrong, it overwrites the only valid parity domain.
Why?
Because RAID 60 isn’t one array; it’s two arrays pretending to be one.
Only a logic-layer reconstruction can solve this.
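One way to see why per-group health proves nothing: each RAID-6 group can pass its own internal parity check while still disagreeing with the other group. The sketch below uses XOR as a stand-in for the P parity only; real RAID-6 also keeps a Reed-Solomon Q parity.

```python
from functools import reduce

def xor_parity(blocks):
    """XOR all data blocks together (stands in for the RAID-6 P parity)."""
    return reduce(lambda x, y: bytes(a ^ b for a, b in zip(x, y)), blocks)

def group_is_internally_consistent(data_blocks, stored_parity):
    return xor_parity(data_blocks) == stored_parity

# Group A holds the newer stripe, Group B the stale one; both check out locally.
stripe_new = [b"new1", b"new2", b"new3"]
stripe_old = [b"old1", b"old2", b"old3"]

print(group_is_internally_consistent(stripe_new, xor_parity(stripe_new)))  # True
print(group_is_internally_consistent(stripe_old, xor_parity(stripe_old)))  # True
# Both groups look "GOOD", yet striping them together mixes two write generations.
```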
5. What ADR Does to Correct Group-Level Inconsistency
ADR’s RAID Inspector™ is built specifically for parity-domain reconciliation.
Step 1 — Image All Drives (Including Survivors)
Latent read errors turn into corrupt writes in RAID 60. We don’t risk it.
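A minimal sketch of that discipline, assuming Linux block-device paths: every member is copied into an image file and never written. In practice a dedicated imager such as GNU ddrescue handles weak sectors far better; the loop below only shows the read-only principle.

```python
# Read-only imaging loop: copy each member device into an image file.
# Device and image paths are examples; nothing is ever written to the sources.
CHUNK = 4 * 1024 * 1024   # 4 MiB reads keep stress on weak drives modest

def image_drive(device_path: str, image_path: str) -> None:
    with open(device_path, "rb") as src, open(image_path, "wb") as dst:
        while True:
            block = src.read(CHUNK)
            if not block:
                break
            dst.write(block)

# image_drive("/dev/sdb", "groupA_disk0.img")   # repeat for every member of both groups
```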
Step 2 — Extract All Group Metadata
We pull:
- sequence counters
- parity epochs
- commit logs
- stripe-offset maps
- prior rebuild frames
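For illustration, the extraction amounts to reading a known header from each member image, as sketched below. The header offset and field layout are entirely hypothetical; every controller family (PERC, Smart Array, and others) uses its own proprietary format.

```python
import struct

HEADER_OFFSET = 0x200    # hypothetical location of the group metadata block
HEADER_FORMAT = "<QQI"   # sequence_counter, parity_epoch, last_rebuild_frame

def read_group_metadata(image_path: str) -> dict:
    """Pull the per-group counters from one member image (hypothetical layout)."""
    with open(image_path, "rb") as img:
        img.seek(HEADER_OFFSET)
        raw = img.read(struct.calcsize(HEADER_FORMAT))
    sequence_counter, parity_epoch, rebuild_frame = struct.unpack(HEADER_FORMAT, raw)
    return {
        "sequence_counter": sequence_counter,
        "parity_epoch": parity_epoch,
        "rebuild_frame": rebuild_frame,
    }

# Usage: compare the same fields across every member image of both groups.
# for path in ("groupA_disk0.img", "groupA_disk1.img", "groupB_disk0.img"):
#     print(path, read_group_metadata(path))
```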
Step 3 — Virtualize Both RAID-6 Groups Independently
They must be treated as separate arrays first.
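A rough sketch of what virtualizing one group means, assuming a simple rotating RAID-6 layout in which P and Q move back one disk per stripe. Real controllers use vendor-specific layouts, so the mapping below is illustrative only.

```python
class VirtualRaid6Group:
    """Read-only view of one RAID-6 group built from member images."""

    def __init__(self, image_paths, chunk_size=64 * 1024):
        self.images = [open(p, "rb") for p in image_paths]
        self.n = len(image_paths)           # total members, including P and Q
        self.chunk = chunk_size
        self.data_per_stripe = self.n - 2   # two chunks per stripe are parity

    def read_chunk(self, logical_chunk: int) -> bytes:
        stripe, col = divmod(logical_chunk, self.data_per_stripe)
        p_disk = (self.n - 1 - stripe) % self.n   # assumed rotation
        q_disk = (self.n - 2 - stripe) % self.n
        # Walk the disks in order, skipping this stripe's two parity members.
        data_disks = [d for d in range(self.n) if d not in (p_disk, q_disk)]
        disk = data_disks[col]
        self.images[disk].seek(stripe * self.chunk)
        return self.images[disk].read(self.chunk)

# Each group gets its own instance; they are not striped together yet.
# group_a = VirtualRaid6Group(["a0.img", "a1.img", "a2.img", "a3.img"])
# group_b = VirtualRaid6Group(["b0.img", "b1.img", "b2.img", "b3.img"])
```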
Step 4 — Compare & Reconcile Parity Domains
This determines:
- the true write order
- which group diverged
- the exact commit split
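Conceptually, reconciliation reduces to finding the longest shared write history between the two groups, as sketched below. The per-group journals of write IDs are hypothetical exports, and the rule for deciding which group ran ahead is a simplification of the real analysis.

```python
def find_commit_split(journal_a, journal_b):
    """Return (length of shared history, which group ran ahead, if any)."""
    split = 0
    limit = min(len(journal_a), len(journal_b))
    while split < limit and journal_a[split] == journal_b[split]:
        split += 1
    if split == len(journal_a) == len(journal_b):
        return split, None                             # identical histories
    # Simplification: the longer journal is treated as the group that ran ahead.
    ahead = "Group A" if len(journal_a) > len(journal_b) else "Group B"
    return split, ahead

journal_a = [4278, 4279, 4280, 4281]
journal_b = [4278, 4279, 4280]
print(find_commit_split(journal_a, journal_b))   # (3, 'Group A')
```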
Step 5 — Merge Groups Into a Corrected RAID 60 Structure
Using forensic parity math — not controller guesses.
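Building on the group sketch above, the merged structure is simply a RAID 0 mapping over the two virtualized groups, read only up to the reconciled commit split. The alternate-chunk rule shown is the textbook RAID 0 mapping; actual stripe ordering is controller-specific.

```python
class VirtualRaid60:
    """RAID 0 view over two already-virtualized RAID-6 groups."""

    def __init__(self, group_a, group_b):
        self.groups = [group_a, group_b]   # each must expose read_chunk()

    def read_chunk(self, logical_chunk: int) -> bytes:
        # Textbook RAID 0 mapping: even chunks from Group A, odd from Group B.
        group_index, group_chunk = logical_chunk % 2, logical_chunk // 2
        return self.groups[group_index].read_chunk(group_chunk)

# array = VirtualRaid60(group_a, group_b)
# recovered = b"".join(array.read_chunk(i) for i in range(chunks_before_split))
```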
Step 6 — Mount Only After Full Verification
Filesystem repairs only happen once parity is mathematically trusted.
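As an illustration of that gate, a verification pass recomputes parity for every stripe and compares it with what is stored. The sketch below covers only the XOR (P) parity; a full RAID-6 check also recomputes the Reed-Solomon Q parity over GF(256).

```python
from functools import reduce

def xor_blocks(blocks):
    return reduce(lambda x, y: bytes(a ^ b for a, b in zip(x, y)), blocks)

def stripe_p_parity_ok(data_chunks, stored_p) -> bool:
    """Recompute the P parity for one stripe and compare it with what is stored."""
    return xor_blocks(data_chunks) == stored_p

# Mounting is allowed only if every stripe in both groups verifies:
# ok = all(stripe_p_parity_ok(s.data_chunks, s.p_chunk) for s in all_stripes)
```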
Diagnostic Overview
- Device: RAID 60 array (two independent RAID-6 groups striped under RAID-0)
- Observed State: Array offline or failing to initialize even though all drives show “Good”
- Likely Cause: One RAID-6 group diverged in parity epoch, stripe order, or commit timing, breaking cross-group alignment
- Do NOT: Import foreign configs, launch rebuild, force initialize, or replace additional drives
- Recommended Action: Clone all disks, reconstruct each RAID-6 group offline, verify parity-domain agreement, and only then virtualize RAID 60 for safe recovery