When the controller says “Everything’s fine” but your OS can’t see a usable volume.
This is one of the most frustrating RAID 60 failures:
- The controller BIOS or management tool says all drives are Good / Optimal
- The RAID 60 virtual disk shows Online
- But the OS either:
  - doesn’t see a mountable volume at all, or
  - sees a disk/LUN that can’t mount, looks RAW, or throws filesystem errors
On the surface, everything appears healthy. Underneath, one stripe group is out of sync — and the RAID-0 layer is feeding the OS a layout it can’t trust.
1. What’s Really Going Wrong (Plain English)
RAID 60 is two RAID-6 groups (Group A + Group B) striped together as RAID-0.
Your controller only knows two basic truths:
- “Are the members there?”
- “Does the metadata look internally consistent enough to present a volume?”
It does not fully understand:
- whether parity in Group A matches Group B
- whether every stripe returns correct data
- whether previous rebuild attempts partially overwrote blocks
So you get this split reality:
- Controller view: All disks OK, volume Online
- OS view: Volume present but unmountable, RAW, or crashing under load
That’s the hallmark of cross-group parity mismatch or partial overwrite — the array can talk, but what it’s saying no longer matches what the filesystem expects.
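A minimal sketch of that RAID-0 layer makes the split concrete. The 256 KiB stripe unit and the simple round-robin split below are illustrative assumptions, not any particular controller’s layout:

```python
# How the RAID-0 layer maps a logical offset onto the two RAID-6 groups.
# Stripe unit and round-robin ordering are assumed for illustration only.

STRIPE_SIZE = 256 * 1024  # assumed RAID-0 stripe unit

def locate(logical_offset: int):
    """Map a logical byte offset to (group, offset inside that group)."""
    stripe = logical_offset // STRIPE_SIZE
    within = logical_offset % STRIPE_SIZE
    group = stripe % 2                          # even stripes -> Group A, odd -> Group B
    return group, (stripe // 2) * STRIPE_SIZE + within

# A few megabytes of filesystem metadata therefore span BOTH groups:
for off in (0, 256 * 1024, 512 * 1024, 768 * 1024):
    group, group_off = locate(off)
    print(f"logical {off:>7}: group {group}, group offset {group_off}")
```

Because the logical address space alternates between the groups every few hundred kilobytes, any disagreement between Group A and Group B shows up inside almost every filesystem structure at once.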
2. How This Failure Usually Looks
Common symptoms:
- Controller BIOS / GUI:
  - All drives Good / Optimal
  - Both RAID-6 groups Optimal
  - RAID 60 virtual disk Online
- Operating system / hypervisor:
  - Disk appears but has no valid partition table, or
  - Volume shows as RAW / unformatted
  - Mount attempts fail or hang
- Filesystem repair and import tools (chkdsk, fsck, zpool import, etc.):
  - report massive metadata damage
  - see wildly inconsistent structures
  - or refuse to run
You might also see:
- directories that appear but are empty
- “disk I/O” errors when reading specific areas
- application crashes when accessing certain LUNs
The key signal: hardware thinks the array is fine; the filesystem violently disagrees.
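One low-risk way to confirm this split for yourself is a read-only peek at the first two sectors, ideally against a cloned image rather than the live LUN. The image path below is hypothetical and a 512-byte sector size is assumed:

```python
# Read-only check of what the OS actually sees: does the presented LUN
# (here, a cloned image of it) carry any recognizable partition table?

def inspect_image(path: str, sector_size: int = 512):
    with open(path, "rb") as f:
        lba0 = f.read(sector_size)   # MBR or GPT protective MBR
        lba1 = f.read(sector_size)   # GPT header, if present
    has_mbr = len(lba0) == sector_size and lba0[510:512] == b"\x55\xaa"
    has_gpt = lba1[:8] == b"EFI PART"
    print(f"MBR signature : {'present' if has_mbr else 'missing'}")
    print(f"GPT header    : {'present' if has_gpt else 'missing'}")

inspect_image("/images/raid60_lun.img")  # hypothetical path to a cloned image
```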
3. What This Means for Your Data
When RAID 60 shows all drives healthy but the volume won’t mount, it usually means at least one of the following happened:
- A previous rebuild partially wrote over one RAID-6 group
- Background initialization touched parity regions after a failure
- A consistency check ran against mismatched groups
- A prior unsafe shutdown left one group an epoch behind
- A “helpful” tool tried to “repair” the filesystem in place
The result:
- The controller can still assemble a RAID 60 layout
- But the on-disk structures (superblocks, MFT/inodes, journals, etc.) no longer line up with reality
- Every attempt to mount or repair on the live array risks making that damage permanent
Your data is often still recoverable, but only by reconstructing the correct stripe order and parity domains offline, then repairing the filesystem on a safe copy.
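To make this concrete, here is a toy simulation of the “one group an epoch behind” case from the list above. The 8-byte stripes, the labels, and the round-robin split are purely illustrative, not any vendor’s on-disk format:

```python
# Toy model: the RAID-0 layer interleaves two groups; Group A has committed
# a newer "epoch" of filesystem metadata while Group B still holds the old one.

STRIPE = 8  # illustrative stripe unit

def split(data: bytes):
    """Write path: interleave data across Group A and Group B."""
    a, b = bytearray(), bytearray()
    for i in range(0, len(data), STRIPE):
        (a if (i // STRIPE) % 2 == 0 else b).extend(data[i:i + STRIPE])
    return bytes(a), bytes(b)

def assemble(a: bytes, b: bytes, length: int):
    """Read path: re-interleave the two groups into one logical image."""
    out, ia, ib = bytearray(), 0, 0
    for i in range(0, length, STRIPE):
        if (i // STRIPE) % 2 == 0:
            out.extend(a[ia:ia + STRIPE]); ia += STRIPE
        else:
            out.extend(b[ib:ib + STRIPE]); ib += STRIPE
    return bytes(out)

old = b"SUPER-v1INODE-v1JOURN-v1BITMP-v1"
new = b"SUPER-v2INODE-v2JOURN-v2BITMP-v2"

a_new, _ = split(new)   # Group A: current epoch
_, b_old = split(old)   # Group B: one epoch behind

print(assemble(a_new, b_old, len(new)))
# b'SUPER-v2INODE-v1JOURN-v2BITMP-v1'  <- two generations interleaved
```

Each group looks internally fine, which is all the controller checks, yet the assembled volume mixes two generations of metadata that no filesystem can parse.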
4. What NOT To Do
Do NOT:
- ❌ Run filesystem repair (chkdsk, fsck, zpool/zfs repair) on the live RAID 60
- ❌ Reinitialize or “recreate” the virtual disk
- ❌ Start or restart a rebuild
- ❌ Run “consistency check” / “verify and fix” on the controller
- ❌ Replace more drives just because SMART looks marginal
- ❌ Move drives between slots or to a new controller and import foreign config
All of these risk:
- zeroing already-damaged metadata
- committing the wrong parity state permanently
- spreading localized corruption across the array
5. What To Do Instead (Correct Triage)
Step 1 — Freeze the current state
- Stop any background tasks (consistency checks, verifications, etc.)
- Do not attempt new repairs or rebuilds.
Step 2 — Clone every member disk
- Make bit-for-bit images of all drives, including the ones marked “Good.”
- Preserve slot → serial → WWN mapping for each disk.
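A rough sketch of this step on a Linux recovery host, assuming GNU ddrescue is installed and the members are visible as /dev/sd? devices. The slot map, device names, and /images paths are placeholders you would adapt:

```python
# Record slot -> serial -> WWN for every member, then image each one with
# ddrescue so bad sectors are tracked in a map file. Read-only queries plus
# read-only imaging; nothing is written to the member disks.

import json
import subprocess

# Hypothetical slot-to-device mapping taken from the controller / backplane:
SLOT_MAP = {1: "/dev/sdb", 2: "/dev/sdc", 3: "/dev/sdd"}  # ...one entry per member

def identify(dev: str) -> dict:
    """Return serial and WWN as reported by lsblk (read-only query)."""
    out = subprocess.run(
        ["lsblk", "-d", "--json", "-o", "NAME,SERIAL,WWN,SIZE", dev],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)["blockdevices"][0]

for slot, dev in SLOT_MAP.items():
    info = identify(dev)
    print(f"slot {slot}: {dev} serial={info.get('serial')} wwn={info.get('wwn')}")
    subprocess.run(
        ["ddrescue", "-d", "-r1", dev,
         f"/images/slot{slot}.img", f"/images/slot{slot}.map"],
        check=True,
    )
```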
Step 3 — Reconstruct both RAID-6 groups virtually
- Treat Group A and Group B as independent arrays.
- Use parity analysis to verify:
- correct member order
- stripe size
- parity rotation
- epoch alignment
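A parity probe along these lines can score candidate member orders and stripe sizes. This sketch checks only the XOR (P) parity and ignores the Reed-Solomon Q parity, which is enough to rank candidates but not to prove a layout; the image names follow the imaging sketch above:

```python
# For a candidate member order and stripe size, each stripe row of a healthy
# RAID-6 group should contain exactly one block (Q) whose exclusion makes the
# XOR of the remaining blocks (data + P) come out to zero.

from functools import reduce

def xor_blocks(blocks):
    return reduce(lambda x, y: bytes(p ^ q for p, q in zip(x, y)), blocks)

def probe_rows(images, stripe_size, rows_to_check=64):
    """images: open file objects for one group, in candidate member order."""
    consistent = 0
    for row in range(rows_to_check):
        blocks = []
        for img in images:
            img.seek(row * stripe_size)
            blocks.append(img.read(stripe_size))
        # Try excluding each member as the Q block for this row:
        if any(not any(xor_blocks(blocks[:i] + blocks[i + 1:]))
               for i in range(len(blocks))):
            consistent += 1
    return consistent / rows_to_check

# Usage sketch: score every candidate order / stripe size and keep the best.
# files = [open(f"/images/slot{n}.img", "rb") for n in (1, 2, 3, 4, 5, 6)]
# print(probe_rows(files, 64 * 1024))
```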
Step 4 — Rebuild RAID 60 in a virtual model
- Once both groups are mathematically sound, assemble RAID 60 offline.
- Validate that blocks in corresponding stripes read consistently.
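One way to do this is a small read-only model over the two reconstructed group images. The file names and the 256 KiB RAID-0 stripe unit are assumptions carried over from the earlier sketches:

```python
# Virtual RAID 60: read logical-volume offsets by re-interleaving the two
# reconstructed RAID-6 group images. Strictly read-only.

STRIPE_SIZE = 256 * 1024  # assumed RAID-0 stripe unit

class VirtualRaid60:
    def __init__(self, group_a_path: str, group_b_path: str):
        self.groups = [open(group_a_path, "rb"), open(group_b_path, "rb")]

    def read(self, offset: int, length: int) -> bytes:
        out = bytearray()
        while length > 0:
            stripe = offset // STRIPE_SIZE
            within = offset % STRIPE_SIZE
            chunk = min(length, STRIPE_SIZE - within)
            img = self.groups[stripe % 2]              # even stripes: A, odd: B
            img.seek((stripe // 2) * STRIPE_SIZE + within)
            out.extend(img.read(chunk))
            offset += chunk
            length -= chunk
        return bytes(out)

# Usage sketch: confirm the logical volume now starts with a sane boot sector
# before handing it to any filesystem tooling.
# vol = VirtualRaid60("/recovery/group_a.img", "/recovery/group_b.img")
# print(vol.read(0, 512)[510:512])   # expect b"\x55\xaa" for an MBR / protective MBR
```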
Step 5 — Analyze and repair the filesystem on the virtual array
- Run filesystem analysis and repair only against the reconstructed image, never the original members.
- Extract data to a new, known-good storage platform.
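A sketch of that workflow, with all paths hypothetical: copy the reconstructed image, run a read-only filesystem check first, and only decide on repairs after reviewing its report:

```python
# Repairs happen only on a working copy of the reconstructed image; the
# original images and the physical members are never touched.

import shutil
import subprocess

SRC = "/recovery/raid60_reconstructed.img"   # output of the virtual assembly
WORK = "/recovery/raid60_working_copy.img"   # the only file repair tools may touch

shutil.copyfile(SRC, WORK)

# Read-only inspection first (-n answers "no" to every repair prompt). This
# assumes an ext-family filesystem; use the matching tool for NTFS, XFS, ZFS, etc.
subprocess.run(["fsck", "-n", WORK], check=False)

# Only after reviewing the report would an actual repair run against WORK,
# followed by copying the recovered files out to new, known-good storage.
```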
Diagnostic Overview
- Device: RAID 60 array (two RAID-6 groups striped as RAID-0)
- Observed State: All drives show “Good” and virtual disk Online, but OS cannot mount or sees volume as RAW
- Likely Cause: Cross-group parity mismatch, partial overwrite from prior rebuild/verify, or filesystem metadata damage on a still-present LUN
- Do NOT: Run filesystem repair on live array, start rebuild, run verify/fix, reinitialize, or replace additional drives without imaging
- Recommended Action: Clone all disks, reconstruct each RAID-6 group offline, virtualize RAID 60, then perform filesystem repair and data recovery on the reconstructed image