In this scenario, the controller no longer even tries to assemble the RAID 60 volume.

Instead, you see:

  • both underlying RAID-6 groups flagged Degraded,
  • multiple drives marked “rebuild,” “failed,” or “foreign,”
  • and no RAID 60 virtual disk presented at all.

This is one of the most alarming states an administrator can encounter, but even here the controller is behaving in a way that still leaves room for recovery if you handle it correctly.


RAID 60 relies on:

  • Group A (RAID-6) providing a complete stripe set, and
  • Group B (RAID-6) providing a complete stripe set,

so that the RAID-0 layer can weave them into one volume.
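To make that layering concrete, here is a minimal sketch of how a byte offset in the RAID 60 volume maps through the RAID-0 layer to one of the two RAID-6 groups, and then to a member disk inside that group. The stripe size, disk counts, and parity rotation are illustrative assumptions, not the on-disk format of any particular controller.

    # Hypothetical RAID 60 address mapping: RAID-0 across two RAID-6 groups.
    # Stripe size, disk counts, and parity rotation are assumptions for
    # illustration, not any specific controller's layout.

    STRIPE_SIZE = 64 * 1024          # bytes per strip (assumed)
    GROUPS = 2                       # two RAID-6 groups under the RAID-0 layer
    DISKS_PER_GROUP = 6              # 4 data + 2 parity strips per stripe (assumed)
    DATA_PER_STRIPE = DISKS_PER_GROUP - 2

    def map_offset(offset: int):
        """Map a byte offset in the RAID 60 volume to (group, disk, disk_offset)."""
        strip = offset // STRIPE_SIZE
        within = offset % STRIPE_SIZE

        # RAID-0 layer: alternate strips between the two RAID-6 groups.
        group = strip % GROUPS
        group_strip = strip // GROUPS

        # RAID-6 layer inside the group: rotate P and Q parity each stripe
        # (a left-symmetric-style rotation, assumed purely for illustration).
        stripe = group_strip // DATA_PER_STRIPE
        data_index = group_strip % DATA_PER_STRIPE
        p_disk = (DISKS_PER_GROUP - 1 - stripe) % DISKS_PER_GROUP
        q_disk = (p_disk + 1) % DISKS_PER_GROUP
        data_disks = [d for d in range(DISKS_PER_GROUP) if d not in (p_disk, q_disk)]
        disk = data_disks[data_index]

        disk_offset = stripe * STRIPE_SIZE + within
        return group, disk, disk_offset

    if __name__ == "__main__":
        for off in (0, 64 * 1024, 256 * 1024, 10 * 1024 * 1024):
            print(off, "->", map_offset(off))

The point of the sketch is simply that every logical block depends on both layers: if either RAID-6 group cannot be reconstructed, the RAID-0 interleave above it has holes.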

When both groups are degraded at the same time, it usually means:

  • each group has lost redundancy in ways the controller can’t reconcile,
  • one or more groups experienced partial rebuilds or failed attempts,
  • metadata and parity states diverged far enough that the controller no longer trusts either side.

To prevent catastrophic parity write-back, the controller hides the RAID 60 virtual disk entirely.


This often follows a combination of:

  • drive failures in both groups over time,
  • drive removals/reinsertions in the wrong order,
  • attempted rebuilds that stalled or failed,
  • foreign config imports that were only partially correct,
  • power loss or firmware changes mid-rebuild.

Seen from the controller’s perspective:

  • Group A’s parity history is incomplete or inconsistent,
  • Group B’s parity history is also compromised,
  • any attempt to treat either group as authoritative risks making the situation worse.

So it chooses the safest possible response: no array presented.


This is a serious state, but not always hopeless.

Key realities:

  • There may still be enough surviving parity information across both groups to reconstruct data in a virtual environment.
  • The live controller, however, is no longer the right tool — it sees too many contradictions.
  • Any attempt to “force” it to behave (force imports, initializations, rebuilds) risks shredding what’s left of the good parity domains.

The question is no longer, “How do we bring the array back online?”
It is, “How do we safely extract and reconcile what is left across both degraded groups?”


Do not:

  • repeatedly try different foreign-config import combinations,
  • replace more drives to “get groups back to Optimal,”
  • initialize or recreate the RAID 60 on the same disks,
  • run any controller-level “rebuild,” “verify/fix,” or “consistency” passes,
  • attempt to migrate disks to a new controller and retry imports.

Those actions can:

  • overwrite parity across both groups,
  • scramble already fragile metadata,
  • and destroy your ability to reconstruct the layout offline.

Step 1 — Document everything

  • Capture:
    • the current controller configuration,
    • group membership (which drives belong to which RAID-6 subgroup),
    • any error or event logs.
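One way to make that record durable is a small inventory script. The sketch below (Linux-only) lists block devices and their /dev/disk/by-id names and merges them with a manually maintained slot/group table; the SLOT_MAP entries are placeholders you would fill in from the controller's own configuration view.

    # Minimal inventory sketch (Linux): record device-to-serial mapping alongside
    # manually entered slot and RAID-6 group assignments. SLOT_MAP content is a
    # placeholder -- fill it in from the controller's configuration screens/logs.
    import json
    import os
    import time

    SLOT_MAP = {
        # "/dev/sda": {"slot": 0, "group": "A"},   # example entry (hypothetical)
    }

    def collect_by_id() -> dict:
        """Map resolved device paths to the /dev/disk/by-id names pointing at them."""
        by_id = {}
        base = "/dev/disk/by-id"
        for name in sorted(os.listdir(base)):
            target = os.path.realpath(os.path.join(base, name))
            by_id.setdefault(target, []).append(name)
        return by_id

    def main():
        inventory = []
        for device, ids in collect_by_id().items():
            entry = {"device": device, "ids": ids}
            entry.update(SLOT_MAP.get(device, {}))
            inventory.append(entry)
        out = {"captured_at": time.strftime("%Y-%m-%d %H:%M:%S"), "drives": inventory}
        with open("raid60_inventory.json", "w") as fh:
            json.dump(out, fh, indent=2)
        print(f"Wrote {len(inventory)} entries to raid60_inventory.json")

    if __name__ == "__main__":
        main()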

Step 2 — Clone every member disk

  • Image every drive, including those marked failed or foreign.
  • Preserve clear slot → serial → group mapping.
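If you image from a Linux workstation, GNU ddrescue is a common choice because it tracks unread sectors in a map file and can resume interrupted runs. The sketch below wraps the basic ddrescue invocation (source, image, mapfile) in a loop over a drive list; the device paths and labels are placeholders, and badly failed drives may need multiple passes or hardware-level imaging instead.

    # Sketch: image each member disk with GNU ddrescue, keeping one image and one
    # map file per drive. Device paths and labels below are placeholders.
    import subprocess

    DRIVES = [
        # (device, label) -- label should encode slot / serial / group from Step 1
        ("/dev/sdb", "slot00_groupA"),
        ("/dev/sdc", "slot01_groupA"),
    ]

    DEST = "/mnt/recovery_images"   # destination on separate, healthy storage

    for device, label in DRIVES:
        image = f"{DEST}/{label}.img"
        mapfile = f"{DEST}/{label}.map"
        # Basic ddrescue usage: ddrescue <source> <image> <mapfile>
        # (reads the source; progress is tracked in the map file so an
        #  interrupted run can be resumed without starting over).
        result = subprocess.run(["ddrescue", device, image, mapfile])
        print(label, "exit code:", result.returncode)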

Step 3 — Rebuild each RAID-6 group virtually

  • Analyze Group A’s members and Group B’s members independently.
  • Use parity and metadata analysis to:
    • determine when each group last made a consistent write,
    • reconstruct the most plausible layout for each group.
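A useful first-pass consistency check can be run against the P parity alone, since P is a plain XOR of the data strips; Q parity involves Reed-Solomon arithmetic over GF(2^8) and is omitted here. The sketch below scans one group's disk images stripe by stripe under an assumed fixed, non-rotated layout and reports which stripes still XOR to zero, which helps bracket when that group last wrote consistently. The image file names, stripe size, and layout are all assumptions.

    # Sketch: P-parity (XOR) consistency scan across the images of one RAID-6 group.
    # Assumes a fixed, non-rotated layout purely for illustration; a real analysis
    # must first recover the stripe size, disk order, and parity rotation.
    from functools import reduce

    STRIPE_SIZE = 64 * 1024
    IMAGES = [
        "groupA_disk0.img", "groupA_disk1.img", "groupA_disk2.img",
        "groupA_disk3.img", "groupA_diskP.img", "groupA_diskQ.img",
    ]

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def scan(max_stripes: int = 1000):
        handles = [open(path, "rb") for path in IMAGES[:-1]]  # data + P, skip Q here
        try:
            for stripe in range(max_stripes):
                strips = [h.read(STRIPE_SIZE) for h in handles]
                if any(len(s) < STRIPE_SIZE for s in strips):
                    break  # reached the end of the shortest image
                # XOR of the data strips and the P strip should be all zero bytes.
                residue = reduce(xor_bytes, strips)
                status = "consistent" if not any(residue) else "MISMATCH"
                print(f"stripe {stripe}: {status}")
        finally:
            for h in handles:
                h.close()

    if __name__ == "__main__":
        scan()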

Step 4 — Determine the feasible recovery target

  • You may find:
    • one group is salvageable, the other only partially;
    • or both are salvageable but only up to a prior point in time.
  • Decide whether your best target is:
    • full array at earlier epoch, or
    • partial recovery of subsets of data.
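Once each group has a per-stripe consistency map (for example, from the P-parity scan above), sizing the feasible target is largely a matter of comparing the two maps: only stripes where both groups are still trustworthy can be fully reconstructed. The sketch below treats each map as a list of booleans; the values shown are hypothetical inputs.

    # Sketch: combine per-group consistency maps to size the recovery target.
    # Each list entry says whether that group's stripe passed its parity check.
    group_a_ok = [True, True, False, True, True]    # hypothetical scan results
    group_b_ok = [True, False, True, True, True]

    both = sum(a and b for a, b in zip(group_a_ok, group_b_ok))
    either = sum(a or b for a, b in zip(group_a_ok, group_b_ok))
    total = min(len(group_a_ok), len(group_b_ok))

    print(f"fully reconstructible stripes: {both}/{total}")
    print(f"partially recoverable stripes: {either - both}/{total}")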

Step 5 — Assemble RAID 60 virtually and extract data

  • Where parity math allows, reconstruct both groups and the RAID 60 stripe in a virtual model.
  • Extract intact data to new storage; do not attempt to “repair” the original array.
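Once each RAID-6 group has been reduced to a flat, reconstructed image, the RAID-0 layer is just an interleave of fixed-size strips. The sketch below re-weaves two reconstructed group images into a single volume image suitable for file-system-level extraction; the strip size, file names, and group order are assumptions, and in practice you would usually pull files out of the virtual model rather than write a full volume copy.

    # Sketch: interleave two reconstructed RAID-6 group images (the RAID-0 layer)
    # into one flat RAID 60 volume image. File names and strip size are assumed.
    STRIPE_SIZE = 64 * 1024

    with open("groupA_reconstructed.img", "rb") as ga, \
         open("groupB_reconstructed.img", "rb") as gb, \
         open("raid60_volume.img", "wb") as out:
        while True:
            strip_a = ga.read(STRIPE_SIZE)
            strip_b = gb.read(STRIPE_SIZE)
            if not strip_a and not strip_b:
                break
            # RAID-0 order assumed: group A strip first, then group B.
            out.write(strip_a)
            out.write(strip_b)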

Diagnostic Overview

  • Device: RAID 60 array (two RAID-6 groups under RAID-0)
  • Observed State: Both RAID-6 groups show Degraded and no RAID 60 virtual disk is presented
  • Likely Cause: Multiple failures and/or incomplete rebuilds in both groups, causing irreconcilable parity and metadata divergence
  • Do NOT: Force foreign imports, rebuild, initialize, replace additional drives, or move disks to another controller for “trial” imports
  • Recommended Action: Document current state, clone all disks, reconstruct each group offline, determine feasible recovery targets, and extract data from a virtual RAID 60 model instead of the live array
