A RAID 50 rebuild begins only when every RAID-5 group passes its pre-rebuild checks.
If even one group fails validation, the controller halts the entire operation at 0% — protecting parity rather than risking collapse.

That’s why you may see:

  • “Rebuild Started”
  • Status: 0%
  • No progress for hours
  • Drives showing Online / Good
  • Logs showing “inconsistent metadata,” “foreign state,” or “pre-check failed”

This page explains what causes RAID 50 rebuilds to freeze at 0%, what the controller is protecting, and how to safely proceed.


Symptoms

  • Rebuild stuck at 0%, not moving
  • One RAID-5 group flagged as:
    • “Inconsistent”
    • “Foreign metadata detected”
    • “Degraded / Pre-check failed”
  • Other groups appear normal
  • The virtual drive (VD) may be:
    • Offline
    • Degraded
    • “Ready for Rebuild” but not progressing
  • Controller logs show:
    • “Rebuild aborted — group not ready”
    • “Consistency check mismatch”
    • “Span validation failed”

A RAID 50 rebuild cannot start unless every group can produce known-good parity.
If one group is uncertain, the rebuild is frozen to prevent cross-group corruption.
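
The logic is an all-or-nothing gate. A minimal Python sketch, with invented field names rather than any real controller API:

    def can_start_rebuild(groups):
        # Every RAID-5 group must pass its pre-checks;
        # a single failure vetoes the entire rebuild.
        for group in groups:
            if not group["parity_valid"]:
                return False, f"group {group['id']}: parity not trustworthy"
            if group["foreign_members"]:
                return False, f"group {group['id']}: foreign state present"
        return True, "all groups validated"

    spans = [
        {"id": 0, "parity_valid": True,  "foreign_members": []},
        {"id": 1, "parity_valid": False, "foreign_members": []},  # one bad group
    ]

    print(can_start_rebuild(spans))  # (False, 'group 1: parity not trustworthy')

One flagged group is enough to leave every healthy group idle at 0%.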


Why One Bad Group Freezes the Whole Rebuild

A RAID 50 is really multiple RAID-5 sets working cooperatively.
If one group’s metadata, parity epoch, or member identity doesn’t match the others, the controller won’t risk writing anything.
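
That cooperation rests on simple math. Inside each group, parity is the byte-wise XOR of the data chunks, so any one missing chunk can be recomputed from the survivors. A toy Python illustration, with four-byte chunks standing in for real stripe units:

    from functools import reduce

    def xor_parity(chunks):
        # Column-wise XOR across equal-length chunks.
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

    d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
    parity = xor_parity([d0, d1, d2])

    # Lose d1: XOR of the survivors plus parity reproduces it exactly.
    assert xor_parity([d0, d2, parity]) == d1

The catch is that the equation only holds if the parity reflects the data actually on disk. Stale parity XORs into plausible-looking garbage, and a RAID 50 rebuild would then stripe that garbage across the whole array, which is exactly what the pre-checks refuse to let happen.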

Typical causes:

1. Metadata mismatch between RAID-5 groups

  • One group shows older parity epoch
  • Another shows newer
  • The controller refuses to merge them (see the epoch sketch after this list)

2. Latent sector errors in a surviving drive

  • A member of one group has unreadable sectors
  • Rebuild pre-check detects the problem and halts at 0%

3. Cache/NVRAM drift

  • Cached writes acknowledged in one group
  • Not in another
  • Controller sees inconsistency and stops

4. Prior aborted rebuild

  • Group A started a rebuild
  • Group B didn’t
  • Now epochs disagree and the controller freezes rebuild

5. Slot mis-order after maintenance

  • A group’s members were reseated
  • Identity/order mismatch triggers pre-check failure

6. Foreign config present on one group

  • Even one drive marked Foreign can block all rebuild activity

7. Controller firmware or NVRAM corruption

  • Partial metadata commit
  • Corruption in map tables
  • Span validation fails and rebuild never starts
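
Causes 1 and 4 come down to the same comparison: each group carries a generation counter for its last committed parity state, and the groups must agree. A hypothetical sketch of that check (field names and formats differ by vendor):

    def find_stale_groups(group_epochs):
        # Any group behind the newest committed epoch is stale.
        newest = max(group_epochs.values())
        return {gid: epoch for gid, epoch in group_epochs.items() if epoch < newest}

    epochs = {"span0": 1042, "span1": 1042, "span2": 1038}
    stale = find_stale_groups(epochs)
    if stale:
        print("pre-check failed, stale groups:", stale)  # {'span2': 1038}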

A RAID 50 rebuild at 0% is not a malfunction — it’s a safety brake.


What NOT to Do

These actions often destroy recoverable RAID 50 data:

  • Do NOT force rebuild
  • Do NOT clear foreign config
  • Do NOT import foreign config blindly
  • Do NOT mark drives Online/Good manually
  • Do NOT delete and recreate the RAID 50
  • Do NOT run filesystem repair tools
  • Do NOT reboot repeatedly trying to “kickstart” it

The unmoving 0% is telling you:
“Parity is not trustworthy — do not continue.”


How to Safely Proceed

ADR-safe steps:

  • Document the current controller state
    • Which group is blocking rebuild
    • Slot → serial mapping
    • Group IDs, parity epochs, and flags
  • Export all metadata
    • RAID config
    • Foreign reports
    • Controller logs
    • NVRAM/cache snapshots (if available)
  • Clone every drive bit-for-bit
    • RAID 50 failures frequently expose latent errors
    • Images allow safe virtual reconstruction
  • Analyze group headers
    • Compare parity epoch across RAID-5 groups
    • Identify mismatched or stale metadata
    • Confirm correct drive order
  • Rebuild virtually (the stitching step is sketched after this list)
    • Reconstruct the failed RAID-5 group on images
    • Validate parity consistency
    • Stitch the RAID-5 groups together into a virtual RAID-0
    • Recover the filesystem offline
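
The stitching step in particular is easy to picture. A deliberately simplified Python sketch operating on image files; the file names, equal-size group images, and the 64 KiB stripe-unit size are all assumptions, and real controller layouts add metadata regions and per-vendor quirks on top:

    CHUNK = 64 * 1024  # assumed stripe-unit size

    def stitch_raid0(group_paths, out_path):
        # Interleave stripe-unit chunks from each reconstructed
        # RAID-5 group image to produce the top-level RAID-0 volume.
        groups = [open(p, "rb") for p in group_paths]
        try:
            with open(out_path, "wb") as out:
                while True:
                    chunks = [g.read(CHUNK) for g in groups]
                    if not any(chunks):
                        break
                    for chunk in chunks:
                        out.write(chunk)
        finally:
            for g in groups:
                g.close()

    # stitch_raid0(["span0.img", "span1.img", "span2.img"], "raid50.img")

The resulting image can then be examined read-only, so filesystem recovery never writes to the original drives.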

ADR’s internal tools validate group-level parity before attempting any rebuild operation.
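
A rough analogue of that group-level validation (a sketch of the principle, not ADR’s actual tooling): in a consistent RAID-5 group, XOR-ing every member at the same offset, data and parity alike, yields zero no matter where the parity rotates. This assumes equal-size member images with any controller metadata regions already trimmed off:

    from functools import reduce

    def first_parity_error(image_paths, chunk=64 * 1024):
        # Scan the group's member images in lockstep and return the byte
        # offset of the first region where data XOR parity is non-zero,
        # or None if the group is consistent end to end.
        files = [open(p, "rb") for p in image_paths]
        offset = 0
        try:
            while True:
                blocks = [f.read(chunk) for f in files]
                if not blocks[0]:
                    return None
                if any(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks)):
                    return offset
                offset += chunk
        finally:
            for f in files:
                f.close()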

Diagnostic Overview

  • Array Type: RAID 50 — RAID-5 Groups Striped in RAID-0
  • Controller State: Rebuild Initiated, Stuck at 0%
  • Likely Cause: Group Metadata Mismatch, Epoch Drift, Latent Sector Errors, or Foreign State
  • Do NOT: Force Rebuild, Clear/Import Foreign Config, or Recreate the Array
  • Recommended Action: Clone All Drives, Extract Group Headers, Validate Parity Epochs, Virtual Rebuild on Images