When RAID 5 Refuses to Heal Itself
You swapped the failed drive.
You expected the rebuild to kick in immediately.
But instead… nothing.
0%.
No movement.
Just a silent, stalled array that refuses to rebuild — and a growing anxiety that something bigger is wrong.
A rebuild that never starts is not a controller bug.
It is your RAID controller warning you that proceeding could destroy the very data it is trying to protect.
This page explains what the controller is seeing, why it stops at 0%, what NOT to do, and how to safely diagnose the real underlying fault.
What You See
- Rebuild shows 0% indefinitely — no progress
- New drive shows Online, Ready, Rebuild, or Unconfigured Good
- Controller logs indicate:
  - “Reconstruction cannot begin”
  - “Inconsistent metadata signatures”
  - “Background initialization incomplete”
  - “Parity verification failed”
- Array appears Degraded but not actively rebuilding
- No new read/write errors — just… nothing
Why It Happens
1. Metadata Mismatch Stops Rebuild at the Starting Line
If the new drive’s metadata does not match the surviving members — even slightly — the controller will refuse to rebuild.
Ref: TN-R5-001 §4
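The pre-rebuild sanity check a controller performs can be sketched roughly as follows. The field names here (array_uuid, seq_no, serial) are illustrative assumptions, not a real on-disk format; every vendor stores metadata differently.

```python
# Sketch: why a metadata mismatch halts a rebuild at 0%.
# All field names are hypothetical stand-ins for vendor-specific metadata.
from dataclasses import dataclass

@dataclass
class MemberMetadata:
    serial: str      # drive serial number
    array_uuid: str  # identity of the array this metadata belongs to
    seq_no: int      # metadata generation counter ("epoch")

def rebuild_allowed(survivors, replacement):
    """Mimic a controller's pre-rebuild consistency check."""
    uuids = {m.array_uuid for m in survivors}
    seqs = {m.seq_no for m in survivors}
    if len(uuids) != 1 or len(seqs) != 1:
        # Survivors disagree among themselves: nothing can be trusted.
        return False, "inconsistent metadata signatures across survivors"
    if (replacement.array_uuid == survivors[0].array_uuid
            and replacement.seq_no != survivors[0].seq_no):
        # The "new" drive carries stale metadata for this same array.
        return False, "replacement carries stale metadata for this array"
    return True, "ok"
```

A single stale epoch on any member is enough to fail this check, which is exactly why the rebuild never reaches 1%.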
2. Latent Sector Errors (LSEs) on Surviving Drives
If the controller detects even a single unreadable sector on a survivor, it cannot safely regenerate the missing data for that stripe.
The rebuild halts before it begins.
Ref: TN-R5-001 §3
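The arithmetic behind this is simple: a RAID 5 rebuild regenerates each block of the replaced drive by XORing the corresponding blocks of every survivor. A minimal sketch, with an unreadable sector modeled as None:

```python
# Sketch: one unreadable survivor block makes a stripe unrecoverable,
# so a cautious controller refuses to start writing at all.
from functools import reduce

def rebuild_block(survivor_blocks):
    """XOR all survivor blocks to regenerate the missing member's block."""
    if any(b is None for b in survivor_blocks):
        raise IOError("latent sector error on survivor: stripe cannot be regenerated")
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), survivor_blocks)

# Healthy stripe: data XOR parity gives back the missing block.
d0, d1 = b"\x0f\x0f", b"\xf0\xf0"
parity = bytes(x ^ y for x, y in zip(d0, d1))  # parity = d0 ^ d1
assert rebuild_block([d0, parity]) == d1       # d1 regenerated from survivors
```

Every stripe needs every survivor readable; there is no second parity to fall back on in RAID 5.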
3. Cache/NVRAM Epoch Drift After Power Events
If the controller’s cached stripe map doesn’t match what’s on disk, it will freeze rebuild to avoid parity overwrite.
Ref: TN-R5-001 §6
4. Background Initialization or Consistency Check Previously Interrupted
If a background process was aborted earlier, the controller forces a pause until stripe coherence is verified.
Ref: TN-R5-001 §6
5. Wrong Slot, Wrong Drive, or “Near-Miss” Geometry
Using a replacement drive whose sector size, reported capacity, or timing characteristics differ even slightly from the original member can freeze the rebuild at 0%.
Ref: TN-R5-001 §2
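The geometry check can be sketched like this. The dictionary keys are illustrative assumptions; in practice these values come from drive identification data.

```python
# Sketch: the "near-miss" geometry check a controller effectively performs
# before admitting a replacement drive. Field names are hypothetical.
def geometry_compatible(survivor, replacement):
    """Return a list of reasons the replacement cannot join the array."""
    problems = []
    if replacement["logical_sector_bytes"] != survivor["logical_sector_bytes"]:
        problems.append("sector size mismatch (e.g. 512e vs 4Kn)")
    if replacement["capacity_bytes"] < survivor["capacity_bytes"]:
        problems.append("replacement smaller than existing member")
    return problems
```

A 4Kn drive dropped into a 512-byte-sector array, or a nominally "same size" drive that reports a few megabytes less, both land in the stalled-at-0% state described above.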
What NOT To Do
- Do NOT force the rebuild
- Do NOT clear the foreign configuration before cloning
- Do NOT move drives to different slots
- Do NOT recreate the array with “same settings”
- Do NOT assume the new drive is the problem
- Do NOT reboot repeatedly (this increases epoch drift)
Any of these actions risks parity overwrite — and permanent data loss.
What You CAN Do
- Clone all drives before attempting any changes
- Export controller logs and current configuration
- Map drives by slot → serial → WWN
- Test surviving members for pending or unreadable sectors
- Verify the new disk’s reported geometry and firmware match
- Validate metadata signatures across all drives
- Reconstruct the stripe layout offline
- Start recovery from the verified virtual RAID image — not the degraded hardware
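The last step, reconstructing the stripe layout offline, boils down to mapping each virtual stripe to the member that holds its parity and data. A minimal sketch, assuming a left-symmetric rotation (the common mdadm default; hardware controllers may use other layouts, which is exactly what must be verified from the cloned images):

```python
# Sketch: left-symmetric RAID 5 layout mapping, used to rebuild a virtual
# image from cloned member images without touching the degraded hardware.
def left_symmetric_map(stripe, n_disks):
    """Return (parity_disk, [data_disk for each chunk, in order])."""
    # Parity rotates backwards: last disk on stripe 0, then one earlier each stripe.
    parity = (n_disks - 1) - (stripe % n_disks)
    # Data chunks start on the disk after parity and wrap around.
    data = [(parity + 1 + i) % n_disks for i in range(n_disks - 1)]
    return parity, data

# 3-disk array: stripe 0 puts parity on disk 2, data on disks 0 and 1.
assert left_symmetric_map(0, 3) == (2, [0, 1])
```

Running this mapping against the cloned images (and checking that each stripe's blocks actually XOR to zero) confirms the layout before any recovery write is attempted.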
Rebuilds don’t stall for no reason.
They stall because something deeper needs attention.
Diagnostic Overview
- Array Type: RAID 5 — Single Parity Set
- Controller State: Rebuild Paused at 0%
- Likely Cause: LSEs on Survivors, Metadata Epoch Drift, or Disk Geometry Mismatch
- Do NOT: Force Rebuild or Clear Foreign Config Before Imaging
- Recommended Action: Clone Members, Validate Metadata, Verify Geometry, Reconstruct Layout Offline