This is one of the most unsettling Synology moments.

You log into DSM expecting everything to be fine — all your drives show Healthy, no SMART alerts, no red icons.
But at the top of Storage Manager, you see the warning:

“RAID Group Degraded.”

Nothing failed.
Nothing was replaced.
Nothing obvious happened.

Yet Synology is treating your perfectly normal-looking array as damaged.

And here’s the truth Synology doesn’t explain clearly:
A RAID group can degrade even when every disk is technically healthy — because the problem lives in metadata, not in the drives.


What you see

  • RAID Group: Degraded
  • All drives show Normal/Healthy
  • No SMART warnings, no bad sectors reported
  • No recorded drive drops or failures
  • DSM logs show warnings such as:
    • “I/O error during rebuild”
    • “System partition mismatch”
    • “Write failure”
    • “mdadm: resync aborted”
  • Storage Manager refuses to repair, or repair goes nowhere

Synology arrays depend on Linux MD (mdadm) metadata, DSM's own storage layers, and system partitions mirrored across every drive.
A RAID group degrades with no failed drives when:

Metadata drift occurs across members

  • Slightly out-of-sync MD event counters (the superblock "Events" field)
  • Partially written superblocks
  • DSM updates interrupted mid-write
  • Cache not flushed on shutdown
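This drift is visible directly in each member's MD superblock. A minimal sketch of comparing the "Events" counter across members, assuming text in the format printed by `mdadm --examine` (the sample fragments below are illustrative, not from a real array):

```python
# Compare the MD "Events" counter across array members.
# A member whose counter lags the others has drifted, and mdadm
# will refuse to assemble the array cleanly with it.
import re

def event_counts(examine_outputs):
    """Map device name -> Events counter parsed from
    `mdadm --examine` text (one output string per member)."""
    counts = {}
    for dev, text in examine_outputs.items():
        m = re.search(r"Events\s*:\s*(\d+)", text)
        if m:
            counts[dev] = int(m.group(1))
    return counts

def drifted_members(examine_outputs):
    """Return members whose Events counter lags the maximum."""
    counts = event_counts(examine_outputs)
    newest = max(counts.values())
    return {d: c for d, c in counts.items() if c < newest}

# Illustrative output fragments, not captured from a real NAS
sample = {
    "/dev/sda3": "       Events : 48213",
    "/dev/sdb3": "       Events : 48213",
    "/dev/sdc3": "       Events : 48207",   # lagging member
}
print(drifted_members(sample))   # {'/dev/sdc3': 48207}
```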

A drive briefly dropped and came back

Even a 1-second disconnect can:

  • Un-sync system partitions
  • Flag dirty/uncertain regions
  • Cause mdadm to mark the array degraded for safety
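A transient drop usually leaves traces in the kernel log even when SMART stays clean. A sketch of scanning dmesg-style lines for common libata link-reset messages (the sample lines are illustrative, not from a real device):

```python
# Flag kernel-log lines that suggest a SATA link briefly dropped.
# The patterns are common libata messages; a match here can explain
# a degraded array with no failed drive.
import re

LINK_EVENTS = re.compile(
    r"(hard resetting link|link is slow to respond|"
    r"SATA link down|exception Emask)", re.IGNORECASE)

def link_drop_lines(dmesg_lines):
    """Return lines suggesting a transient disconnect."""
    return [l for l in dmesg_lines if LINK_EVENTS.search(l)]

# Illustrative log lines, not captured from a real NAS
sample_log = [
    "[1200.1] ata3.00: exception Emask 0x10 SAct 0x0 SErr 0x4050000",
    "[1200.2] ata3: hard resetting link",
    "[1201.5] ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)",
    "[1400.0] md: md2: resync done.",
]
print(link_drop_lines(sample_log))  # the two ata3 exception/reset lines
```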

Sync failure during background repair

If a drive returns a single UNC (uncorrectable) read error during a routine parity check:

  • mdadm halts
  • DSM surfaces “Degraded”
  • Drives remain “Healthy” — misleading
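This is why "Healthy" can mislead: the overall SMART verdict can read PASSED while a pending (unreadable) sector is exactly what aborted the resync. A sketch of pulling attribute 197 out of smartctl-style output (the sample fragment is illustrative, not from a real drive):

```python
# A drive can report an overall SMART status of PASSED while still
# carrying Current_Pending_Sector counts -- sectors the drive could
# not read, which is enough to abort an mdadm resync.
import re

def pending_sectors(smartctl_text):
    """Extract the raw value of attribute 197 (Current_Pending_Sector)."""
    m = re.search(r"197\s+Current_Pending_Sector.*?(\d+)\s*$",
                  smartctl_text, re.MULTILINE)
    return int(m.group(1)) if m else 0

# Illustrative smartctl -A fragment, not from a real drive
sample = """\
SMART overall-health self-assessment test result: PASSED
197 Current_Pending_Sector  0x0032   100   100   000    Old_age   Always       -       8
"""
print(pending_sectors(sample))  # 8
```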

Filesystem layer conflict

Btrfs or ext4 metadata may disagree with RAID layer.
Drives look fine — but the MD layer knows parity is not trustworthy.


What NOT to do

  • Don’t remove/re-add drives “because they’re healthy”
  • Don’t run DSM “Repair” repeatedly
  • Don’t reboot in hopes it will clear the message
  • Don’t try to rebuild from Storage Manager without understanding the mismatch
  • Don’t assume SMART clean = array safe

These actions risk overwriting metadata that proves which blocks changed.


What to do instead

  • Check DSM logs for: I/O errors, aborted resync, system partition mismatch
  • Check the tail of dmesg for controller, SATA, or timeout warnings
  • Export the MD configuration (mdadm --detail --scan)
  • Clone each drive BEFORE any forced operation
  • Validate:
    • mdadm UUIDs
    • Superblock epochs (mdadm Events counters)
    • Member order
    • System partition agreement
  • Check SMART pending sectors (not just reallocated)
  • Determine if the array was mid-write during power loss or update
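The validation steps above can be sketched as one consistency check over fields parsed from each member's `mdadm --examine` output. Field names follow mdadm v1.x superblocks; the sample values are illustrative, not from a real array:

```python
# Cross-check Array UUID, Events counter, and Device Role across
# members parsed from `mdadm --examine`. Any disagreement here is
# the kind of mismatch that leaves DSM reporting "Degraded" while
# every drive still shows "Healthy".
import re

FIELDS = {
    "uuid": r"Array UUID\s*:\s*(\S+)",
    "events": r"Events\s*:\s*(\d+)",
    "role": r"Device Role\s*:\s*(.+)",
}

def parse_member(text):
    return {k: (m.group(1).strip() if (m := re.search(p, text)) else None)
            for k, p in FIELDS.items()}

def consistency_report(members):
    """members: dict of device -> --examine text. Returns problems found."""
    parsed = {d: parse_member(t) for d, t in members.items()}
    problems = []
    uuids = {p["uuid"] for p in parsed.values()}
    if len(uuids) > 1:
        problems.append(f"UUID mismatch: {uuids}")
    events = {p["events"] for p in parsed.values()}
    if len(events) > 1:
        problems.append(f"Events drift: {events}")
    roles = [p["role"] for p in parsed.values()]
    if len(set(roles)) != len(roles):
        problems.append(f"Duplicate device roles: {roles}")
    return problems

# Illustrative fragments, not captured from a real NAS
members = {
    "/dev/sda3": "Array UUID : 1f2e:aa\nEvents : 900\nDevice Role : Active device 0",
    "/dev/sdb3": "Array UUID : 1f2e:aa\nEvents : 884\nDevice Role : Active device 1",
}
print(consistency_report(members))  # flags the Events drift between members
```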

Why this matters

  • Drives may be healthy but disagree about the last known write
  • Parity mismatch means the RAID can’t rebuild safely
  • The array is still recoverable — but only if metadata is preserved
  • Forced operations can overwrite the only evidence of parity alignment

Synology’s caution is correct: it’s protecting your data by refusing to guess.

Diagnostic Overview

  • Device: Synology NAS (DSM / Linux MD)
  • Observed State: RAID Group Degraded — All Drives Healthy
  • Likely Cause: Metadata drift, partial writes, or system partition mismatch
  • Do NOT: Re-add drives, repeatedly repair, or reboot hoping it will clear
  • Recommended Action: Export MD metadata, clone drives, validate epochs and alignment