When DSM Reports “Healthy” — But Your Shared Folders Are Empty
This is one of the scariest Synology moments.
DSM boots normally.
Storage Manager shows Healthy.
SMART tests pass.
No red icons.
But when you browse your shared folders:
No files.
Sometimes whole shares are missing.
Sometimes the NAS mounts, but the directories appear empty.
It doesn’t make sense — the hardware is fine, the RAID group looks normal, and DSM isn’t warning about corruption. Yet something is clearly wrong.
Here’s the truth Synology doesn’t tell you:
A “Healthy” status doesn’t mean your filesystem is intact — it only means the RAID layer isn’t reporting a failure.
1. What You See
- RAID Group: Healthy
- Storage Pool: Healthy
- Shared folders: Missing, empty, or unmounted
- DSM won’t mount specific volumes or says “Folder path does not exist”
- File Services still run — but expose nothing
- Btrfs volumes may show:
  - No snapshots
  - No shared-folder metadata
  - No visible subvolumes
- Logs show entries like these (you can confirm them over SSH, as shown below):
  - “Failed to mount volume”
  - “btrfs: open_ctree failed”
  - “Superblock mismatch”
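If you have SSH access, a minimal sketch for finding these entries, assuming stock DSM log locations:

```bash
# Kernel ring buffer: mount failures surface here first
dmesg | grep -iE 'btrfs|ext4|open_ctree|superblock'

# DSM's system log records the same failures with timestamps
grep -iE 'mount|open_ctree|superblock' /var/log/messages | tail -n 50
```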
2. Why It Happens (Real Synology Behavior)
Filesystem corruption at the Btrfs or EXT4 layer
- RAID can be perfectly healthy
- But the filesystem tree/root/superblocks can still be damaged
- DSM reports “Healthy” because RAID health ≠ filesystem health (demonstrated below)
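The two layers can be compared directly over SSH. A hedged sketch, assuming the data array is /dev/md2 (check /proc/mdstat for the real name; on some models the volume sits behind LVM or a cache device instead):

```bash
# RAID layer: mdadm reports the array as clean and active
mdadm --detail /dev/md2 | grep -E 'State|Active Devices|Failed Devices'

# Filesystem layer: a read-only metadata check can still fail.
# --readonly never writes, but the conservative path is to run
# this against a cloned image rather than the original array.
btrfs check --readonly /dev/md2
```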
System partitions out of sync
A brief drive drop may:
- Cause a metadata mismatch between members
- Prevent DSM from mounting system-level structures even though RAID is clean (see the check below)
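A past drive drop leaves a trace: the md event counter on the dropped member lags behind the others. A sketch, assuming two data members /dev/sda3 and /dev/sdb3 (adjust the partitions to your layout):

```bash
# Mismatched event counts mean one disk missed writes at some
# point, even if the array later reassembled and shows Healthy.
mdadm --examine /dev/sda3 /dev/sdb3 | grep -E '^/dev/|Events'
```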
Unexpected shutdown during snapshot/metadata commit
- Btrfs is vulnerable to mid-transaction inconsistency
- Leads to a “healthy RAID, broken tree root” condition (see the superblock dump below)
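Btrfs keeps backup tree roots for exactly this failure mode, and you can inspect them without mounting anything. A hedged sketch, again assuming the filesystem lives on /dev/md2:

```bash
# Dump the full superblock, including the backup root slots.
# If the current tree root is torn, a backup root one or two
# generations older is often still intact.
btrfs inspect-internal dump-super -f /dev/md2 | grep -E 'generation|backup_tree_root'
```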
DSM update partially applied
If DSM updates system partitions but fails before writing volume metadata:
- Volumes appear empty
- Shares disappear
- RAID still shows “Healthy”
Volume mounted read-only
To prevent further damage, DSM may mount a Btrfs volume read-only (RO).
Users then see “empty folders” because the writable layer never attaches; the one-line check below shows the actual mount state.
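A minimal check, assuming the volume mounts at /volume1:

```bash
# The options field tells the story: "ro" means DSM attached
# the volume read-only; no line at all means it never mounted.
grep ' /volume1 ' /proc/mounts
```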
3. What NOT To Do
- Don’t format the volume just because DSM recommends it
- Don’t delete/recreate shared folders (destroys metadata that proves layout)
- Don’t run fsck.ext4 or btrfs check --repair without cloning
- Don’t reset DSM
- Don’t initialize storage pool unless your goal is permanent data loss
DSM’s repair prompts are dangerous when the filesystem root is unstable.
4. What You CAN Do
- Check /var/log/messages and the kernel log for Btrfs/EXT4 mount errors
- Confirm whether DSM mounted the volume read-only
- Export MD metadata and RAID layout (even though RAID looks fine)
- Clone drives BEFORE any filesystem-level repair
- Validate superblocks, tree roots, checksums
- Use btrfs-progs offline against cloned images, never on live NAS media (see the sketch after this list)
- Recover snapshots directly from raw on-disk structures if the metadata tree partially survives
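For the cloning and offline-validation steps, here is a hedged end-to-end sketch. It images the assembled array device rather than each member disk to keep the example short; /dev/md2, member /dev/sda3, and the external target /mnt/backup are all placeholders for your actual layout, and ddrescue is not part of stock DSM (run it from a recovery Linux boot or an installed package):

```bash
# 1. Export the RAID layout and member metadata for reference,
#    even though the array currently reports Healthy.
mdadm --detail /dev/md2   > /mnt/backup/md2-detail.txt
mdadm --examine /dev/sda3 > /mnt/backup/sda3-examine.txt

# 2. Clone before any filesystem-level repair. ddrescue retries
#    bad sectors and keeps a resumable map file.
ddrescue /dev/md2 /mnt/backup/volume.img /mnt/backup/volume.map

# 3. Validate the clone offline. Neither command writes:
#    --readonly checks tree roots and checksums; dump-super
#    shows whether the backup roots survived.
btrfs check --readonly /mnt/backup/volume.img
btrfs inspect-internal dump-super -f /mnt/backup/volume.img
```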
5. What This Means for Your Data
- Your data is likely still present on disk
- RAID being healthy is a good sign — the blocks aren’t missing
- The filesystem index (the map of where files live) is damaged
- Recovery is high-probability if no write operations overwrite metadata
- Forced DSM repair often permanently destroys the remaining index structures
This problem looks catastrophic, but it’s usually recoverable with correct triage.
Diagnostic Overview
- Device: Synology NAS (DSM / Btrfs or EXT4)
- Observed State: RAID Healthy — Shared Folders Empty
- Likely Cause: Filesystem metadata corruption, system partition mismatch, or read-only mount
- Do NOT: Format, recreate shares, reset DSM, or run repair tools on live disks
- Recommended Action: Clone drives, check logs, validate superblocks and FS tree roots offline