How to Create RAID 1 Arrays in Fedora

I recently became the proud owner of an Apple MacBook Pro with an M2 Pro chip, which meant my old PC got a new life as a home server running Fedora. The server houses six hard drives that I configured into three RAID 1 arrays for redundant storage.

This guide walks through the complete process: creating RAID 1 arrays with mdadm, formatting them, configuring auto-mount on startup, and handling drive failures.

What is RAID 1?

RAID 1 (mirroring) writes identical data to two drives simultaneously. If one drive fails, the other continues working with no data loss. You sacrifice 50% of your total storage capacity for redundancy.
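The 50% figure is simple arithmetic: usable capacity is the size of the smaller member, not the sum of both. A trivial sketch (sizes here are illustrative):

```shell
# RAID 1 usable capacity is the smaller member, not the sum.
disk_a=3000   # GB
disk_b=3000   # GB
usable=$(( disk_a < disk_b ? disk_a : disk_b ))
raw=$(( disk_a + disk_b ))
printf 'raw %d GB -> usable %d GB\n' "$raw" "$usable"
```

This is also why pairing a larger drive with a smaller one wastes the difference: the mirror can only be as large as its smallest member.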

Important: RAID is not a backup. It protects against drive failure, not accidental deletion, ransomware, or file corruption. Always maintain separate backups.

My Setup

Array  Capacity  Drives        Purpose
md0    3 TB      2x 3TB HDD    Media storage
md1    750 GB    2x 750GB HDD  Documents
md2    18 TB     2x 18TB HDD   Archives

Server specs:

  • OS: Fedora 41 (Workstation)
  • System drive: 500GB NVMe SSD (separate from RAID arrays)

Prerequisites

  1. Install mdadm (usually pre-installed on Fedora):
sudo dnf install mdadm
  2. Identify your drives - you need pairs of drives with the same capacity:
lsblk -o NAME,SIZE,TYPE,MODEL

Example output:

NAME      SIZE TYPE MODEL
sda       2.7T disk WDC WD30EFRX-68E
sdb       2.7T disk WDC WD30EFRX-68E
nvme0n1 465.8G disk Samsung 980 PRO
sdd      16.4T disk ST18000NM000J-2T
sde      16.4T disk ST18000NM000J-2T
sdf     698.6G disk WDC WD7500BPVT-2
sdg     698.6G disk WDC WD7500BPVT-2

Warning: Double-check drive identifiers before proceeding. Using the wrong drive will destroy data. Drive letters can change between reboots, so confirm each disk by its MODEL column in lsblk or by the serial-number symlinks under /dev/disk/by-id/, which remain stable across reboots.

Overview: Creating Multiple Arrays

When setting up multiple RAID arrays, it's more efficient to:

  1. Partition ALL drives first
  2. Create ALL arrays
  3. Wait for initial sync (or proceed with caution)
  4. Format ALL arrays
  5. Create ALL mount points
  6. Configure mdadm.conf ONCE
  7. Configure fstab ONCE
  8. Run dracut ONCE

This guide follows that workflow.


Step 1: Partition All Drives

Each drive needs a partition before creating the RAID array. We'll create a single partition spanning each entire drive.

Partition the 3TB drives (sda and sdb)

sudo fdisk /dev/sda

Inside fdisk:

  1. Press g to create a new GPT partition table
  2. Press n to create a new partition
  3. Press Enter three times to accept defaults (partition 1, first sector, last sector)
  4. Press t to change partition type
  5. Type 29 for "Linux RAID" (press L to list all codes; the number can vary between fdisk versions, and recent fdisk also accepts the alias raid)
  6. Press w to write changes and exit

Repeat for the second drive:

sudo fdisk /dev/sdb

Partition the 750GB drives (sdf and sdg)

sudo fdisk /dev/sdf
# Same steps: g, n, Enter×3, t, 29, w

sudo fdisk /dev/sdg
# Same steps: g, n, Enter×3, t, 29, w

Partition the 18TB drives (sdd and sde)

sudo fdisk /dev/sdd
# Same steps: g, n, Enter×3, t, 29, w

sudo fdisk /dev/sde
# Same steps: g, n, Enter×3, t, 29, w
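If walking through interactive fdisk six times feels tedious, the same layout can be scripted with sgdisk (from the gdisk package), where fd00 is the Linux RAID type code. Shown here as a dry run that only prints the commands; remove the echo once you have verified the device list:

```shell
# Dry run: print the sgdisk command for each RAID member drive.
# --zap-all wipes any existing partition table, --new=1:0:0 creates one
# partition spanning the whole disk, --typecode=1:fd00 marks it Linux RAID.
for disk in /dev/sda /dev/sdb /dev/sdd /dev/sde /dev/sdf /dev/sdg; do
    echo sgdisk --zap-all --new=1:0:0 --typecode=1:fd00 "$disk"
done
```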

Verify all partitions

lsblk /dev/sda /dev/sdb /dev/sdd /dev/sde /dev/sdf /dev/sdg

Expected output:

NAME   SIZE TYPE
sda    2.7T disk
└─sda1 2.7T part
sdb    2.7T disk
└─sdb1 2.7T part
sdd   16.4T disk
└─sdd1 16.4T part
sde   16.4T disk
└─sde1 16.4T part
sdf  698.6G disk
└─sdf1 698.6G part
sdg  698.6G disk
└─sdg1 698.6G part

Step 2: Create All RAID 1 Arrays

Create md0 (3TB array)

sudo mdadm --create /dev/md0 \
    --level=1 \
    --raid-devices=2 \
    --bitmap=internal \
    --name="RAID-1_3TB" \
    /dev/sda1 /dev/sdb1

When prompted "Continue creating array?", type y and press Enter.

Create md1 (750GB array)

sudo mdadm --create /dev/md1 \
    --level=1 \
    --raid-devices=2 \
    --bitmap=internal \
    --name="RAID-1_750GB" \
    /dev/sdf1 /dev/sdg1

Create md2 (18TB array)

sudo mdadm --create /dev/md2 \
    --level=1 \
    --raid-devices=2 \
    --bitmap=internal \
    --name="RAID-1_18TB" \
    /dev/sdd1 /dev/sde1

Parameters explained:

  • --create /dev/mdX — Create array with this device name
  • --level=1 — RAID level 1 (mirror)
  • --raid-devices=2 — Number of drives in the array
  • --bitmap=internal — Enable write-intent bitmap (faster resync after unclean shutdown)
  • --name="..." — Human-readable name for the array

Verify all arrays

cat /proc/mdstat

Output:

Personalities : [raid1]
md2 : active raid1 sde1[1] sdd1[0]
      17578193920 blocks super 1.2 [2/2] [UU]
      [>....................]  resync =  0.1% (17578880/17578193920) finish=1823.4min speed=160652K/sec
      bitmap: 131/131 pages [524KB], 65536KB chunk

md1 : active raid1 sdg1[1] sdf1[0]
      732440576 blocks super 1.2 [2/2] [UU]
      bitmap: 0/6 pages [0KB], 65536KB chunk

md0 : active raid1 sdb1[1] sda1[0]
      2930132992 blocks super 1.2 [2/2] [UU]
      bitmap: 0/22 pages [0KB], 65536KB chunk

unused devices: <none>

The [UU] indicates both drives are active. During initial sync, you'll see a progress indicator and estimated completion time.

Important: Initial Sync Time

Array Size  Approximate Sync Time
750 GB      1-2 hours
3 TB        4-8 hours
18 TB       24-48 hours
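These figures follow from array size divided by sustained resync throughput. A back-of-the-envelope estimate for the 18 TB pair, assuming roughly 160 MB/s (the speed shown in the mdstat output earlier; real throughput varies with drive model and concurrent I/O):

```shell
# Rough resync estimate: size (GB) * 1000 / speed (MB/s) = seconds.
size_gb=18000
speed_mb=160
seconds=$(( size_gb * 1000 / speed_mb ))
printf 'about %d hours\n' $(( seconds / 3600 ))
```

Resync speed usually drops toward the inner tracks of a spinning disk, so the real number lands toward the upper end of the table.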

You CAN use the array during initial sync, but:

  • Performance will be reduced
  • If a drive fails during sync, you may lose data
  • It's safer to wait for sync to complete before storing important data

Monitor sync progress:

watch cat /proc/mdstat

Detailed array information

sudo mdadm --detail /dev/md0
sudo mdadm --detail /dev/md1
sudo mdadm --detail /dev/md2

Step 3: Create Filesystems

Format all arrays with ext4:

sudo mkfs.ext4 -L "RAID-1_3TB" /dev/md0
sudo mkfs.ext4 -L "RAID-1_750GB" /dev/md1
sudo mkfs.ext4 -L "RAID-1_18TB" /dev/md2

The -L flag sets a filesystem label, making it easier to identify each array.

Get UUIDs for all arrays

sudo blkid /dev/md0 /dev/md1 /dev/md2

Output:

/dev/md0: LABEL="RAID-1_3TB" UUID="f47ac10b-58cc-4372-a567-0e02b2c3d479" TYPE="ext4"
/dev/md1: LABEL="RAID-1_750GB" UUID="7c9e6679-7425-40de-944b-e07fc1f90ae7" TYPE="ext4"
/dev/md2: LABEL="RAID-1_18TB" UUID="550e8400-e29b-41d4-a716-446655440000" TYPE="ext4"

Save these UUIDs—you'll need them for auto-mounting.


Step 4: Create Mount Points and Mount

# Create all mount points
sudo mkdir -p /mnt/raid1-3tb
sudo mkdir -p /mnt/raid1-750gb
sudo mkdir -p /mnt/raid1-18tb

# Mount all arrays
sudo mount /dev/md0 /mnt/raid1-3tb
sudo mount /dev/md1 /mnt/raid1-750gb
sudo mount /dev/md2 /mnt/raid1-18tb

# Verify
df -h /mnt/raid1-*

Step 5: Configure Auto-Mount on Startup

Save mdadm configuration

Generate the configuration for ALL arrays at once:

sudo mdadm --detail --scan | sudo tee /etc/mdadm.conf

Your /etc/mdadm.conf should now contain:

ARRAY /dev/md/RAID-1_3TB metadata=1.2 UUID=8f2c1a3b:4d5e6f70:81a2b3c4:d5e6f798
ARRAY /dev/md/RAID-1_750GB metadata=1.2 UUID=1a2b3c4d:5e6f7081:92a3b4c5:d6e7f809
ARRAY /dev/md/RAID-1_18TB metadata=1.2 UUID=2b3c4d5e:6f708192:a3b4c5d6:e7f8091a

Note: these are the arrays' own UUIDs from the md superblocks, not the filesystem UUIDs that blkid reported (they also use a colon-separated format). Use the blkid UUIDs in /etc/fstab, not these.

Regenerate initramfs

sudo dracut --force

This ensures the RAID configuration is included in the boot image.

Add all arrays to /etc/fstab

Edit fstab:

sudo nano /etc/fstab

Add these lines (use your actual UUIDs from blkid):

# RAID 1 Arrays
UUID=f47ac10b-58cc-4372-a567-0e02b2c3d479  /mnt/raid1-3tb    ext4  defaults,nofail  0  2
UUID=7c9e6679-7425-40de-944b-e07fc1f90ae7  /mnt/raid1-750gb  ext4  defaults,nofail  0  2
UUID=550e8400-e29b-41d4-a716-446655440000  /mnt/raid1-18tb   ext4  defaults,nofail  0  2

Options explained:

  • defaults — Standard mount options (rw, suid, dev, exec, auto, nouser, async)
  • nofail — Boot continues even if the array fails to mount (prevents boot failure if a drive dies)
  • 0 — Don't dump (backup) this filesystem
  • 2 — Filesystem check order (2 = check after root filesystem)
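To avoid typos when copying long UUIDs by hand, the entries can be generated instead. A small sketch (the helper name make_fstab_line is mine; the UUIDs are the example values from the blkid output above):

```shell
# Emit an fstab entry for an ext4 array, given its filesystem UUID
# and mount point, using the same options as in the guide.
make_fstab_line() {
    printf 'UUID=%s  %s  ext4  defaults,nofail  0  2\n' "$1" "$2"
}

make_fstab_line f47ac10b-58cc-4372-a567-0e02b2c3d479 /mnt/raid1-3tb
make_fstab_line 7c9e6679-7425-40de-944b-e07fc1f90ae7 /mnt/raid1-750gb
make_fstab_line 550e8400-e29b-41d4-a716-446655440000 /mnt/raid1-18tb
```

Review the output, then append it with sudo tee -a /etc/fstab.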

Test the configuration

# Unmount all arrays
sudo umount /mnt/raid1-3tb
sudo umount /mnt/raid1-750gb
sudo umount /mnt/raid1-18tb

# Test fstab (mounts all entries)
sudo mount -a

# Verify all mounted
df -h /mnt/raid1-*

Step 6: Set Permissions

By default, mount points are owned by root. To allow your user to write:

# Change ownership of all arrays
sudo chown -R $USER:$USER /mnt/raid1-3tb
sudo chown -R $USER:$USER /mnt/raid1-750gb
sudo chown -R $USER:$USER /mnt/raid1-18tb

Or create shared folders with specific permissions:

sudo mkdir /mnt/raid1-3tb/shared
sudo chown $USER:$USER /mnt/raid1-3tb/shared

Monitoring and Maintenance

Check array status

Quick status of all arrays:

cat /proc/mdstat

Detailed status of all arrays:

sudo mdadm --detail /dev/md0 /dev/md1 /dev/md2

Or check a specific array:

sudo mdadm --detail /dev/md0

Enable email notifications

Configure mdadm to send email alerts on failures. Edit the config:

sudo nano /etc/mdadm.conf

Add at the top of the file:

MAILADDR your-email@example.com

Then enable the monitoring service:

sudo systemctl enable mdmonitor
sudo systemctl start mdmonitor

To confirm that alerts actually reach you, mdadm can send a test message for each array: sudo mdadm --monitor --scan --oneshot --test

Scheduled checks

Fedora's mdadm package ships systemd timers that run periodic RAID consistency checks; depending on the install they may need to be enabled. Verify:

systemctl status mdcheck_start.timer

Handling Drive Failures

Detecting a failed drive

A failed drive shows as [U_] instead of [UU]:

cat /proc/mdstat
md0 : active raid1 sdb1[1] sda1[0](F)
      2930132992 blocks [2/1] [U_]

The (F) marks the failed drive, and [U_] shows only one drive is active.
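Spotting the underscore by eye works, but it is also easy to script. A sketch that flags degraded arrays, run here against a saved sample of /proc/mdstat so it is self-contained (on a live system, point the awk at /proc/mdstat itself):

```shell
# Flag degraded md arrays: any status like [U_] contains an underscore.
# The sample text below stands in for /proc/mdstat.
sample='md0 : active raid1 sdb1[1] sda1[0](F)
      2930132992 blocks [2/1] [U_]
md1 : active raid1 sdg1[1] sdf1[0]
      732440576 blocks [2/2] [UU]'

result=$(printf '%s\n' "$sample" | awk '
    /^md/ { array = $1 }         # remember which array the status belongs to
    /\[[U_]+\]$/ {               # status lines end with [UU], [U_], ...
        if ($NF ~ /_/) print array " DEGRADED " $NF
        else           print array " ok " $NF
    }')
printf '%s\n' "$result"
```

Wired to the real file and a mail or notification command, this makes a serviceable cron check on systems where mdmonitor is not running.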

Removing a failed drive

# Mark as failed (if not already)
sudo mdadm --manage /dev/md0 --fail /dev/sda1

# Remove from array
sudo mdadm --manage /dev/md0 --remove /dev/sda1

Replacing a drive

  1. Physically replace the failed drive with a new one of equal or greater size

  2. Identify the new drive:

lsblk -o NAME,SIZE,TYPE,MODEL
  3. Partition the new drive (same as Step 1):
sudo fdisk /dev/sda
# Press: g, n, Enter×3, t, 29, w
  4. Add to array:
sudo mdadm --manage /dev/md0 --add /dev/sda1
  5. Monitor rebuild progress:
watch cat /proc/mdstat

The rebuild can take hours for large drives. The array remains usable during rebuild, but with reduced redundancy—if the second drive fails during rebuild, you lose everything.

Rebuild speed

Check current rebuild speed limits:

cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max

Temporarily increase for faster rebuild (values in KB/s):

echo 200000 | sudo tee /proc/sys/dev/raid/speed_limit_min
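The echo above only lasts until reboot. To make the higher floor persistent, a sysctl fragment can be dropped in place (the filename is my choice; the key maps to /proc/sys/dev/raid/speed_limit_min):

```
# /etc/sysctl.d/90-md-resync.conf (illustrative filename)
# Minimum per-device md resync speed, in KB/s
dev.raid.speed_limit_min = 200000
```

Bear in mind that a high minimum lets resync crowd out normal I/O, so a permanent floor this aggressive is only sensible on a box that can tolerate it.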

Troubleshooting

Array not assembling on boot

  1. Check mdadm.conf has correct entries:
sudo mdadm --detail --scan
cat /etc/mdadm.conf
  2. Regenerate initramfs:
sudo dracut --force
  3. Check for UUID mismatches between mdadm.conf and actual arrays

"mdadm: /dev/md0 not identified in config file"

Update the config file (note that tee overwrites it, so re-add any MAILADDR line afterwards):

sudo mdadm --detail --scan | sudo tee /etc/mdadm.conf
sudo dracut --force

Array shows as inactive

Force assembly:

sudo mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1

Checking filesystem integrity

# Unmount first
sudo umount /mnt/raid1-3tb

# Check filesystem
sudo e2fsck -f /dev/md0

# Remount
sudo mount /dev/md0 /mnt/raid1-3tb

Quick Reference

Task                 Command
Check all arrays     cat /proc/mdstat
Detailed info (all)  sudo mdadm --detail /dev/md0 /dev/md1 /dev/md2
Detailed info (one)  sudo mdadm --detail /dev/md0
List all arrays      sudo mdadm --detail --scan
Mark drive failed    sudo mdadm --manage /dev/md0 --fail /dev/sda1
Remove drive         sudo mdadm --manage /dev/md0 --remove /dev/sda1
Add drive            sudo mdadm --manage /dev/md0 --add /dev/sda1
Stop array           sudo mdadm --stop /dev/md0
Assemble array       sudo mdadm --assemble /dev/md0
Update config        sudo mdadm --detail --scan | sudo tee /etc/mdadm.conf

Summary

Creating RAID 1 arrays in Fedora with mdadm:

  1. Partition all drives with Linux RAID type
  2. Create arrays with mdadm --create
  3. Wait for initial sync (especially for large arrays)
  4. Format with your preferred filesystem
  5. Mount and configure auto-mount in /etc/fstab
  6. Save configuration to /etc/mdadm.conf
  7. Regenerate initramfs with dracut --force
  8. Monitor with mdmonitor service for failure alerts

Remember: RAID provides redundancy, not backup. A RAID 1 array protects you from a single drive failure, but not from accidental deletion, ransomware, fire, or theft. Always maintain off-site backups of important data.

