From: Mhc wiki - gnu linux and rfid fun
I recently had the need to convert my home server setup from a single disk to RAID 1 without losing data or reinstalling the system. I found various articles on the subject, but mostly for old versions of Red Hat/CentOS and Debian/Ubuntu, with older initramfs/GRUB versions. I'm writing this article for personal reference and to thank all the people who share information.
Main reason for this headache: to have a safer place to store some important data and, since I mostly use really cheap or "cost zero" hardware, a safer place for my CentOS 7 installation.
I'm no expert; follow these instructions at your own risk. I'm not responsible for data loss or any damage that might occur from following these instructions. It just worked for me.
Remember that RAID 1 is not a backup, always do your backups!!!
This is a really small wiki for personal use; no talk/discussion or user registration is allowed.
Feel free to contact me with any info, comments, personal experiences, or corrections to this page
"" [dot] "" [at] "" [dot] ""
1x Seagate Barracuda 500 GB as /dev/sda with 3 partitions.
/dev/sda1  /boot
/dev/sda2  swap
/dev/sda3  /
Current partitions are XFS, not using LVM.
I'm adding a second, identical disk /dev/sdb for the RAID 1 setup. The RAID will be a Linux software RAID managed by mdadm; be sure to have the package installed.
yum install mdadm
Be also sure to have a lot of patience, junk food and caffeine as usual :)
A full working backup of everything.
/dev/sdb pretty obvious.
Create a partition scheme identical to the current disk /dev/sda
sfdisk -d /dev/sda | sfdisk /dev/sdb
Check
fdisk -l
Convert the new disk /dev/sdb partitions to "Linux raid autodetect"
fdisk /dev/sdb
use "t" to convert all 3 partitions to "fd"
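If you prefer a scripted, non-interactive version, the same keystrokes can be fed to fdisk from a here-string. This is my sketch, not part of the original procedure; triple-check the target disk before uncommenting the last line.

```shell
# t = change partition type, then partition number, then "fd"
# (Linux raid autodetect), and finally w = write the table.
DISK=/dev/sdb   # assumption: the new, still-empty disk
KEYS='t
1
fd
t
2
fd
t
3
fd
w
'
# printf "%s" "$KEYS" | fdisk "$DISK"   # uncomment to actually run it
```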
Check
fdisk -l
Create a RAID 1 array for each partition on the new disk /dev/sdb
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb2
mdadm --create /dev/md2 --level=1 --raid-devices=2 missing /dev/sdb3
Check
cat /proc/mdstat
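With one member still "missing", each array should show up as active but degraded. The output below is illustrative (my reading of a typical mdstat, sizes will differ on your system):

```shell
cat /proc/mdstat
# md2 : active raid1 sdb3[1]
#       487731200 blocks super 1.2 [2/1] [_U]
#
# [2/1] = 2 slots, 1 device present; "_" marks the empty slot
# reserved for the /dev/sda partition we add later.
```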
Create filesystems (and swap) on all the newly created RAID 1 devices
mkfs.xfs /dev/md0
mkswap /dev/md1
mkfs.xfs /dev/md2
Mount both / and /boot
mount /dev/md2 /mnt/
mount /dev/md0 /mnt/boot/
rsync -auxHAXSv --exclude=/dev/* --exclude=/proc/* --exclude=/sys/* --exclude=/tmp/* --exclude=/mnt/* /* /mnt
rsync -auxHAXSv /boot/* /mnt/boot
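An optional sanity check (my addition, not in the original steps): repeat both copies in dry-run mode. With an up-to-date copy, rsync should list little or nothing left to transfer; note that the /boot contents belong under /mnt/boot on the new disks.

```shell
# -n / --dry-run: report what would be copied without writing anything.
# Quoting the excludes keeps the shell from expanding the globs itself.
rsync -auxHAXSvn --exclude='/dev/*' --exclude='/proc/*' --exclude='/sys/*' \
      --exclude='/tmp/*' --exclude='/mnt/*' /* /mnt
rsync -auxHAXSvn /boot/* /mnt/boot
```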
I'm no rsync expert, but this did the job for me.
Mount both / and /boot (should be already mounted)
mount /dev/md2 /mnt/
mount /dev/md0 /mnt/boot/
System information
mount --bind /proc /mnt/proc
mount --bind /dev /mnt/dev
mount --bind /sys /mnt/sys
mount --bind /run /mnt/run
Jail! No harm to the current system.
chroot /mnt/
Edit fstab with the new drives' UUID information
blkid /dev/md*
/dev/md0: UUID="your-UUID" TYPE="xfs"
/dev/md1: UUID="your-UUID" TYPE="swap"
/dev/md2: UUID="your-UUID" TYPE="xfs"
vim /etc/fstab
UUID=your-UUID /     xfs  defaults 0 0
UUID=your-UUID /boot xfs  defaults 0 0
UUID=your-UUID swap  swap defaults 0 0
mdadm --detail --scan > /etc/mdadm.conf
Backup current and create new initramfs
cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.bck
dracut --mdadmconf --fstab --add="mdraid" --filesystems "xfs ext4 ext3 tmpfs devpts sysfs proc" --add-drivers="raid1" --force /boot/initramfs-$(uname -r).img $(uname -r) -M
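A hedged extra check, not in the original steps: lsinitrd (shipped with dracut) lists the contents of an initramfs image, so you can confirm the mdraid module and the raid1 driver really made it in before rebooting.

```shell
# Grep the image listing for the RAID pieces we asked dracut to include.
lsinitrd /boot/initramfs-$(uname -r).img | grep -iE 'mdraid|raid1'
```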
Add some default parameters to grub
vim /etc/default/grub
GRUB_CMDLINE_LINUX="rd.auto rd.auto=1 rhgb quiet"
GRUB_PRELOAD_MODULES="mdraid1x"
grub2-mkconfig -o /boot/grub2/grub.cfg
Install grub on new disk /dev/sdb
grub2-install /dev/sdb
At this point you can reboot the system, choosing the new disk /dev/sdb from the BIOS, or unplug the old disk /dev/sda. If all worked out the system will boot; check the mount points and RAID status
swapon -s
Filename   Type       Size      Used  Priority
/dev/md1   partition  12279804  0     -1
mount -t xfs
/dev/md2 on / type xfs (rw,relatime,attr2,inode64,noquota)
/dev/md0 on /boot type xfs (rw,relatime,attr2,inode64,noquota)
cat /proc/mdstat
Or if it didn't work out... well, we didn't touch any data or anything else on the original disk, so read some more and start over. Don't complain, you're used to it :)
Now we can add the old disk /dev/sda to the array. Change its partition types to "Linux raid autodetect".
fdisk /dev/sda
use "t" to convert all 3 partitions to "fd"
Add disk to raid 1 array
mdadm --manage /dev/md0 --add /dev/sda1
mdadm --manage /dev/md1 --add /dev/sda2
mdadm --manage /dev/md2 --add /dev/sda3
Check rebuild
watch -n1 "cat /proc/mdstat"
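Alongside /proc/mdstat, mdadm --detail reports the state and rebuild progress per array. A hedged extra check, not part of the original steps:

```shell
# "State", "Rebuild Status" and the member-device lines are the
# interesting parts while the resync is running.
mdadm --detail /dev/md0 /dev/md1 /dev/md2 | grep -E 'State|Rebuild|/dev/sd'
```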
Reinstall grub on /dev/sda
grub2-install /dev/sda
Add to /etc/mdadm.conf
vim /etc/mdadm.conf
MAILADDR root
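To verify the alert mail actually reaches root, mdadm's monitor mode has a documented --test flag that sends one test message per array (my suggestion, not in the original write-up):

```shell
# --oneshot: check once and exit instead of running as a daemon;
# --test: generate a TestMessage alert for every array found by --scan.
mdadm --monitor --scan --oneshot --test
```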
The status of the RAID devices will be checked once a week by default
cat /etc/cron.d/raid-check
# Run system wide raid-check once a week on Sunday at 1am by default
0 1 * * Sun root /usr/sbin/raid-check
To change the parameters, check /etc/sysconfig/raid-check
Use SMART features if available on your hard drives
yum install smartmontools
This is my personal configuration: comment out all lines in /etc/smartmontools/smartd.conf and add
/dev/sda -H -C 0 -U 0 -m root
/dev/sda -a -o on -S on -s (S/../.././02|L/../../1/04)
/dev/sdb -H -C 0 -U 0 -m root
/dev/sdb -a -o on -S on -s (S/../.././02|L/../../1/04)
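Before trusting smartd, a quick manual check doesn't hurt (my habit, assuming SMART-capable disks), and smartd must be restarted to pick up the edited configuration:

```shell
smartctl -H /dev/sda      # overall health self-assessment
smartctl -H /dev/sdb
systemctl restart smartd  # reload the edited smartd.conf
```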
#EoF profit! ;P