Basically every mdadm guide says the only way to grow an array to the size of its underlying disks (when each of those disks has gotten bigger, I mean) is to fail the disks out one at a time, repartition them, and add them back.
Which means N raid resyncs.
This is not actually true at all. Here’s a copy of the runbook I used for this at a previous job.
Turn off the NFS instance
Detach the current disks
Run an aws-raid-manager -r/--restore with a new, larger disk size
Turn on the NFS instance
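(If you don't have something like aws-raid-manager and are doing the stop/detach/reattach/start dance by hand with the AWS CLI, it looks roughly like this; the instance and volume IDs below are made up, and the restore itself, presumably creating new, larger volumes from your backups and attaching them, is whatever your tooling does.)
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 detach-volume --volume-id vol-0aaaaaaaaaaaaaaa1   # repeat per member disk
# ...create and attach the new, larger volumes here...
aws ec2 start-instances --instance-ids i-0123456789abcdef0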
IMPORTANT: You will probably have to wait for the mdadm sync to finish; cat /proc/mdstat to see status. Feel free to try to continue before it's done, but it probably won't work.
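A couple of ways to watch/wait for it, assuming the array is /dev/md127 as in the rest of this runbook:
watch -n5 cat /proc/mdstat
sudo mdadm --wait /dev/md127   # blocks until any resync/recovery in progress has finished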
Get the old and new component sizes
cd /sys/block/md127/md
cat component_size
cat dev-*/size
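Made-up numbers, just to illustrate what you want to see (every dev-*/size identical, and bigger than the old component_size):
cat component_size
1048575936
cat dev-*/size
2097151872
2097151872
2097151872
2097151872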
Check that the reported device sizes are bigger than the component size, and that the device sizes all match each other.
If they are *all* the same (the device sizes and the component size alike), then mdadm doesn't see the larger partitions and you'll have to convince it:
Use sudo mdadm --detail /dev/md127 | grep -i size to check what size it thinks the disks are and confirm that that makes sense
Use sudo sfdisk -s /dev/xvd[fghi][0-5] to see what the new component size should be. Make sure it compares correctly to the current component size (i.e. double or whatever)
cd /sys/block/md127/md
# push the new size into each member device's sysfs entry
for dev in dev-xvd*
do
    sudo sh -c "echo '[the new size]' >$dev/size"
done
This is probably as dangerous as it sounds, but you're working from a backup, so enh
check the component sizes per above
if, as is likely, most or all come out slightly smaller than what you expect, re-run the loop with that smaller value and continue on
Otherwise, no idea; good luck
Grow to the new size
sudo mdadm --grow /dev/md127 --size=[the dev-*/size from above]
sudo pvresize /dev/md127
sudo lvextend -l+100%FREE /dev/mapper/cytoweb-data
df -h
sudo xfs_growfs /nfs/cytoweb-data
df -h
Destroy the old disks
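Once you're sure you won't need them again, something like the following per old volume (ID made up):
aws ec2 delete-volume --volume-id vol-0aaaaaaaaaaaaaaa1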