
If you've ever done a full upgrade of the disks in a server, you'll have faced the problem of getting the data from one set of disks to the other without reinstalling from scratch.

Here's how I'd do it:

  • Install the new hardware in such a way that the old hardware is still the default, i.e. your old disks are still the boot disks and the new hardware comes up somewhere else entirely. This is easy if you're migrating from normal IDE to IDE-RAID or to SCSI or something, not so easy if you're moving from one SCSI controller to another.
  • Boot into your old OS, and then partition and format the new drives the way you would like them set up. Set up any RAID volumes here too.
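
For example, if the new root is going onto a RAID-1 pair of IDE disks, it'd go something like this. The device names, RAID level and filesystem here are just placeholders for whatever your new hardware actually is:

fdisk /dev/hde      # partition the first new disk (set the partition type to fd, Linux raid autodetect)
fdisk /dev/hdg      # partition the second new disk the same way
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hde1 /dev/hdg1
mkfs.ext3 /dev/md0  # format the new root filesystem
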
  • Mount the new rootfs under /mnt/root
  • Mount all the new partitions at appropriate points under /mnt/root, e.g. the new var goes to /mnt/root/var. You'll need to create appropriate dirs for these mountpoints first, of course.
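
Carrying on with the made-up device names from above (a new root on /dev/md0 and a separate /var on /dev/hde2):

mkdir -p /mnt/root
mount /dev/md0 /mnt/root
mkdir -p /mnt/root/var
mount /dev/hde2 /mnt/root/var
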
  • Make sure you are in runlevel 1, single user mode. If you aren't, you might have files like the mail spool change while you are copying. This means you LOSE DATA. DANGER.
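
On a SysV-style init that's just (init 1 does the same thing):

telinit 1
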
  • Recursively copy all the other directories (i.e. excluding /mnt) into /mnt/root. Don't copy /proc or /tmp either, e.g.

cp -a /bin /boot /dev /etc /home /lib /opt /root /sbin /usr /var /mnt/root/

  • Have a coffee or two
  • Make empty /proc and /tmp directories on the new root

mkdir /mnt/root/proc
mkdir /mnt/root/tmp

  • Set the permissions on /tmp properly (rwxrwxrwt)

chmod 1777 /mnt/root/tmp

  • Once this has finished, you can use chroot to set the new partition as the root for your shell, and verify that things work

chroot /mnt/root /bin/bash

  • Make sure everything looks like it's OK.
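
A check worth doing from inside the chroot is to make sure /etc/fstab and /etc/lilo.conf on the new root refer to the new devices rather than the old ones:

cat /etc/fstab
cat /etc/lilo.conf
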
  • Reinstall your bootloader. If you are in the chroot as described above, it's just the command below. Note that <new boot device> is just that! If you are booting off a SCSI RAID array, it'll possibly be /dev/md0. If it's an IDE disk that will eventually be on /dev/hda but is currently on /dev/hde, try /dev/hde, but I can't guarantee that'll work. You might want to make a bootdisk so you can boot off that when you yank the old disks.

    lilo -b <new boot device>

    • If you aren't in the chroot, specify the config file

    lilo -C /mnt/root/etc/lilo.conf -b <new boot device>
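
As for making that bootdisk: if your distro ships mkbootdisk (Red Hat and friends do), it's roughly:

mkbootdisk --device /dev/fd0 `uname -r`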

  • Reboot, remove the old disks, and hopefully boot off the new ones.

I cheat a little when building a Linux box that is primarily a server rather than a workstation. A Linux server normally has plenty of disk, and quite often more than one drive, so I try to keep the root drive completely separate from any data drives. For example, a 36 GB IDE root drive and four 275 GB drives in a software RAID-0:

belt:# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda1              18G  1.5G   16G   9% /
/dev/md0              1.1T  739G  379G  67% /backup

Doing it this way means that the data drives can be moved to another box with minimal fuss, and if a data drive dies the root drive can still boot. The speed of the root drive doesn't really matter once the machine is up, but the speed of the data drives does.
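
For what it's worth, the fstab on a box laid out like that only needs one extra line for the data array; something like this (the filesystem type and options are just an example):

/dev/hda1   /         ext3    defaults        1 1
/dev/md0    /backup   ext3    defaults        1 2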