Monday, June 16, 2008

Removing Logical Devices Using LVM On Linux And Unix

Hey there,

A long weekend of getting confused over factorials and prime numbers is finally over. So, today, we're going to finish one part of what we started in our previous posts on getting started with LVM logical volume management and making use of LVM's monitoring and display commands, and close this little circle by using those same monitoring commands to aid in disk removal with LVM.

Refer back to those two initial posts for any additional details, if you'd like them, but, for today's purposes, we're just going to use their outcome as the assumed setup which we'll be removing.

In a nutshell, we have one physical volume making up one volume group which, in turn, contains only one logical volume (you can picture that stack from the bottom up or the top down). Hopefully this is all coming back to us (including me ;)

Note: Depending on your particular vendor distribution, or version, of LVM, the output of these commands may be slightly different. You shouldn't notice any significant difference, though.

The first thing we'll want to do, to undo all the hard work we've put into creating this little setup, is to deactivate the logical volume (lvol01) in the volume group (vg01). This is done easily with the lvchange command, like so:

host # lvchange -a n /dev/vg01/lvol01

lvchange doesn't spit any output to the terminal by default, if everything goes well. But, if we double-check that the return code from the above command is 0, then we've successfully made the lvol01 logical volume unavailable (-a is the availability flag, and the n following it indicates that we want that availability set to no).
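If you're scripting this, you can make that return-code check explicit. Here's a minimal sketch: check_status is a hypothetical helper (not part of LVM), and "true" stands in for the real lvchange invocation, since actually running lvchange requires root and a live volume group:

```shell
# Hypothetical helper: run a command and report based on its exit status.
# "$@" re-runs the exact arguments passed in; $? holds the failing exit code.
check_status() {
    if "$@"; then
        echo "OK: $*"
    else
        echo "FAILED (exit $?): $*"
        return 1
    fi
}

# In real use this would be: check_status lvchange -a n /dev/vg01/lvol01
# Here, "true" stands in so the sketch runs anywhere:
check_status true
```

Running the sketch prints "OK: true"; with the real lvchange in place, a non-zero exit status would get flagged before you move on to the destructive steps below.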

Next, we'll remove the logical volume from vg01, which also triggers the naturally occurring backup of the volume group:

host # lvremove -f /dev/vg01/lvol01
lvremove -- doing automatic backup of volume group "vg01"
lvremove -- logical volume "/dev/vg01/lvol01" successfully removed

The next two things to do will be to change the volume group's status and remove the volume group using (yes, it's true ;) vgchange and vgremove, like this:

host # vgchange -a n /dev/vg01
vgchange -- volume group "vg01" successfully deactivated

host # vgremove /dev/vg01
vgremove -- volume group "vg01" successfully removed

The surest way to wrap this up (unless your flavour of Linux doesn't allow it) is to run pvchange and then pvremove to finish off the process.

host # pvchange -x n /dev/hdd1 <-- This will tell LVM not to allow allocation of any physical extents from this physical volume (See our original post on getting started with LVM for a slightly more in-depth explanation of these).

host # pvremove -f /dev/hdd1 <-- This will wipe the label on the physical drive so that LVM will no longer recognize it as a physical volume.

And you're all set! Your monitoring and display commands (pvscan, pvdisplay, vgscan, vgdisplay, lvscan and lvdisplay) will now show you disheartening results (that's assuming you didn't want to do this, which goes contrary to the intent of the post ;)
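For easy reference, the whole sequence can be collected into one small shell function. This is just a sketch using the device names from this series of posts (vg01, lvol01, /dev/hdd1); the teardown_lvm wrapper is hypothetical, and passing "echo" as its argument previews each command without touching any disks (running them for real requires root):

```shell
# Hypothetical wrapper around the full teardown from this post.
# Pass "echo" to preview each step; pass no argument to actually run them (as root).
# The && chain stops at the first step that fails.
teardown_lvm() {
    run="$1"
    $run lvchange -a n /dev/vg01/lvol01 &&
    $run lvremove -f /dev/vg01/lvol01 &&
    $run vgchange -a n /dev/vg01 &&
    $run vgremove /dev/vg01 &&
    $run pvchange -x n /dev/hdd1 &&
    $run pvremove -f /dev/hdd1
}

# Preview mode: prints the six commands, in order, without running any of them.
teardown_lvm echo
```

The preview prints exactly the six commands we walked through above, in the same order, which makes it easy to eyeball the device names before committing to the real thing.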

Tomorrow, we'll put together a quick command reference for LVM, including (of course) a few extra commands so it's not "all" fluff ;)