Hey there,
Today, we're going to follow up on our previous post on getting started with LVM (Logical Volume Manager) and go over a few (perhaps) boring, but essential, commands you'll want to know in order to keep tabs on your setup in both good times and bad.
Refer back to our initial post on getting started with LVM if you'd like additional details, but, for today's purposes, we're just going to use the outcome of that post as the simple setup we'll be monitoring and displaying. In a nutshell, we have one volume group composed of one physical volume which, in turn, contains only one logical volume (or you could think of it the other way around). Unless I'm completely off-center here, our output today should be extremely reminiscent of yesterday's ;)
Note: Depending on your particular version or vendor distribution of LVM, the output of these commands may look slightly different, but not so much that it should make a significant difference.
Basically, our monitoring and displaying commands break down into three groups: those dealing with "physical volumes," those dealing with "volume groups" and those dealing with "logical volumes." We'll step through them (not over and/or around them, even though that would be more polite ;) one by one (in twos):
Physical Volumes:
The two commands we'll be using here are pvscan and pvdisplay.
pvscan, as with all of the following commands, pretty much does what the name implies: it scans your system for LVM physical volumes. When used straight-up, it will list out all the physical volumes it can find on the system, including those "not" associated with any volume group (output truncated to save on space):
host # pvscan
pvscan -- reading all physical volumes (this may take a while...)
...
pvscan -- ACTIVE PV "/dev/hda1" is in no VG [512 MB]
...
pvscan -- ACTIVE PV "/dev/hdd1" of VG "vg01"[512 MB / 266 MB free]
...
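As a quick aside (and this is just a rough sketch, not gospel), that "is in no VG" wording makes pvscan easy to use in a simple check from cron or a monitoring script. The exact message text can vary between versions, so confirm it against your own pvscan output before depending on the pattern:
host # pvscan | grep "is in no VG" <-- lists only the physical volumes that don't belong to any volume group
If this returns nothing, every physical volume on the box is accounted for in a volume group.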
Next, we'll use pvdisplay to display our only physical volume:
host # pvdisplay /dev/hdd1 <-- Note that you can leave the /dev/hdd1 (or any other device specification) off of the command line if you want to display all of your physical volumes. We just happen to know we only have one and are being particular ;)
PV Name /dev/hdd1
VG Name vg01
PV Size 512 MB
...
Other output should include whether or not the physical volume is allocatable (or "can be used" ;), the total physical extents (see our post on getting started with LVM for a little more information on PEs), the free physical extents, the allocated physical extents and the physical volume's UUID (universally unique identifier).
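If you'd rather not pick through the full listing by eye, you can also pull out just the lines you care about. A minimal sketch, assuming your version labels the fields "PV Name" and "Free PE" the way ours does (check your own output first and adjust the patterns to taste):
host # pvdisplay | egrep "PV Name|Free PE" <-- run with no arguments so it covers every physical volume
That gives you a quick, one-screen view of how many free physical extents each physical volume still has to offer.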
Volume Groups:
The two commands we'll be using here are vgscan and vgdisplay.
vgscan will report on all existing volume groups, as well as create a file (generally) called /etc/lvmtab (some versions will create an /etc/lvmtab.d directory as well):
host # vgscan
vgscan -- reading all physical volumes (this may take a while...)
vgscan -- found active volume group "vg01"
...
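If you're ever in doubt about whether that scan actually did its bookkeeping, it doesn't hurt to follow it up with a quick look for the file (and directory) it's supposed to leave behind. This is a sketch based on the paths mentioned above; newer releases keep their metadata cache elsewhere, so don't panic if yours differ:
host # vgscan
host # ls -l /etc/lvmtab /etc/lvmtab.d <-- confirm the lvmtab file and directory were created/updated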
vgdisplay can be used to check on the state and condition of our volume group(s). Again, we're specifying our volume group on the command line, but this isn't strictly necessary:
host # vgdisplay vg01
...
VG Name vg01
...
VG Size 246 MB
...
This command gives even more verbose output: everything from the maximum number of logical volumes the volume group can contain (including how many it currently does contain and how many of those are open), to separate (yet similar) information with regard to the physical volumes it can encompass, to all of the information you've come to expect about the physical extents and, of course, the volume group's UUID.
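Since the question you're usually asking vgdisplay is "how much room is left?", a quick filter gets you there faster. Another rough sketch, assuming your version includes the word "free" in the relevant output lines (ours does; yours may word it differently):
host # vgdisplay vg01 | grep -i free <-- shows just the free space/free extent lines for vg01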
Logical Volumes:
The two commands we'll be using here are (you guessed it ;) lvscan and lvdisplay.
lvscan will give us a report on all of the logical volumes on our system and their state (ACTIVE/INACTIVE, or possibly in ERROR), like so:
host # lvscan
...
lvscan -- ACTIVE "/dev/vg01/lvol01" [246 MB]
lvscan -- 1 logical volumes with 246 MB total in 1 volume group
lvscan -- 1 active logical volume
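Just like with pvscan, that output is friendly to a quick scripted check. A small sketch, with the usual caveat that the exact state wording varies between versions (match it to whatever your lvscan actually prints):
host # lvscan | egrep -i "inactive|error" <-- prints only logical volumes that aren't happily ACTIVE
No output here is good news; anything that does show up deserves a closer look with lvdisplay.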
Once this step has been completed (and it should be completed at least once!), we can use lvdisplay to check out our logical volume(s). We'll be looking specifically at lvol01:
host # lvdisplay /dev/vg01/lvol01
...
LV Name /dev/vg01/lvol01
VG Name vg01
...
And we'll get our standard UUID information, the read/write access mode, the availability status and information on the logical volume's size (among other useful output, hopefully including the block device the logical volume is associated with :)
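And, as before, you can trim that down to the essentials if you're just doing a quick health check. A minimal sketch, assuming your lvdisplay labels the fields "LV Name," "LV Status" and "LV Size" (double-check against your own output):
host # lvdisplay /dev/vg01/lvol01 | egrep "LV Name|LV Status|LV Size" <-- just the name, availability and size lines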
It should be noted that the logical volume size is one of the areas we generally look at first. If, for some reason, the output from "df" shows less space than you intended, this will give you a confident indication of whether your logical volume is too small or you just need to grow the filesystem resident on it to use all of the logical volume's space!
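To put that into practice, it helps to line the two numbers up side by side. A rough sketch, assuming the logical volume above is mounted somewhere like /mnt/lvol01 (a made-up mount point; substitute your own):
host # df -h /mnt/lvol01 <-- what the filesystem thinks it has to work with
host # lvdisplay /dev/vg01/lvol01 | grep "LV Size" <-- what the logical volume actually provides
If df reports noticeably less than the LV Size line, the filesystem just hasn't been grown to fill the volume yet; if the LV Size itself is smaller than you expected, it's the logical volume that needs extending.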
...And that about wraps up the "information gathering" section of this series of posts on LVM. In future posts, we'll be looking at ways to manipulate volumes (both in groups and individually, in their physical and logical aspects; although the physical volume is actually a logical representation of a logical representation of a physical device... but I think we went into too much detail about that Gordian knot in our previous post and, even then, barely scratched the surface ;). We'll also be looking at LVM on Unix and Linux and the relationship between the two, which is becoming less complex every day!
Cheers,
Mike