LVM Data Migration and Recovery: Working with vgexport & vgimport

Russell I. Henmi
HP Education Consultant
Hewlett-Packard Company
100 Mayfield Avenue MS 37LE
Mountain View, CA 94043
(650) 691-3133
(650) 691-3284 [fax]
russell_henmi@hp.com
Abstract
What if:
· … a system crashes, destroying the root volume group but leaving the data volumes intact. Is there any way to recover the data without time-consuming restores from backup tapes?
· … you need to upgrade your 10.20 system to 11.00. How can you guarantee that data volume groups will not be affected?
· … the development group's project is too large to transfer via traditional means (ftp, tapes, etc.). Is there any way to simply physically move the disks from one system to another?
Logical Volume Manager (LVM), though a powerful tool, can be time-consuming to set up. During system recovery, when time is of the essence, you need to bring your file systems on-line as quickly as possible. The built-in functionality of the vgexport and vgimport commands can help to make your job as an HP-UX administrator much easier.
Through this session you will learn how to manage your data volume groups more efficiently by:
· Safely removing volume groups from live systems without data loss.
· Reading configurations from existing LVM physical volumes to recover volume groups from crashed systems.
· Transferring volume groups and the disks contained within them between live systems.
Background
To understand how and why the vgexport and vgimport commands work, we first have to remember how LVM physical volumes (PVs) and volume groups (VGs) are formed.
Recall that to add a disk to a VG, we must first run a pvcreate on the disk, creating an area known as the Physical Volume Reserved Area (PVRA). This area contains static information about the disk (now properly referred to as a PV) such as size and eventually some VG membership information.
However, do we know what volume group this disk belongs to? At this point we do not. That information will be provided by the vgcreate (or vgextend, for pre-existing VGs) command. The vgcreate command not only fills in this information but also builds a second area, the Volume Group Reserved Area (VGRA), to hold more information about the PV and its role in the VG. Think of the two reserved areas like labels on a floppy disk, describing the contents and ownership of the disks.
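As a minimal sketch (using the vgbeaches disk names that appear later in this paper), building a VG from scratch might look like the following. Note that pvcreate takes the character (raw) device file, while vgcreate takes the block device file:

# pvcreate /dev/rdsk/c0t4d0
# mkdir /dev/vgbeaches
# mknod /dev/vgbeaches/group c 64 0x010000
# vgcreate /dev/vgbeaches /dev/dsk/c0t4d0

The pvcreate writes the PVRA, the mkdir/mknod pair builds the VG directory and group file (major number 64, with a minor number unique to this VG), and the vgcreate writes the VGRA and records the disk's membership in vgbeaches.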
As long as this disk is part of its VG, the reserved areas will provide information to the LVM subsystem whenever we attempt to perform any actions on the disk. This information is compared against the /etc/lvmtab file to verify that the disk belongs to an active VG on the system. Additionally, the kernel accesses the LVM devices through the device files under the /dev/<vgname> directory.
But what happens when we reuse a disk from a defunct VG? To get rid of a VG, you execute a vgremove on the unneeded VG (which updates /etc/lvmtab), delete the /dev/<vgname> directory, and then attempt to execute a pvcreate to prep the disk for use in a new VG:
# pvcreate /dev/dsk/c0t5d0
pvcreate: The physical volume already belongs to a volume group
Oops! It appears that the disk wasn't unused after all. You run a pvdisplay to see to whom it belongs:

# pvdisplay /dev/dsk/c0t5d0
pvdisplay: Couldn't find the volume group to which physical volume … belongs.
pvdisplay: Cannot display physical volume "/dev/dsk/c0t5d0".
What happened? We removed the disk from the VG, so why didn't the pvcreate and the pvdisplay work? The problem is that the vgreduce and vgremove commands do not update the disk's reserved areas (i.e. execute a pvcreate) as the disk leaves the VG. Why not?
If they did update the disk's reserved areas, it would be like the paper label on a floppy disk magically changing as soon as you decided to give the disk to someone else. What you have to do, of course, is put a new label over the old one to cover up (i.e. blank out) the information that already exists. Now that the disk has no identifying marks tying it to its previous owner and purpose, you can use it for whatever you want.
This analogy directly relates to our LVM problem. The reason we couldn't pvcreate our disk is that the command won't work if there is information already present - even if the volume group no longer exists! We need to overwrite (i.e. blank out) the reserved areas to make the disk forget that it was ever a PV that belonged to a VG. We accomplish this by using the -f ("force") option of the pvcreate command. Once this is done, we can use the PV in whatever VG we want.
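For example, to reclaim the disk from the failed attempt above (a sketch; as noted earlier, pvcreate expects the character, or raw, device file):

# pvcreate -f /dev/rdsk/c0t5d0

Once this completes, the disk can be added to a new VG with vgcreate or vgextend.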
The important thing to remember is that it is very difficult to accidentally "lose" our reserved areas. As long as the PVs are removed intact from the VG (i.e. without disk corruption), the reserved areas will not "forget" any of the information that they contain (such as the physical-to-logical extent mapping). In most cases, this is irrelevant since we don't want the disks to transfer their information to the new VG, but when we want to recover a PV's "identity," the feature becomes invaluable.
CAUTION: Due to this aspect of the LVM subsystem, it is recommended that system administrators NEVER use the pvcreate -f command/option set on the initial attempt to set up the PV. Because the force option could remove potentially vital information, it should only be used AFTER the initial pvcreate fails.
The vgexport command: A data migration tool
If we are planning to disconnect a VG from a system and maintain data integrity, we need to remove all traces of the VG from the system while leaving the PVs' physical extents intact. Recall that VGs are tied to the system by 1) an entry in /etc/lvmtab and 2) a directory called /dev/<vgname>. In order to remove these connections while leaving the LVM reserved areas and extents intact, we need the vgexport command.
For the purpose of this exercise, we'll call our original VG "vgbeaches" and we're going to import it onto a new system with the same names.
CAUTION: While this should not result in data loss, we can never be too careful when it comes to our data. Therefore, we are assuming that a complete backup has been taken of at least your target VG before we start. Remember, there are two types of sysadmins: the paranoid ones and the ones who should be.
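For example, assuming (hypothetically) that the vgbeaches file systems are mounted under /beaches and that a tape drive exists at /dev/rmt/0m, a full backup could be taken with fbackup:

# fbackup -f /dev/rmt/0m -i /beaches -v

Any backup method you trust (fbackup, tar, cpio, a network backup product) is fine; the point is simply to have one before you touch the VG.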
Let's assume we have a preexisting VG called vgbeaches that we're going to export. The /dev/vgbeaches directory looks like this:
# ll /dev/vgbeaches
total 0
brw-r-----   1 root   sys    64 0x010001 Jan  6 15:40 pebble_lv
brw-r-----   1 root   sys    64 0x010002 Jan  6 15:40 pismo_lv
brw-r-----   1 root   sys    64 0x010003 Jan  6 15:40 palm_lv
crw-r--r--   1 root   sys    64 0x010000 Jan  6 15:38 group
crw-r-----   1 root   sys    64 0x010001 Jan  6 15:40 rpebble_lv
crw-r-----   1 root   sys    64 0x010002 Jan  6 15:40 rpismo_lv
crw-r-----   1 root   sys    64 0x010003 Jan  6 15:40 rpalm_lv
crw-r-----   1 root   sys    64 0x010004 Jan  6 15:40 ravila_lv
brw-r-----   1 root   sys    64 0x010004 Jan  6 15:40 avila_lv
Note the minor number of the group file (0x010000) and the LV index numbers. LV index numbers are the last two digits of the minor number of the LV device files (i.e. "01" for /dev/vgbeaches/pebble_lv).
We should also note the physical disks associated with vgbeaches:
# vgdisplay -v /dev/vgbeaches
--- Volume groups ---
VG Name                     /dev/vgbeaches
VG Write Access             read/write
VG Status                   available
Max LV                      255
Cur LV                      4
Open LV                     4
Max PV                      16
Cur PV                      2
Act PV                      2
Max PE per PV               1016
VGDA                        4
PE Size (Mbytes)            4
Total PE                    374
Alloc PE                    0
Free PE                     374
Total PVG                   0

--- Physical volumes ---
PV Name                     /dev/dsk/c0t4d0
PV Status                   available
Total PE                    250
Free PE                     250

PV Name                     /dev/dsk/c0t5d0
PV Status                   available
Total PE                    124
Free PE                     124
1. Make sure that the LVs from our target VG are not in use. This can be easily accomplished via the umount command. Be sure to remove the entries for these LVs from your /etc/fstab file so that the system does not try to remount the missing LVs at boot time.
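For example, assuming (hypothetically) that the four LVs are mounted under /beaches:

# umount /dev/vgbeaches/pebble_lv
# umount /dev/vgbeaches/pismo_lv
# umount /dev/vgbeaches/palm_lv
# umount /dev/vgbeaches/avila_lv

Then edit /etc/fstab and delete (or comment out) the lines that reference these LVs.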
2. Deactivate the VG with the command line:

# vgchange -a n /dev/vgbeaches
Volume group "/dev/vgbeaches" has been successfully changed.

where the -a n option stands for the activation status (reset it to "no").
The vgchange command will generate an error message if you missed anything in Step #1, since you cannot deactivate a VG with a currently mounted LV.
3. Now, we're ready to use the vgexport command. The syntax we'll use is:

# vgexport -sv -m /tmp/vgbeaches.map /dev/vgbeaches

where -v is the verbose option and -m generates a map file. The -s option exports the VG as shared - a useful feature for high availability situations that will become more useful when we use the vgimport command later.

The output from the vgexport command is:

Beginning the export process on Volume Group "vgbeaches".
/dev/dsk/c0t4d0
/dev/dsk/c0t5d0
Volume group "vgbeaches" has been successfully removed.
Note that both disks from the VG show up in the output. You must remember that you cannot export parts of a VG with vgexport - it's all or nothing.
Let's check if we did make the system forget about our VG:
# ll /dev/vgbeaches
/dev/vgbeaches not found
# strings /etc/lvmtab
vg00
/dev/dsk/c0t6d0
Looks like it worked!
Now, what about this "map file" thing? Let's see what's in it:
# cat /tmp/vgbeaches.map
VGID 7825fd6b372fd130
1 pebble_lv
2 pismo_lv
3 palm_lv
4 avila_lv
We immediately recognize the names of our LVs from the VG, but what are the numbers? Think back to the beginning of the procedure … these appear to be the LV index numbers from the LV device files! The map file is used to preserve the customized names we assigned the LVs when we created them. Otherwise, when we restore the VG later, the LVs will be named in the form lvol<index number> (i.e. lvol1).
The VGID is added to the mapfile by the -s option. It will be used by vgimport to restore all of the PVs in the VG in one step.
4. The final step is to shut down the system and remove the disks so that they can be transferred to the new system.
# shutdown -h now
The vgimport command: Bringing it all back together
Now that we have prepped our VG for transport, the question becomes, "How do we get it back?" Intuition tells us that if we have a vgexport command there will most likely be a vgimport command to complement it. Unfortunately, our problem cannot be solved with just this one command. To understand why, we must look again at what defines a VG.
Several things allow the system and the sysadmin to distinguish between distinct VGs:
1. The VG name (i.e. vg01, vgbeaches)
2. The /dev/<VG Name>/group file (specifically, the minor number from this device file)
3. The specific PVs associated with the VG (i.e. /dev/dsk/c0t5d0)
These characteristics do define a VG provided that they are unique on the system! Can you imagine what would happen if we had two VGs with the same minor number?
The minor number is essential to the way LVM works because, like most UNIX applications, LVM refers to its components by a numerical value, not a name. The names are only there for the human users, who have an easier time referencing and remembering things by word-based handles. To see the mapping of minor numbers to their associated names, just execute an ls -l command on the device file in question.
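For example, using the vgbeaches group file from the listing shown earlier:

# ls -l /dev/vgbeaches/group
crw-r--r--   1 root   sys    64 0x010000 Jan  6 15:38 /dev/vgbeaches/group

The fields of interest are the major number (64, the LVM driver) and the minor number (0x010000).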
Because we will be adding our exported VG to a system with existing VGs, any or all of these characteristics may already belong to someone else. The power of the LVM subsystem is that its modularity allows us to be very flexible when we restore the VG. All the characteristics of the VG - name, minor number, even the PV's SCSI address - can be changed when we add it to the new system.
Note: A major "gotcha" to watch for is related to the vgexport/vgimport command set's "all or nothing" attitude. Just as you can't vgexport partial VGs, you cannot successfully vgimport partial VGs. You must vgimport all the PVs at the same time; you cannot vgimport PVs to an existing VG.
1. Add the disks to be imported onto your system. Remember that most SE-SCSI devices should only be connected when the system is halted and powered off. Certain types of disk hardware (i.e. high availability storage cabinets, FW-SCSI devices) do allow for "hot-swapping," but always check your documentation before doing so.
Do not forget to change the SCSI address of the imported devices if there is a conflict with an existing device.
Once all this is done, let the system power up and then log on as root.
2. Locate the device files for your imported devices. The device files may be different from what they were on the original machine if you have changed the SCSI addresses or connected the disks to a differently numbered SCSI controller. We can determine the device files with an ioscan command.
# ioscan -fnC disk
Class     I  H/W Path    Driver  S/W State  H/W Type  Description
==================================================================
disk      0  2/0/1.2.0   sdisk   CLAIMED    DEVICE    TOSHIBA CD-ROM
                         /dev/dsk/c0t2d0   /dev/rdsk/c0t2d0
disk      1  2/0/1.4.0   sdisk   CLAIMED    DEVICE    HP      C3324A
                         /dev/dsk/c0t4d0   /dev/rdsk/c0t4d0
disk      2  2/0/1.5.0   sdisk   CLAIMED    DEVICE    SEAGATE ST3600N
                         /dev/dsk/c0t5d0   /dev/rdsk/c0t5d0
disk      3  2/0/1.6.0   sdisk   CLAIMED    DEVICE    SEAGATE ST34573N
                         /dev/dsk/c0t6d0   /dev/rdsk/c0t6d0
3. The first step of a vgimport procedure is to create the VG's directory and group file - just like for the vgcreate. We are making a new VG, as far as this new system is concerned, so we have to follow the same steps … with one obvious exception: DO NOT pvcreate the PVs! The whole reason we exported the VG from the other system was to preserve the PVRA and VGRA, both of which would be destroyed by a pvcreate.
So, what do we call our new VG and what minor number do we use? It really doesn't matter, as long as both are unique on our system. Let's find out what names and minor numbers the other VGs are currently using:
# ls -l /dev/*/group
crw-r-----   1 root   root   64 0x000000 Jun  7  1998 /dev/vg00/group
Only the root VG (vg00) exists on this system, so we can use any name and minor number we want. By default on an HP-UX system, the valid minor numbers are 00-09 (the first two digits of the group file's minor number) and can be used in any order. For now, let's assume we're going to re-import the VG with the same names and minor number as before.
# mkdir /dev/vgbeaches
# mknod /dev/vgbeaches/group c 64 0x010000
# ls -l /dev/*/group
crw-r-----   1 root   root   64 0x000000 Jun  7  1998 /dev/vg00/group
crw-r--r--   1 root   sys    64 0x010000 Jan  6 15:38 /dev/vgbeaches/group
4. We're finally ready to run the vgimport command. Let's use the -p (preview) option to see what happens before we commit to anything:
# vgimport -vp vgbeaches /dev/dsk/c0t4d0 /dev/dsk/c0t5d0
Beginning the import process on Volume Group "vgbeaches".
Logical volume "/dev/vg01/lvol1" has been successfully created with lv number 1.
Logical volume "/dev/vg01/lvol2" has been successfully created with lv number 2.
Logical volume "/dev/vg01/lvol3" has been successfully created with lv number 3.
Logical volume "/dev/vg01/lvol4" has been successfully created with lv number 4.
Volume group "/dev/vg01" has been successfully created.
Warning: A backup of this volume group may not exist on this machine.
Please remember to take a backup using the vgcfgbackup command after activating the volume group.
What happened? Notice that we recovered the right number of LVs, but they have the generic names. Why didn't the custom names we created before come back? That's right - we forgot the map file we created when we used vgexport on the VG! The map file is just ASCII text - which means we could recreate it by hand - but to re-create the VG exactly as it existed originally we would want that original map. We can bring the map file to the new system via ftp, DAT tape, or some other transport mechanism. Additionally, we need the VGID number from the original map to use the -s option successfully.
Many people believe that the use of the vgexport/vgimport commands and the map file is the only supported way to change the name of an existing LV. LVs can actually be renamed by using the mv command to modify the name of their device files - just remember to keep the block and character device file names consistent (i.e. mylvol and rmylvol).
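A minimal sketch, assuming a hypothetical LV named mylvol in vgbeaches that we want to rename to shiny_lv (do this while the LV is not mounted, and remember to update any /etc/fstab entries that reference the old name):

# mv /dev/vgbeaches/mylvol /dev/vgbeaches/shiny_lv
# mv /dev/vgbeaches/rmylvol /dev/vgbeaches/rshiny_lv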
Let's run vgimport again, this time with the map file, and see what happens:
# vgimport -sv -m /tmp/vgbeaches.map vgbeaches
Beginning the import process on Volume Group "vgbeaches".
Logical volume "/dev/vg01/pebble_lv" has been successfully created with lv number 1.
Logical volume "/dev/vg01/pismo_lv" has been successfully created with lv number 2.
Logical volume "/dev/vg01/palm_lv" has been successfully created with lv number 3.
Logical volume "/dev/vg01/avila_lv" has been successfully created with lv number 4.
Volume group "/dev/vgbeaches" has been successfully created.
# ll /dev/vgbeaches
total 0
brw-r-----   1 root   sys    64 0x010001 Jan  6 15:43 pebble_lv
brw-r-----   1 root   sys    64 0x010002 Jan  6 15:43 pismo_lv
brw-r-----   1 root   sys    64 0x010003 Jan  6 15:43 palm_lv
crw-r--r--   1 root   sys    64 0x010000 Jan  6 15:42 group
crw-r-----   1 root   sys    64 0x010001 Jan  6 15:43 rpebble_lv
crw-r-----   1 root   sys    64 0x010002 Jan  6 15:43 rpismo_lv
crw-r-----   1 root   sys    64 0x010003 Jan  6 15:43 rpalm_lv
crw-r-----   1 root   sys    64 0x010004 Jan  6 15:43 ravila_lv
brw-r-----   1 root   sys    64 0x010004 Jan  6 15:43 avila_lv
The options are the same as in vgexport: -v for verbose mode, -m to specify the map file, and -s to import the VG as shareable.

However, the -s option has another benefit. Notice that in the preview where we didn't use the map file, the device files for each of the disks in the VG had to be specified on the command line in order to be included in the vgimport. Omitting a disk would be disastrous since the VG cannot import disks after the fact - either you vgimport all of them or you don't run the command.

Using the -s option in conjunction with our original map file (the one with the VGID number) allowed us to leave out the disk device files on the vgimport command line. The LVM subsystem takes the VGID and searches the system's disks to find all the devices that match the VGID. Once located, the disks are all included in the vgimport.
5. In order to use the vgcfgbackup command as recommended by the vgimport command, we need to first activate the VG:

# vgchange -a y /dev/vgbeaches
Activated volume group
Volume group "/dev/vgbeaches" has been successfully changed.
6. Now, let's back up the configuration and verify for ourselves that the VG has been added to the system with the new characteristics we specified.

# vgcfgbackup /dev/vgbeaches
Volume Group configuration for /dev/vgbeaches has been saved in /etc/lvmconf/vgbeaches.conf
# vgdisplay -v /dev/vgbeaches
--- Volume groups ---
VG Name                     /dev/vgbeaches
VG Write Access             read/write
VG Status                   available
Max LV                      255
Cur LV                      4
Open LV                     4
Max PV                      16
Cur PV                      2
Act PV                      2
Max PE per PV               1016
VGDA                        4
PE Size (Mbytes)            4
Total PE                    374
Alloc PE                    12
Free PE                     362
Total PVG                   0

--- Logical volumes ---
LV Name                     /dev/vgbeaches/pebble_lv
LV Status                   available/syncd
LV Size (Mbytes)            12
Current LE                  3
Allocated PE                3
Used PV                     1

LV Name                     /dev/vgbeaches/pismo_lv
LV Status                   available/syncd
LV Size (Mbytes)            12
Current LE                  3
Allocated PE                3
Used PV                     1

LV Name                     /dev/vgbeaches/palm_lv
LV Status                   available/syncd
LV Size (Mbytes)            12
Current LE                  3
Allocated PE                3
Used PV                     1

LV Name                     /dev/vgbeaches/avila_lv
LV Status                   available/syncd
LV Size (Mbytes)            12
Current LE                  3
Allocated PE                3
Used PV                     1

--- Physical volumes ---
PV Name                     /dev/dsk/c0t4d0
PV Status                   available
Total PE                    250
Free PE                     238

PV Name                     /dev/dsk/c0t5d0
PV Status                   available
Total PE                    124
Free PE                     124
Putting it all together: A complete session transcript
The following transcript shows the entire procedure end to end: displaying and exporting vgbeaches, transferring the map file via ftp to the target system (annette), and then creating the group file and importing the VG on the target. Note that a different group file minor number (0x090000) is used on the target system, demonstrating that the VG's characteristics can be changed along the way.

# vgdisplay -v /dev/vgbeaches
--- Volume groups ---
VG Name                     /dev/vgbeaches
VG Write Access             read/write
VG Status                   available
Max LV                      255
Cur LV                      4
Open LV                     4
Max PV                      16
Cur PV                      2
Act PV                      2
Max PE per PV               1016
VGDA                        4
PE Size (Mbytes)            4
Total PE                    374
Alloc PE                    12
Free PE                     362
Total PVG                   0

--- Logical volumes ---
LV Name                     /dev/vgbeaches/pebble_lv
LV Status                   available/syncd
LV Size (Mbytes)            12
Current LE                  3
Allocated PE                3
Used PV                     1

LV Name                     /dev/vgbeaches/pismo_lv
LV Status                   available/syncd
LV Size (Mbytes)            12
Current LE                  3
Allocated PE                3
Used PV                     1

LV Name                     /dev/vgbeaches/palm_lv
LV Status                   available/syncd
LV Size (Mbytes)            12
Current LE                  3
Allocated PE                3
Used PV                     1

LV Name                     /dev/vgbeaches/avila_lv
LV Status                   available/syncd
LV Size (Mbytes)            12
Current LE                  3
Allocated PE                3
Used PV                     1

--- Physical volumes ---
PV Name                     /dev/dsk/c0t4d0
PV Status                   available
Total PE                    250
Free PE                     238

PV Name                     /dev/dsk/c0t5d0
PV Status                   available
Total PE                    124
Free PE                     124
# ll /dev/vgbeaches
total 0
brw-r-----   1 root   sys    64 0x010001 Jan  6 15:40 pebble_lv
brw-r-----   1 root   sys    64 0x010002 Jan  6 15:40 pismo_lv
brw-r-----   1 root   sys    64 0x010003 Jan  6 15:40 palm_lv
crw-r--r--   1 root   sys    64 0x010000 Jan  6 15:38 group
crw-r-----   1 root   sys    64 0x010001 Jan  6 15:40 rpebble_lv
crw-r-----   1 root   sys    64 0x010002 Jan  6 15:40 rpismo_lv
crw-r-----   1 root   sys    64 0x010003 Jan  6 15:40 rpalm_lv
crw-r-----   1 root   sys    64 0x010004 Jan  6 15:40 ravila_lv
brw-r-----   1 root   sys    64 0x010004 Jan  6 15:40 avila_lv
# vgchange -a n /dev/vgbeaches
Volume group "/dev/vgbeaches" has been successfully changed.
# vgdisplay /dev/vgbeaches
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vgbeaches".
# vgexport -sv -m /tmp/vgbeaches.map /dev/vgbeaches
Beginning the export process on Volume Group "vgbeaches".
/dev/dsk/c0t4d0
/dev/dsk/c0t5d0
Volume group "vgbeaches" has been successfully removed.
# ll /dev/vgbeaches
/dev/vgbeaches not found
# cat /tmp/vgbeaches.map
VGID 7825fd6b372fd130
1 pebble_lv
2 pismo_lv
3 palm_lv
4 avila_lv
# ftp annette
Connected to annette.
220 annette FTP server (Version 1.7.212.1 Wed Jan 6 15:39:19 GMT 1999) ready.
Name (annette:root): root
331 Password required for root.
Password:
230 User root logged in.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> put /tmp/vgbeaches.map /tmp/vgbeaches.map
200 PORT command successful.
150 Opening BINARY mode data connection for /tmp/vgbeaches.map.
226 Transfer complete.
44 bytes sent in 0.00 seconds (3274.75 Kbytes/s)
ftp> bye
221 Goodbye.

(The remaining commands are executed on the target system, annette.)

# mkdir /dev/vgbeaches
# mknod /dev/vgbeaches/group c 64 0x090000
# cat /tmp/vgbeaches.map
VGID 7825fd6b372fd130
1 pebble_lv
2 pismo_lv
3 palm_lv
4 avila_lv
# vgimport -sv -m /tmp/vgbeaches.map /dev/vgbeaches
Beginning the import process on Volume Group "vgbeaches".
Logical volume "/dev/vg01/pebble_lv" has been successfully created with lv number 1.
Logical volume "/dev/vg01/pismo_lv" has been successfully created with lv number 2.
Logical volume "/dev/vg01/palm_lv" has been successfully created with lv number 3.
Logical volume "/dev/vg01/avila_lv" has been successfully created with lv number 4.
Volume group "/dev/vgbeaches" has been successfully created.
# ll /dev/vgbeaches
total 0
brw-r-----   1 root   sys    64 0x090002 Jan  6 15:43 pismo_lv
brw-r-----   1 root   sys    64 0x090003 Jan  6 15:43 palm_lv
crw-r--r--   1 root   sys    64 0x090000 Jan  6 15:42 group
brw-r-----   1 root   sys    64 0x090004 Jan  6 15:43 avila_lv
brw-r-----   1 root   sys    64 0x090001 Jan  6 15:43 pebble_lv
crw-r-----   1 root   sys    64 0x090002 Jan  6 15:43 rpismo_lv
crw-r-----   1 root   sys    64 0x090003 Jan  6 15:43 rpalm_lv
crw-r-----   1 root   sys    64 0x090004 Jan  6 15:43 ravila_lv
crw-r-----   1 root   sys    64 0x090001 Jan  6 15:43 rpebble_lv
# vgdisplay -v /dev/vgbeaches
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vgbeaches".
# vgchange -a y /dev/vgbeaches
Activated volume group
Volume group "/dev/vgbeaches" has been successfully changed.
# vgcfgbackup vgbeaches
Volume Group configuration for /dev/vgbeaches has been saved in /etc/lvmconf/vgbeaches.conf
# vgdisplay -v /dev/vgbeaches
--- Volume groups ---
VG Name                     /dev/vgbeaches
VG Write Access             read/write
VG Status                   available
Max LV                      255
Cur LV                      4
Open LV                     4
Max PV                      16
Cur PV                      2
Act PV                      2
Max PE per PV               1016
VGDA                        4
PE Size (Mbytes)            4
Total PE                    374
Alloc PE                    12
Free PE                     362
Total PVG                   0

--- Logical volumes ---
LV Name                     /dev/vgbeaches/pebble_lv
LV Status                   available/syncd
LV Size (Mbytes)            12
Current LE                  3
Allocated PE                3
Used PV                     1

LV Name                     /dev/vgbeaches/pismo_lv
LV Status                   available/syncd
LV Size (Mbytes)            12
Current LE                  3
Allocated PE                3
Used PV                     1

LV Name                     /dev/vgbeaches/palm_lv
LV Status                   available/syncd
LV Size (Mbytes)            12
Current LE                  3
Allocated PE                3
Used PV                     1

LV Name                     /dev/vgbeaches/avila_lv
LV Status                   available/syncd
LV Size (Mbytes)            12
Current LE                  3
Allocated PE                3
Used PV                     1

--- Physical volumes ---
PV Name                     /dev/dsk/c0t4d0
PV Status                   available
Total PE                    250
Free PE                     238

PV Name                     /dev/dsk/c0t5d0
PV Status                   available
Total PE                    124
Free PE                     124
# pvdisplay /dev/dsk/c0t4d0
--- Physical volumes ---
PV Name                     /dev/dsk/c0t4d0
VG Name                     /dev/vgbeaches
PV Status                   available
Allocatable                 yes
VGDA                        2
Cur LV                      4
PE Size (Mbytes)            4
Total PE                    250
Free PE                     238
Allocated PE                12
Stale PE                    0
IO Timeout                  default
# lvdisplay /dev/vgbeaches/pebble_lv
--- Logical volumes ---
LV Name                     /dev/vgbeaches/pebble_lv
VG Name                     /dev/vgbeaches
LV Permission               read/write
LV Status                   available/syncd
Mirror copies               0
Consistency Recovery        MWC
Schedule                    parallel
LV Size (Mbytes)            12
Current LE                  3
Allocated PE                3
Stripes                     0
Stripe Size (Kbytes)        0
Bad block                   on
Allocation                  strict
IO Timeout (Seconds)        default