When you run servers, building and extending filesystems is routine work; shrinking one is something you almost never do, because the shrink process can cause data loss. When space has to be reclaimed, it is usually the better choice to size up the actual data, add a new disk of that capacity, and move the data onto it.
- LVM groups several hard disks or partitions into one logical pool and lets you manage volumes flexibly on top of that group.
[root@test ~]# pvcreate /dev/sdf
  Writing physical volume data to disk "/dev/sdf"
  Physical volume "/dev/sdf" successfully created
[root@test ~]# pvdisplay -v
    Scanning for physical volume names
  --- Physical volume ---
  PV Name               /dev/sda
  VG Name               vg02
  PV Size               1.00 TiB / not usable 4.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              262143
  Free PE               0
  Allocated PE          262143
  PV UUID               MY0EIp-eQjq-5lEi-NdkL-Pjxj-yil4-ebnAjq

  "/dev/sdf" is a new physical volume of "500.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdf
  VG Name
  PV Size               500.00 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               EUz9Gl-fRpw-WZdt-czYI-7WET-IaBf-XGR4cc

[root@test ~]# vgextend vg02 /dev/sdf
  Volume group "vg02" successfully extended
[root@test ~]# vgdisplay -v
    Finding all volume groups
    Finding volume group "vg02"
  --- Volume group ---
  VG Name               vg02
  System ID
  Format                lvm2
  Metadata Areas        6
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                6
  Act PV                6
  VG Size               5.49 TiB
  PE Size               4.00 MiB
  Total PE              1438714
  Alloc PE / Size       1310715 / 5.00 TiB
  Free  PE / Size       127999 / 500.00 GiB
  VG UUID               zchT2V-6MbA-Vyy7-MscI-F1YQ-zzBf-aVSYWD

  --- Logical volume ---
  LV Path               /dev/vg02/lvol02
  LV Name               lvol02
  VG Name               vg02
  LV UUID               p9kNfv-nnzq-Lr2A-twzY-KgeK-k56Y-MfyYE2
  LV Write Access       read/write
  LV Creation host, time test, 2014-01-08 14:01:23 +0900
  LV Status             available
  # open                1
  LV Size               5.00 TiB
  Current LE            1310715
  Segments              5
  Allocation            inherit
  Read ahead sectors    auto
  - currently set to    256
  Block device          253:0

  --- Physical volumes ---
  PV Name               /dev/sda
  PV UUID               MY0EIp-eQjq-5lEi-NdkL-Pjxj-yil4-ebnAjq
  PV Status             allocatable
  Total PE / Free PE    262143 / 0

  PV Name               /dev/sdf
  PV UUID               EUz9Gl-fRpw-WZdt-czYI-7WET-IaBf-XGR4cc
  PV Status             allocatable
  Total PE / Free PE    127999 / 127999

[root@test ~]# lvextend -l 1438714 /dev/mapper/vg02-lvol02
  Extending logical volume lvol02 to 5.49 TiB
  Logical volume lvol02 successfully resized
[root@test ~]# df
Filesystem               1K-blocks       Used  Available Use% Mounted on
/dev/cciss/c0d0p2         20158332    6377272   12757060  34% /
tmpfs                      1928896         88    1928808   1% /dev/shm
/dev/cciss/c0d0p1           495844      33297     436947   8% /boot
/dev/mapper/vg02-lvol02 5284446632 4725770104  290242096  95% /ACRONIS
[root@test ~]# resize2fs /dev/mapper/vg02-lvol02    ## resize2fs or xfs_growfs depending on the filesystem; makes the new space recognized and usable
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/mapper/vg02-lvol02 is mounted on /ACRONIS; on-line resizing required
old desc_blocks = 320, new_desc_blocks = 352
Performing an on-line resize of /dev/mapper/vg02-lvol02 to 1473243136 (4k) blocks.
The filesystem on /dev/mapper/vg02-lvol02 is now 1473243136 blocks long.
[root@test ~]# df
Filesystem               1K-blocks       Used  Available Use% Mounted on
/dev/cciss/c0d0p2         20158332    6377272   12757060  34% /
tmpfs                      1928896         88    1928808   1% /dev/shm
/dev/cciss/c0d0p1           495844      33297     436947   8% /boot
/dev/mapper/vg02-lvol02 5800503720 4725767288  780095228  86% /ACRONIS
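The extent arithmetic in this transcript can be checked directly: with 4 MiB extents, the VG's 1438714 total extents (the value passed to `lvextend -l`) correspond exactly to the 4 KiB block count that resize2fs reports. A quick sketch of that arithmetic:

```python
# Values taken from the vgdisplay / resize2fs output above.
PE_MIB = 4                 # PE Size: 4.00 MiB
TOTAL_PE = 1_438_714       # Total PE after vgextend
BLOCK_KIB = 4              # ext3 filesystem block size: 4 KiB

# One 4 MiB extent holds 1024 filesystem blocks of 4 KiB each.
blocks_per_extent = PE_MIB * 1024 // BLOCK_KIB
fs_blocks = TOTAL_PE * blocks_per_extent
print(fs_blocks)           # 1473243136 -- the block count resize2fs reports

vg_size_tib = TOTAL_PE * PE_MIB / 1024 / 1024
print(f"{vg_size_tib:.2f} TiB")   # 5.49 TiB -- the VG Size vgdisplay reports
```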
Depending on the filesystem being shrunk, stop the service that uses it and unmount it before starting.
Shrink command history
service nfs stop
umount /ACRONIS
e2fsck -f /dev/mapper/vg02-lvol02
resize2fs /dev/mapper/vg02-lvol02 5T
lvreduce -L 5T /dev/mapper/vg02-lvol02
resize2fs /dev/mapper/vg02-lvol02 5100G
lvreduce -L 5100G /dev/mapper/vg02-lvol02
vgreduce vg02 /dev/sdg
vgreduce vg02 /dev/sdf
pvremove /dev/sdg
pvremove /dev/sdf
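Order matters in that history: the filesystem must already fit inside the LV's new size before `lvreduce` runs, which is why `resize2fs` comes first each time. A tiny sketch of the invariant (the helper function is ours for illustration, not an LVM command):

```python
def safe_to_reduce(fs_size_gib: float, new_lv_size_gib: float) -> bool:
    """An LV may only be reduced once the filesystem on it already fits
    inside the new LV size -- i.e. shrink the FS with resize2fs first."""
    return fs_size_gib <= new_lv_size_gib

# FS shrunk to 5 TiB, then LV reduced to 5 TiB: safe.
print(safe_to_reduce(5 * 1024, 5 * 1024))   # True
# Reducing the LV to 5100 GiB while the FS is still 5 TiB would cut it off.
print(safe_to_reduce(5 * 1024, 5100))       # False
```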
Shrink commands (full output)
[root@test /]# service nfs stop
Shutting down NFS daemon:                                  [  OK  ]
Shutting down NFS mountd:                                  [  OK  ]
Shutting down NFS quotas:                                  [  OK  ]
Shutting down NFS services:                                [  OK  ]
[root@test /]# umount /ACRONIS
[root@test /]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/cciss/c0d0p2      20G  6.1G   13G  34% /
tmpfs                 1.9G   88K  1.9G   1% /dev/shm
/dev/cciss/c0d0p1     485M   33M  427M   8% /boot
/dev/cciss/c0d0p5     5.8G  140M  5.4G   3% /home
[root@test /]# e2fsck -f /dev/mapper/vg02-lvol02
e2fsck 1.41.12 (17-May-2010)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information

/dev/mapper/vg02-lvol02: ***** FILE SYSTEM WAS MODIFIED *****
/dev/mapper/vg02-lvol02: 5647/499384320 files (0.9% non-contiguous), 1212784053/1997530112 blocks
[root@test /]# lvs
  LV     VG   Attr     LSize Pool Origin Data%  Move Log Copy%  Convert
  lvol02 vg02 -wi-a--- 7.44t
[root@test /]# pvs
  PV         VG   Fmt  Attr PSize    PFree
  /dev/sda   vg02 lvm2 a--  1024.00g    0
  /dev/sdb   vg02 lvm2 a--  1024.00g    0
  /dev/sdc   vg02 lvm2 a--  1024.00g    0
  /dev/sdd   vg02 lvm2 a--  1024.00g    0
  /dev/sde   vg02 lvm2 a--  1024.00g    0
  /dev/sdf   vg02 lvm2 a--   500.00g    0
  /dev/sdg   vg02 lvm2 a--     1.95t    0
[root@test /]# vgs
  VG   #PV #LV #SN Attr   VSize VFree
  vg02   7   1   0 wz--n- 7.44t    0
[root@test /]# blkid    # check the device list
[root@test /]# resize2fs /dev/mapper/vg02-lvol02 5T
resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/mapper/vg02-lvol02 to 1342177280 (4k) blocks.
The filesystem on /dev/mapper/vg02-lvol02 is now 1342177280 blocks long.
[root@test /]# lvs
  LV     VG   Attr     LSize Pool Origin Data%  Move Log Copy%  Convert
  lvol02 vg02 -wi-a--- 7.44t
[root@test /]# lvreduce -L 5T /dev/mapper/vg02-lvol02
  WARNING: Reducing active logical volume to 5.00 TiB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce lvol02? [y/n]: y
  Reducing logical volume lvol02 to 5.00 TiB
  Logical volume lvol02 successfully resized
[root@test /]# lvs
  LV     VG   Attr     LSize Pool Origin Data%  Move Log Copy%  Convert
  lvol02 vg02 -wi-a--- 5.00t
[root@test /]# vgs
  VG   #PV #LV #SN Attr   VSize VFree
  vg02   7   1   0 wz--n- 7.44t 2.44t
[root@test /]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/cciss/c0d0p2      20G  6.1G   13G  34% /
tmpfs                 1.9G   88K  1.9G   1% /dev/shm
/dev/cciss/c0d0p1     485M   33M  427M   8% /boot
/dev/cciss/c0d0p5     5.8G  140M  5.4G   3% /home
[root@test /]# vgs
  VG   #PV #LV #SN Attr   VSize VFree
  vg02   7   1   0 wz--n- 7.44t 2.44t
[root@test /]# lvs
  LV     VG   Attr     LSize Pool Origin Data%  Move Log Copy%  Convert
  lvol02 vg02 -wi-a--- 5.00t
[root@test /]# pvs
  PV         VG   Fmt  Attr PSize    PFree
  /dev/sda   vg02 lvm2 a--  1024.00g       0
  /dev/sdb   vg02 lvm2 a--  1024.00g       0
  /dev/sdc   vg02 lvm2 a--  1024.00g       0
  /dev/sdd   vg02 lvm2 a--  1024.00g       0
  /dev/sde   vg02 lvm2 a--  1024.00g       0
  /dev/sdf   vg02 lvm2 a--   500.00g 499.98g
  /dev/sdg   vg02 lvm2 a--     1.95t   1.95t
[root@test /]# resize2fs /dev/mapper/vg02-lvol02 5100G
resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/mapper/vg02-lvol02 to 1336934400 (4k) blocks.
The filesystem on /dev/mapper/vg02-lvol02 is now 1336934400 blocks long.
[root@test /]# lvreduce -L 5100G /dev/mapper/vg02-lvol02
  WARNING: Reducing active logical volume to 4.98 TiB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce lvol02? [y/n]: y
  Reducing logical volume lvol02 to 4.98 TiB
  Logical volume lvol02 successfully resized
[root@test /]# lvs
  LV     VG   Attr     LSize Pool Origin Data%  Move Log Copy%  Convert
  lvol02 vg02 -wi-a--- 4.98t
[root@test /]# pvs
  PV         VG   Fmt  Attr PSize    PFree
  /dev/sda   vg02 lvm2 a--  1024.00g       0
  /dev/sdb   vg02 lvm2 a--  1024.00g       0
  /dev/sdc   vg02 lvm2 a--  1024.00g       0
  /dev/sdd   vg02 lvm2 a--  1024.00g       0
  /dev/sde   vg02 lvm2 a--  1024.00g  19.98g
  /dev/sdf   vg02 lvm2 a--   500.00g 500.00g
  /dev/sdg   vg02 lvm2 a--     1.95t   1.95t
[root@test /]# vgs
  VG   #PV #LV #SN Attr   VSize VFree
  vg02   7   1   0 wz--n- 7.44t 2.46t
[root@test /]# vgreduce vg02 /dev/sdg
  Removed "/dev/sdg" from volume group "vg02"
[root@test /]# pvs
  PV         VG   Fmt  Attr PSize    PFree
  /dev/sda   vg02 lvm2 a--  1024.00g       0
  /dev/sdb   vg02 lvm2 a--  1024.00g       0
  /dev/sdc   vg02 lvm2 a--  1024.00g       0
  /dev/sdd   vg02 lvm2 a--  1024.00g       0
  /dev/sde   vg02 lvm2 a--  1024.00g  19.98g
  /dev/sdf   vg02 lvm2 a--   500.00g 500.00g
  /dev/sdg        lvm2 a--     1.95t   1.95t
[root@test /]# vgreduce vg02 /dev/sdf
  Removed "/dev/sdf" from volume group "vg02"
[root@test /]# pvs
  PV         VG   Fmt  Attr PSize    PFree
  /dev/sda   vg02 lvm2 a--  1024.00g       0
  /dev/sdb   vg02 lvm2 a--  1024.00g       0
  /dev/sdc   vg02 lvm2 a--  1024.00g       0
  /dev/sdd   vg02 lvm2 a--  1024.00g       0
  /dev/sde   vg02 lvm2 a--  1024.00g  19.98g
  /dev/sdf        lvm2 a--   500.00g 500.00g
  /dev/sdg        lvm2 a--     1.95t   1.95t
[root@test /]# pvremove /dev/sdg
  Labels on physical volume "/dev/sdg" successfully wiped
[root@test /]# pvremove /dev/sdf
  Labels on physical volume "/dev/sdf" successfully wiped
[root@test /]# lvs
  LV     VG   Attr     LSize Pool Origin Data%  Move Log Copy%  Convert
  lvol02 vg02 -wi-a--- 4.98t
[root@test /]# vgdisplay
  --- Volume group ---
  VG Name               vg02
  System ID
  Format                lvm2
  Metadata Areas        5
  Metadata Sequence No  10
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                5
  Act PV                5
  VG Size               5.00 TiB
  PE Size               4.00 MiB
  Total PE              1310715
  Alloc PE / Size       1305600 / 4.98 TiB
  Free  PE / Size       5115 / 19.98 GiB
  VG UUID               zchT2V-6MbA-Vyy7-MscI-F1YQ-zzBf-aVSYWD
[root@test /]# mount /dev/mapper/vg02-lvol02 /ACRONIS
[root@test /]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/cciss/c0d0p2         20G  6.1G   13G  34% /
tmpfs                    1.9G   88K  1.9G   1% /dev/shm
/dev/cciss/c0d0p1        485M   33M  427M   8% /boot
/dev/cciss/c0d0p5        5.8G  140M  5.4G   3% /home
/dev/mapper/vg02-lvol02  5.0T  4.5T  259G  95% /ACRONIS
[root@test /]# service nfs start
WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Stopping RPC idmapd:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
[root@test /]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/cciss/c0d0p2         20G  6.1G   13G  34% /
tmpfs                    1.9G   88K  1.9G   1% /dev/shm
/dev/cciss/c0d0p1        485M   33M  427M   8% /boot
/dev/cciss/c0d0p5        5.8G  140M  5.4G   3% /home
/dev/mapper/vg02-lvol02  5.0T  4.5T  259G  95% /ACRONIS
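The block counts resize2fs printed during the shrink can be sanity-checked against the requested sizes: the filesystem uses 4 KiB blocks, so 5 TiB and 5100 GiB map exactly to the figures in the output. A short check of the arithmetic:

```python
BLOCK = 4096   # ext3 block size in bytes, the "(4k)" in the resize2fs output

def blocks_for(size_bytes: int) -> int:
    """Number of 4 KiB filesystem blocks in a given size."""
    return size_bytes // BLOCK

print(blocks_for(5 * 1024**4))      # 1342177280 -- the 5T resize
print(blocks_for(5100 * 1024**3))   # 1336934400 -- the 5100G resize
```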
Add the device → partition it → create a Physical Volume → create a Volume Group → create a Logical Volume → create the filesystem → mount
Physical Volume: a partition such as /dev/sda1 or /dev/sdb1, or a whole physical disk device
Create a PV: pvcreate
Remove a PV: pvremove
Show PVs: pvdisplay
Volume Group: a group of Physical Volumes
Create a VG: vgcreate
Remove a VG: vgremove
Show VGs: vgdisplay
Extend a VG: vgextend (add a PV)
Reduce a VG: vgreduce (remove a PV)
Change VG attributes: vgchange
Rename a VG: vgrename
Logical Volume: a logical partition carved out of a Volume Group
Create an LV: lvcreate
Remove an LV: lvremove
Show LVs: lvdisplay
Extend an LV: lvextend
Reduce an LV: lvreduce
Rename an LV: lvrename
Test server partition layout
Create a volume group called VolGroup01 from /dev/sdd and /dev/sde (4000 GB each), allocate 6000 GB to /backup, and use the remaining 2000 GB for /log.
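Note that LVM's size suffixes are binary, so the `-L 6000GB` used below actually means 6000 GiB; that is why `lvscan` later reports the volume as 5.86 TB. A quick conversion sketch:

```python
def gib_to_tib(gib: float) -> float:
    """LVM size suffixes are binary: 6000GB on the command line is 6000 GiB."""
    return gib / 1024

print(round(gib_to_tib(6000), 2))      # 5.86 -> lvscan's size for /backup
print(round(gib_to_tib(1450.99), 2))   # 1.42 -> lvscan's size for /log
```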
Creating an LVM-type partition
[root@ServerA]# fdisk /dev/sdd
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-486333, default 1): <Enter>
Last cylinder or +size or +sizeM or +sizeK (1-267349, default 267349): <Enter>
Command (m for help): t
Hex code (type L to list codes): 8e // change the partition type to LVM
Command (m for help): w
Calling ioctl() to re-read partition table.
Syncing disks.
Do the same for /dev/sde.
Creating the partition and setting its flag with parted
[root@ServerA]# parted /dev/sdd
(parted) mklabel gpt
(parted) mkpart primary 0 -1 // "-1" as the end value means "to the end of the disk"
(parted) set 1 lvm on
(parted) p
Model: DELL PERC H710P (scsi)
Disk /dev/sdd: 4000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number Start End Size File system Name Flags
1 17.4kB 4000GB 4000GB primary lvm
(parted) quit
Information: Don't forget to update /etc/fstab, if necessary.
Creating a physical volume (PV)
[root@ServerA]# pvcreate /dev/sdd1
Writing physical volume data to disk "/dev/sdd1"
Physical volume "/dev/sdd1" successfully created
Removing a physical volume
[root@ServerA]# pvremove /dev/sde1
Labels on physical volume "/dev/sde1" successfully wiped
Checking physical volumes (PV)
[root@ServerA]# pvdisplay
"/dev/sdd1" is a new physical volume of "2.00 TB"
--- NEW Physical volume ---
PV Name /dev/sdd1
VG Name
PV Size 2.00 TB
Allocatable NO
PE Size (KByte) 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID R6iprm-IGbx-cqde-rk70-37SL-p0Rv-9MjzVA
Creating a volume group (VG) and adding PVs to it
vgcreate [VG name] [device] [device]
[root@ServerA]# vgcreate VolGroup01 /dev/sdd1 /dev/sde1
Volume group "VolGroup01" successfully created
Activating a volume group
vgchange -a y [VG name]
[root@ServerA]# vgchange -a y VolGroup01
1 logical volume(s) in volume group "VolGroup01" now active
Deactivating a volume group
vgchange -a n [VG name]
[root@ServerA]# vgchange -a n VolGroup01
0 logical volume(s) in volume group "VolGroup01" now active
Adding a PV to an existing volume group
vgextend [VG name] [device]
[root@ServerA]# vgextend VolGroup01 /dev/sde1
Volume group "VolGroup01" successfully extended
Removing a PV from an existing volume group
vgreduce [VG name] [device]
[root@ServerA]# vgreduce VolGroup01 /dev/sde1
Removed "/dev/sde1" from volume group "VolGroup01"
Renaming a volume group
vgrename [current VG name] [new VG name]
[root@ServerA]# vgrename VolGroup01 Vol01
Volume group "VolGroup01" successfully renamed to "Vol01"
Removing a volume group (the VG must contain no LVs and be deactivated)
vgremove /dev/[VG name]
[root@ServerA]# vgremove /dev/VolGroup01
Volume group "VolGroup01" successfully removed
Creating a logical volume (LV) as a linear (concatenated) volume; verify with Total PE and Free PE / Size
lvcreate -L [size] -n [LV name] [VG name]
[root@ServerA]# lvcreate -L 6000GB -n backup VolGroup01
Logical volume "backup" created
[root@ServerA]# lvcreate -L 1450.99GB -n log VolGroup01
Rounding up size to full physical extent 1.42 TB
Logical volume "log" created
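lvcreate always rounds a requested size up to a whole number of physical extents, which is what the "Rounding up size to full physical extent" message reflects. A sketch of that rounding with the default 4 MiB extent:

```python
import math

PE_MIB = 4   # default physical extent size

def round_up_to_extent(size_mib: float) -> int:
    """lvcreate rounds any requested size up to a whole number of extents."""
    return math.ceil(size_mib / PE_MIB) * PE_MIB

requested = 1450.99 * 1024                  # 1450.99 GiB expressed in MiB
print(round_up_to_extent(requested))        # 1485816 MiB -- a whole extent count
print(round_up_to_extent(500))              # 500 -- already a multiple of 4 MiB
```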
Creating a striped LV (RAID-0; uses several disks in parallel)
lvcreate -i [number of stripes] -L [size] -n [LV name] [VG name]
[root@ServerA]# lvcreate -i 2 -L 500M -n test VolGroup01
Using default stripesize 64.00 KB
Rounding size (125 extents) up to stripe boundary size (126 extents)
Logical volume "test" created
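For a striped LV, the extent count must additionally be a multiple of the stripe count, which is why the 500M request (125 extents) is rounded up to 126 extents in the message above. A sketch of the rule:

```python
import math

PE_MIB = 4   # default physical extent size

def stripe_extents(size_mib: int, stripes: int) -> int:
    """Round the extent count up to a multiple of the stripe count,
    so every stripe gets the same number of extents."""
    extents = math.ceil(size_mib / PE_MIB)
    return math.ceil(extents / stripes) * stripes

print(stripe_extents(500, 2))   # 126 -- the "126 extents" lvcreate reports
```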
Creating a mirrored LV (RAID-1)
lvcreate -m [number of mirrors] -L [size] -n [LV name] [VG name]
[root@ServerA]# lvcreate -m 1 -L 1000M -n test2 VolGroup01
Logical volume "test2" created
Extending a logical volume (grow the filesystem afterwards, e.g. with resize2fs, so the new space becomes usable)
lvextend -L [size to add] /dev/[VG name]/[LV name]
[root@ServerA]# lvextend -L +50M /dev/VolGroup01/test
Rounding up size to full physical extent 52.00 MB
Using stripesize of last segment 64.00 KB
Rounding size (139 extents) down to stripe boundary size for segment (138 extents)
Extending logical volume test to 552.00 MB
Logical volume test successfully resized
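The two rounding messages combine like this: the +50M request is first rounded up to whole extents (13 extents = 52 MB), then the new total is rounded down to the stripe boundary, giving the 552 MB result. The arithmetic, step by step:

```python
import math

PE_MIB = 4     # physical extent size
STRIPES = 2    # the LV "test" above was created with -i 2 (126 extents = 504 MB)

# Step 1: +50M is rounded UP to whole extents: 13 extents = 52 MB.
added = math.ceil(50 / PE_MIB)
# Step 2: the new total (126 + 13 = 139 extents) is rounded DOWN
# to the stripe boundary: 138 extents.
total = (126 + added) // STRIPES * STRIPES
print(total * PE_MIB)   # 552 -- the "552.00 MB" lvextend reports
```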
Shrinking a logical volume (unmount and shrink the filesystem first, or the data will be destroyed)
lvreduce -L [size to remove] /dev/[VG name]/[LV name]
[root@ServerA]# lvreduce -L -100M /dev/VolGroup01/test
Rounding size (63 extents) up to stripe boundary size for segment (64 extents)
WARNING: Reducing active logical volume to 256.00 MB
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce test? [y/n]: y
Reducing logical volume test to 256.00 MB
Logical volume test successfully resized
Removing a logical volume
lvremove /dev/[VG name]/[LV name]
[root@ServerA]# lvremove /dev/VolGroup01/backup
Do you really want to remove active logical volume backup? [y/n]: y
Logical volume "backup" successfully removed
Checking logical volumes (LV)
[root@ServerA]# lvscan
ACTIVE '/dev/VolGroup01/backup' [5.86 TB] inherit
ACTIVE '/dev/VolGroup01/log' [1.42 TB] inherit
Creating the filesystem
mkfs.ext3 /dev/[VG name]/[LV name]
[root@ServerA]# mkfs.ext3 /dev/VolGroup01/backup
<output omitted>
Do the same for the /dev/VolGroup01/log partition.
Creating the mount directory and mounting
[root@ServerA]# mkdir /backup
[root@ServerA]# mount /dev/VolGroup01/backup /backup
Do the same for the /dev/VolGroup01/log partition.
Registering in /etc/fstab
[root@ServerA]# vi /etc/fstab
/dev/VolGroup01/backup /backup ext3 defaults,noatime 1 2
/dev/VolGroup01/log /log ext3 defaults,noatime 1 2
The procedure for removing an existing disk while adding a new one
Here /dev/sdd1 belongs to VolGroup01; we remove /dev/sdd1 and put /dev/sde in its place.
Create a PV on /dev/sde1 (not possible if it is smaller than the capacity it replaces)
[root@ServerA]# pvcreate /dev/sde1
Add /dev/sde1 to VolGroup01
[root@ServerA]# vgextend VolGroup01 /dev/sde1
No physical volume label read from /dev/sde1
Writing physical volume data to disk "/dev/sde1"
Physical volume "/dev/sde1" successfully created
Volume group "VolGroup01" successfully extended
Move the physical extents on /dev/sdd1 over to /dev/sde1
[root@ServerA]# pvmove /dev/sdd1 /dev/sde1
/dev/sdd1: Moved: 0.0%
/dev/sdd1: Moved: 23.9%
/dev/sdd1: Moved: 47.9%
/dev/sdd1: Moved: 72.6%
/dev/sdd1: Moved: 96.9%
/dev/sdd1: Moved: 100.0%
Once everything has moved successfully, remove /dev/sdd1 from VolGroup01
[root@ServerA]# vgreduce VolGroup01 /dev/sdd1
Removed "/dev/sdd1" from volume group "VolGroup01"
For directories with heavy data churn such as /var, create a snapshot LV to freeze the data before backing up, so that changes made mid-backup cannot corrupt the backup.
Creating a snapshot LV
[root@ServerA]# lvcreate -s -L 5g -n testbackup /dev/VolGroup01/test
Logical volume "testbackup" created
The command above creates /dev/VolGroup01/testbackup, a snapshot LV of /dev/VolGroup01/test.
The -s option makes the new LV a snapshot.
The -L option sets the snapshot's maximum size; making it the same size as the origin LV is recommended.
In practice a snapshot rarely needs that much space: it only copies origin blocks into the snapshot as they are about to be overwritten (copy-on-write), so it holds just the changed data.
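A rough way to size the snapshot: it only needs room for the origin's writes during the backup window, times a safety factor. The rates below are made-up assumptions for illustration:

```python
# All three figures below are hypothetical assumptions, not measured values.
change_rate_mib_per_min = 20   # write rate on the origin LV during backup
backup_minutes = 120           # how long the backup runs
headroom = 2.0                 # safety factor: a snapshot that fills up is invalidated

needed_gib = change_rate_mib_per_min * backup_minutes * headroom / 1024
print(round(needed_gib, 1))    # 4.7 -> the -L 5g used above would be enough
```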
Mount the snapshot LV and run the backup
[root@ServerA]# mount /dev/VolGroup01/testbackup /mnt
When the backup is done, unmount and remove the snapshot LV
[root@ServerA]# umount /mnt
[root@ServerA]# lvremove /dev/VolGroup01/testbackup
Do you really want to remove active logical volume testbackup? [y/n]: y
Logical volume "testbackup" successfully removed