@gy-ban
2017-08-06T08:37:18.000000Z
LVM
LVM stands for Logical Volume Manager; its main purpose is flexible growing and shrinking of filesystems. Let's go through how this is done, step by step.
First, we need to be clear on a few LVM concepts:
Physical Volume, PV
A real partition first has its system ID changed to 8e (the LVM type code), and is then converted by the pvcreate command into LVM's bottom layer, the physical volume (PV); only after that can the PVs be put to use.
Volume Group, VG
A VG is a "big disk" made up of one or more PVs. Under LVM1 a VG can contain at most 65534 PEs, so the PE size chosen when creating the VG determines its maximum capacity (lvm2 has lifted this limit).
Physical Extent, PE
By default LVM uses 4 MB PEs, and (under LVM1) a VG can hold at most 65534 of them, so a default VG tops out at 4M * 65534 / (1024M/G) ≈ 256G. The PE is LVM's smallest unit of storage; in other words, file data is ultimately written PE by PE. Put simply, the PE is the LVM counterpart of a filesystem's block size.
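A quick sanity check on that 256G figure (plain arithmetic, nothing LVM-specific):

```shell
# Classic VG ceiling: 65534 PEs of 4 MiB each
pe_size_mib=4
max_pe=65534
total_mib=$((pe_size_mib * max_pe))
echo "${total_mib} MiB"                                        # 262136 MiB
awk -v m="$total_mib" 'BEGIN { printf "%.2f GiB\n", m/1024 }'  # 255.99 GiB, i.e. ~256 GB
```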
Logical Volume, LV
Finally, the VG is carved up into LVs, and the LV is what actually gets formatted and used. Since the PE is LVM's smallest storage unit, an LV's size is determined by the number of PEs inside it. To make LVM easier to manage, an LV's device name usually takes the form /dev/vgname/lvname.
tips:
The reason LVM can flexibly resize a filesystem is that it works by moving PEs around: migrating PEs out of an LV onto other devices shrinks the LV, while adding PEs from other devices grows it. The relationship between VG, LV and PE looks roughly like the figure below:
As the figure shows, the PEs inside the VG are handed out to the LVs (the dashed boxes). To grow the VG later, just add more PVs; and most importantly, to grow an LV, assign it PEs that the VG is not yet using.
Once the PVs, VG and LVs are planned, mkfs turns an LV into a usable filesystem. The whole flow, from the bottom layer up to the final result, is shown below:
With that, we can mount the LV and use it like any other device.
tips
You might well wonder: when data is written to an LV, how does it actually end up on the disks?
That depends on the write policy, of which there are two: linear mode, which fills one PV before moving on to the next, and striped mode, which interleaves writes across the PVs much like RAID 0.
Fundamentally, LVM exists to provide a filesystem whose capacity can be adjusted flexibly, not to build a high-performance disk. So use LVM for its flexible partition management rather than for speed; accordingly, its default write mode is linear. Keep in mind that in striped mode, if any one partition fails, all the data is gone.
For that reason striped mode is rarely a good fit here. If performance and redundancy are what you need, just use RAID directly; LVM isn't required for that.
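To make the two policies concrete, here is a toy mapping (purely illustrative, not how LVM stores its metadata) of six logical extents onto two PVs that hold three extents each:

```shell
# Linear mode fills PV0 completely before touching PV1;
# striped mode alternates extents between the PVs.
num_pvs=2
per_pv=3
for le in 0 1 2 3 4 5; do
  echo "LE${le} -> linear: PV$((le / per_pv)), striped: PV$((le % num_pvs))"
done
```

With linear placement, LE0-LE2 land on PV0 and LE3-LE5 on PV1; with striping they alternate, so losing one PV takes out part of every file.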
First, we create three partitions with fdisk. I won't walk through every step; the one thing to watch is that each partition's system ID should be set to 8e. It actually still works without 8e, but some LVM detection commands may then fail to recognize the partition.
[root@gy-vm03 ~]# fdisk /dev/sdb
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-2610, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-2610, default 2610): 800
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)
Command (m for help): p
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0c6b04a7
Device Boot Start End Blocks Id System
/dev/sdb1 1 800 6425968+ 8e Linux LVM
PV-related commands
[root@gy-vm03 ~]# pvscan
No matching physical volumes found
[root@gy-vm03 ~]# pvcreate /dev/sdb{1,2,3}
Physical volume "/dev/sdb1" successfully created
Physical volume "/dev/sdb2" successfully created
Physical volume "/dev/sdb3" successfully created
[root@gy-vm03 ~]# pvscan
PV /dev/sdb1 lvm2 [6.13 GiB]
PV /dev/sdb2 lvm2 [6.13 GiB]
PV /dev/sdb3 lvm2 [6.13 GiB]
Total: 3 [18.38 GiB] / in use: 0 [0 ] / in no VG: 3 [18.38 GiB]
[root@gy-vm03 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sdb1 lvm2 --- 6.13g 6.13g
/dev/sdb2 lvm2 --- 6.13g 6.13g
/dev/sdb3 lvm2 --- 6.13g 6.13g
[root@gy-vm03 ~]# pvdisplay
"/dev/sdb1" is a new physical volume of "6.13 GiB"
--- NEW Physical volume ---
PV Name /dev/sdb1
VG Name
PV Size 6.13 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID G6mlYu-mL9o-JSwU-NRft-ZLQu-Sgpv-m0mczx
"/dev/sdb2" is a new physical volume of "6.13 GiB"
--- NEW Physical volume ---
PV Name /dev/sdb2
VG Name
PV Size 6.13 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID v2PNzU-aFwG-RUfV-N3Oo-CVyZ-6dFn-ZVWypd
"/dev/sdb3" is a new physical volume of "6.13 GiB"
--- NEW Physical volume ---
PV Name /dev/sdb3
VG Name
PV Size 6.13 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID Y3ou2R-zugI-dPLJ-nZzk-EgQe-3Fwt-jHm8DL
With that, the three PVs are created.
VG-related commands:
Unlike a PV, which takes its name from its partition, a VG gets a name you choose yourself.
vgcreate [-s N[mgt]] VGname PVname
-s : the PE size, in units of m, g or t (case-insensitive)
[root@gy-vm03 ~]# vgcreate -s 16M ggyy /dev/sdb{1,2}
Volume group "ggyy" successfully created
[root@gy-vm03 ~]# vgscan
Reading all physical volumes. This may take a while...
Found volume group "ggyy" using metadata type lvm2
[root@gy-vm03 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sdb1 ggyy lvm2 a-- 6.12g 6.12g
/dev/sdb2 ggyy lvm2 a-- 6.12g 6.12g
/dev/sdb3 lvm2 --- 6.13g 6.13g
[root@gy-vm03 ~]# vgdisplay
--- Volume group ---
VG Name ggyy
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 2
Act PV 2
VG Size 12.25 GiB
PE Size 16.00 MiB
Total PE 784
Alloc PE / Size 0 / 0
Free PE / Size 784 / 12.25 GiB
VG UUID floxh7-Hexi-7WHJ-ZMwI-Ej0l-RFDy-V32p1u
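The vgdisplay numbers are internally consistent: 784 extents of 16 MiB each account for the full VG size:

```shell
total_pe=784
pe_mib=16
echo "$((total_pe * pe_mib)) MiB"                                       # 12544 MiB
awk -v m="$((784 * 16))" 'BEGIN { printf "%.2f GiB\n", m/1024 }'        # 12.25 GiB
```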
Here I created a VG named ggyy using only the sdb1 and sdb2 partitions. Next, let's add sdb3 to it and then remove sdb2.
[root@gy-vm03 ~]# vgextend ggyy /dev/sdb3
Volume group "ggyy" successfully extended
[root@gy-vm03 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
ggyy 3 0 0 wz--n- 18.38g 18.38g
[root@gy-vm03 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sdb1 ggyy lvm2 a-- 6.12g 6.12g
/dev/sdb2 ggyy lvm2 a-- 6.12g 6.12g
/dev/sdb3 ggyy lvm2 a-- 6.12g 6.12g
[root@gy-vm03 ~]# vgdisplay
--- Volume group ---
VG Name ggyy
System ID
Format lvm2
Metadata Areas 3
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 3
Act PV 3
VG Size 18.38 GiB
PE Size 16.00 MiB
Total PE 1176
Alloc PE / Size 0 / 0
Free PE / Size 1176 / 18.38 GiB
VG UUID floxh7-Hexi-7WHJ-ZMwI-Ej0l-RFDy-V32p1u
[root@gy-vm03 ~]# vgreduce ggyy /dev/sdb2
Removed "/dev/sdb2" from volume group "ggyy"
[root@gy-vm03 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
ggyy 2 0 0 wz--n- 12.25g 12.25g
[root@gy-vm03 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sdb1 ggyy lvm2 a-- 6.12g 6.12g
/dev/sdb2 lvm2 --- 6.13g 6.13g
/dev/sdb3 ggyy lvm2 a-- 6.12g 6.12g
[root@gy-vm03 ~]# vgdisplay
--- Volume group ---
VG Name ggyy
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 2
Act PV 2
VG Size 12.25 GiB
PE Size 16.00 MiB
Total PE 784
Alloc PE / Size 0 / 0
Free PE / Size 784 / 12.25 GiB
VG UUID floxh7-Hexi-7WHJ-ZMwI-Ej0l-RFDy-V32p1u
LV-related commands
lvcreate parameters:
lvcreate [-L N[mgt]] [-n LVname] VGname
-L : capacity, in units such as M, G, T. Note that the minimum allocation unit is one PE,
so the amount must be a multiple of the PE size; if it isn't, the system rounds to the nearest capacity it can provide.
-l : a count of PEs rather than a capacity; you have to work out the PE count yourself.
-n : the name of the LV
vgdisplay told us the VG has 784 PEs in total, so let's start by giving 200 of them to a volume named mysql.
[root@gy-vm03 ~]# lvcreate -l 200 -n mysql ggyy
Logical volume "mysql" created
[root@gy-vm03 ~]# lvdisplay
--- Logical volume ---
LV Path /dev/ggyy/mysql
LV Name mysql
VG Name ggyy
LV UUID ngBzdl-dzGy-htsf-qfcW-US6P-4PGB-XhPew5
LV Write Access read/write
LV Creation host, time gy-vm03, 2017-08-06 15:36:09 +0800
LV Status available
# open 0
LV Size 3.12 GiB
Current LE 200
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
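The LV Size follows directly from the extent count: with this VG's 16 MiB PE, 200 extents are 3200 MiB, which lvdisplay shows rounded as 3.12 GiB:

```shell
extents=200
pe_mib=16
echo "$((extents * pe_mib)) MiB"                                        # 3200 MiB
awk -v m="$((200 * 16))" 'BEGIN { printf "%.3f GiB\n", m/1024 }'        # 3.125 GiB
```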
Next, let's create a 5G LV.
[root@gy-vm03 ~]# vgdisplay
--- Volume group ---
VG Name ggyy
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 2
Act PV 2
VG Size 12.25 GiB
PE Size 16.00 MiB
Total PE 784
Alloc PE / Size 200 / 3.12 GiB
Free PE / Size 584 / 9.12 GiB
VG UUID floxh7-Hexi-7WHJ-ZMwI-Ej0l-RFDy-V32p1u
[root@gy-vm03 ~]# lvcreate -L 5G -n mongo ggyy
Logical volume "mongo" created
[root@gy-vm03 ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
mongo ggyy -wi-a----- 5.00g
mysql ggyy -wi-a----- 3.12g
[root@gy-vm03 ~]# lvdisplay
--- Logical volume ---
LV Path /dev/ggyy/mysql
LV Name mysql
VG Name ggyy
LV UUID ngBzdl-dzGy-htsf-qfcW-US6P-4PGB-XhPew5
LV Write Access read/write
LV Creation host, time gy-vm03, 2017-08-06 15:36:09 +0800
LV Status available
# open 0
LV Size 3.12 GiB
Current LE 200
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
--- Logical volume ---
LV Path /dev/ggyy/mongo
LV Name mongo
VG Name ggyy
LV UUID 63E9Qe-LnbQ-fdUf-ifnb-x9Rn-TqXl-sNjdiS
LV Write Access read/write
LV Creation host, time gy-vm03, 2017-08-06 15:38:50 +0800
LV Status available
# open 0
LV Size 5.00 GiB
Current LE 320
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1
[root@gy-vm03 ~]# mkfs.ext4 /dev/ggyy/mysql
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
204800 inodes, 819200 blocks
40960 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=838860800
25 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 23 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@gy-vm03 ~]# mkfs.ext4 /dev/ggyy/mongo
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
327680 inodes, 1310720 blocks
65536 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1342177280
40 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 23 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@gy-vm03 ~]# mkdir /mysql
[root@gy-vm03 ~]# mkdir /mongo
[root@gy-vm03 ~]# mount /dev/ggyy/mysql /mysql
[root@gy-vm03 ~]# mount /dev/ggyy/mongo /mongo/
[root@gy-vm03 ~]# df -ah
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 18G 2.9G 14G 18% /
proc 0 0 0 - /proc
sysfs 0 0 0 - /sys
devpts 0 0 0 - /dev/pts
tmpfs 491M 0 491M 0% /dev/shm
/dev/sda1 283M 28M 240M 11% /boot
none 0 0 0 - /proc/sys/fs/binfmt_misc
vmware-vmblock 0 0 0 - /var/run/vmblock-fuse
/dev/mapper/ggyy-mysql
3.1G 4.7M 2.9G 1% /mysql
/dev/mapper/ggyy-mongo
4.8G 10M 4.6G 1% /mongo
If the mysql volume runs out of space at this point, we can grow it:
[root@gy-vm03 ~]# lvresize -L +2G /dev/ggyy/mysql
Size of logical volume ggyy/mysql changed from 3.12 GiB (200 extents) to 5.12 GiB (328 extents).
Logical volume mysql successfully resized
[root@gy-vm03 ~]# lvdisplay
--- Logical volume ---
LV Path /dev/ggyy/mysql
LV Name mysql
VG Name ggyy
LV UUID ngBzdl-dzGy-htsf-qfcW-US6P-4PGB-XhPew5
LV Write Access read/write
LV Creation host, time gy-vm03, 2017-08-06 15:36:09 +0800
LV Status available
# open 1
LV Size 5.12 GiB
Current LE 328
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
--- Logical volume ---
LV Path /dev/ggyy/mongo
LV Name mongo
VG Name ggyy
LV UUID 63E9Qe-LnbQ-fdUf-ifnb-x9Rn-TqXl-sNjdiS
LV Write Access read/write
LV Creation host, time gy-vm03, 2017-08-06 15:38:50 +0800
LV Status available
# open 1
LV Size 5.00 GiB
Current LE 320
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1
[root@gy-vm03 ~]# df -ah
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 18G 2.9G 14G 18% /
proc 0 0 0 - /proc
sysfs 0 0 0 - /sys
devpts 0 0 0 - /dev/pts
tmpfs 491M 0 491M 0% /dev/shm
/dev/sda1 283M 28M 240M 11% /boot
none 0 0 0 - /proc/sys/fs/binfmt_misc
vmware-vmblock 0 0 0 - /var/run/vmblock-fuse
/dev/mapper/ggyy-mysql
3.1G 4.7M 2.9G 1% /mysql
/dev/mapper/ggyy-mongo
4.8G 10M 4.6G 1% /mongo
With lvresize the LV was grown successfully, and LVM can do this online; there is no need to umount anything. The filesystem inside, however, has not grown with it, so we run resize2fs (this applies to ext2/3/4; other filesystems have their own grow tools).
resize2fs [-f] [device] [size]
Options and arguments:
-f : force the resize
[device] : the device file to operate on
[size] : optional. If given, it must include a unit such as M or G; if omitted, resize2fs uses the whole partition's capacity.
[root@gy-vm03 ~]# resize2fs /dev/ggyy/mysql
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/ggyy/mysql is mounted on /mysql; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/ggyy/mysql to 1343488 (4k) blocks.
The filesystem on /dev/ggyy/mysql is now 1343488 blocks long.
[root@gy-vm03 ~]# df -ah
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 18G 2.9G 14G 18% /
proc 0 0 0 - /proc
sysfs 0 0 0 - /sys
devpts 0 0 0 - /dev/pts
tmpfs 491M 0 491M 0% /dev/shm
/dev/sda1 283M 28M 240M 11% /boot
none 0 0 0 - /proc/sys/fs/binfmt_misc
vmware-vmblock 0 0 0 - /var/run/vmblock-fuse
/dev/mapper/ggyy-mysql
5.0G 6.3M 4.8G 1% /mysql
/dev/mapper/ggyy-mongo
4.8G 10M 4.6G 1% /mongo
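The block count resize2fs printed can be cross-checked against the new LV size: 328 extents × 16 MiB = 5248 MiB, and each MiB holds 256 blocks of 4 KiB:

```shell
extents=328
pe_mib=16
mib=$((extents * pe_mib))          # 5248 MiB
echo "$((mib * 256)) blocks"       # 1343488, matching the resize2fs output
```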
Shrinking an LV
Caveats:
Shrinking cannot be done online; be sure to umount first.
Make sure the reduced size can still hold all the existing data, and force a filesystem check before shrinking so the filesystem is known to be in a consistent state.
Now let's shrink the mongo volume to 2G.
[root@gy-vm03 ~]# umount /dev/ggyy/mongo
[root@gy-vm03 ~]# df -ah
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 18G 2.9G 14G 18% /
proc 0 0 0 - /proc
sysfs 0 0 0 - /sys
devpts 0 0 0 - /dev/pts
tmpfs 491M 0 491M 0% /dev/shm
/dev/sda1 283M 28M 240M 11% /boot
none 0 0 0 - /proc/sys/fs/binfmt_misc
vmware-vmblock 0 0 0 - /var/run/vmblock-fuse
/dev/mapper/ggyy-mysql
5.0G 6.3M 4.8G 1% /mysql
[root@gy-vm03 ~]# e2fsck -f /dev/ggyy/mongo
e2fsck 1.41.12 (17-May-2010)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/ggyy/mongo: 11/327680 files (0.0% non-contiguous), 55902/1310720 blocks
# First shrink the filesystem to 2G
[root@gy-vm03 ~]# resize2fs /dev/ggyy/mongo 2G
resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/ggyy/mongo to 524288 (4k) blocks.
The filesystem on /dev/ggyy/mongo is now 524288 blocks long.
[root@gy-vm03 ~]# lvresize -L -3G /dev/ggyy/mongo
WARNING: Reducing active logical volume to 2.00 GiB
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce mongo? [y/n]: y
Size of logical volume ggyy/mongo changed from 5.00 GiB (320 extents) to 2.00 GiB (128 extents).
Logical volume mongo successfully resized
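Both figures in the shrink line up with the arithmetic: 2 GiB is 128 of this VG's 16 MiB extents, and the earlier resize2fs target of 524288 4 KiB blocks is exactly 2 GiB:

```shell
echo "$((2 * 1024 / 16)) extents"       # 128
echo "$((2 * 1024 * 1024 / 4)) blocks"  # 524288
```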
[root@gy-vm03 ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
mongo ggyy -wi-a----- 2.00g
mysql ggyy -wi-ao---- 5.12g
[root@gy-vm03 ~]# mount /dev/ggyy/mongo /mongo
[root@gy-vm03 ~]# df -ah
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 18G 2.9G 14G 18% /
proc 0 0 0 - /proc
sysfs 0 0 0 - /sys
devpts 0 0 0 - /dev/pts
tmpfs 491M 0 491M 0% /dev/shm
/dev/sda1 283M 28M 240M 11% /boot
none 0 0 0 - /proc/sys/fs/binfmt_misc
vmware-vmblock 0 0 0 - /var/run/vmblock-fuse
/dev/mapper/ggyy-mysql
5.0G 6.3M 4.8G 1% /mysql
/dev/mapper/ggyy-mongo
1.9G 7.5M 1.8G 1% /mongo