@cdmonkey 2017-05-11

LVM: Related Commands

Storage


1. Viewing Information

pvscan

First, check the physical volumes:

  [root@PBSNFS01 ~]# pvscan
  PV /dev/emcpowera   VG vg_nfs lvm2 [2.93 TiB / 0 free]
  PV /dev/emcpowerb1  VG vg_nfs lvm2 [1.04 TiB / 0 free]
  PV /dev/emcpowerc1  VG vg_nfs lvm2 [1.04 TiB / 0 free]
  PV /dev/emcpowerd1  VG vg_nfs lvm2 [1.04 TiB / 0 free]
  Total: 4 [6.06 TiB] / in use: 4 [6.06 TiB] / in no VG: 0 [0 ]
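
For more detail on an individual PV (UUID, extent counts, and so on), pvdisplay or a column-selected pvs can be used. A small illustrative example, with the column list chosen for readability:

  pvdisplay /dev/emcpowera
  pvs -o pv_name,vg_name,pv_size,pv_free,pv_uuid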

Reference: http://www.cnblogs.com/kerrycode/p/4569515.html

The so-called safe removal of a volume group:

  [root@PBSNFS01 ~]# vgchange -a n vg_data
  ...
  0 logical volume(s) in volume group "vg_data" now active

  [root@PBSNFS01 ~]# vgremove vg_data
  ...
  WARNING: 3 physical volumes are currently missing from the system.
  Do you really want to remove volume group "vg_data" containing 1 logical volumes? [y/n]: y
  Logical volume "lv_data" successfully removed
  Volume group "vg_data" not found, is inconsistent or has PVs missing.
  Consider vgreduce --removemissing if metadata is inconsistent.
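
For reference, the bare command sequence of a safe removal looks roughly like this; the VG name vg_data, the mount point, and the device name are examples, not taken from the output above:

  umount /home/app/data           # example mount point; unmount anything on the VG's LVs first
  vgchange -a n vg_data           # deactivate every LV in the VG
  vgremove vg_data                # remove the VG (asks for confirmation while LVs exist)
  pvremove /dev/emcpowere1        # optional: clear the LVM label from the freed PV (example device)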

vgscan

Check the volume groups:

  [root@PBSNFS01 ~]# vgscan
  Reading all physical volumes. This may take a while...
  Found volume group "vg_nfs" using metadata type lvm2

  [root@PBSNFS01 ~]# vgs
  VG     #PV #LV #SN Attr   VSize VFree
  vg_nfs   4   1   0 wz--n- 6.06t    0
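
vgdisplay gives a fuller per-VG view (extent size, PE counts, UUID); for example:

  vgdisplay vg_nfs
  vgs -o vg_name,pv_count,lv_count,vg_size,vg_free vg_nfs   # compact, column-selected form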

lvscan

  [root@PBSNFS01 ~]# lvscan
  ACTIVE '/dev/vg_nfs/lv_nfs' [6.06 TiB] inherit
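
Likewise, lvs and lvdisplay show the logical volumes in tabular and detailed form; for example:

  lvs -o lv_name,vg_name,lv_size,lv_attr vg_nfs
  lvdisplay /dev/vg_nfs/lv_nfs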

2. Extending Capacity

Reference: http://www.cnblogs.com/mchina/p/linux-centos-logical-volume-manager-lvm.html

  [root@PBSNFS01 ~]# vgs
  VG     #PV #LV #SN Attr   VSize VFree
  vg_nfs   4   1   0 wz--n- 6.06t    0

As shown above, volume group vg_nfs has no free space left, so the volume group itself must be extended first. The most common approach is to add a new PV to the existing volume group.
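
A rough sketch of that workflow, using the placeholder device name /dev/sdX1 for the new disk:

  pvcreate /dev/sdX1              # initialize the new disk (or partition) as a PV
  vgextend vg_nfs /dev/sdX1       # add the new PV to the existing volume group
  vgs vg_nfs                      # VFree should now show the added capacity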

A Production Case

The NFS server that stores order-review images is full and needs to be expanded. The situation before the expansion:

  [root@PBSNFS01 ~]# pvscan
  PV /dev/emcpowera   VG vg_nfs lvm2 [2.93 TiB / 0 free]
  PV /dev/emcpowerb1  VG vg_nfs lvm2 [1.04 TiB / 0 free]
  PV /dev/emcpowerc1  VG vg_nfs lvm2 [1.04 TiB / 0 free]
  PV /dev/emcpowerd1  VG vg_nfs lvm2 [1.04 TiB / 0 free]
  Total: 4 [6.06 TiB] / in use: 4 [6.06 TiB] / in no VG: 0 [0 ]

Our colleague 治国 has already allocated a new disk to this server in advance; we now need to rescan so the system can recognize the new device.

  [root@PBSNFS01 ~]# ls /sys/class/fc_host
  # host1 through host4 are listed, so all four hosts need to be rescanned:
  echo "- - -" > /sys/class/scsi_host/host1/scan
  echo "- - -" > /sys/class/scsi_host/host2/scan
  echo "- - -" > /sys/class/scsi_host/host3/scan
  echo "- - -" > /sys/class/scsi_host/host4/scan  # takes about five minutes for the new device to appear; note that this last command hangs.
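
The same four writes can also be expressed as a loop; a minimal sketch, assuming the hosts really are host1 through host4:

  for h in /sys/class/scsi_host/host{1..4}; do
      echo "- - -" > "$h/scan"    # "- - -" rescans every channel, target and LUN on that host
  done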

The situation after the scan (with no other operations performed):

  [root@PBSNFS01 ~]# pvscan
  Couldn't find device with uuid 7EzxJZ-iWPC-3eFF-Cows-LthP-AwiE-lUdeXK.
  Couldn't find device with uuid aUQ2oC-JRUz-l6xl-CwRj-idPs-WFQT-ikvhLG.
  Couldn't find device with uuid yeZuAV-ciGH-w7AL-p3Gn-0akv-gWxM-fyRjKA.
  PV /dev/emcpowere1  VG vg_data lvm2 [1.04 TiB / 0 free]
  PV unknown device   VG vg_data lvm2 [1.04 TiB / 0 free]
  PV unknown device   VG vg_data lvm2 [1.04 TiB / 0 free]
  PV unknown device   VG vg_data lvm2 [1.04 TiB / 0 free]
  PV /dev/emcpowera   VG vg_nfs  lvm2 [2.93 TiB / 0 free]
  PV /dev/emcpowerb1  VG vg_nfs  lvm2 [1.04 TiB / 0 free]
  PV /dev/emcpowerc1  VG vg_nfs  lvm2 [1.04 TiB / 0 free]
  PV /dev/emcpowerd1  VG vg_nfs  lvm2 [1.04 TiB / 0 free]
  Total: 8 [10.23 TiB] / in use: 8 [10.23 TiB] / in no VG: 0 [0 ]

This shows that the newly presented device carries leftover LVM metadata from a previous setup, and that the metadata is incomplete. The PVs in the unknown state must therefore be removed from the stale VG first, and then the stale VG itself can be removed.
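
Before removing anything, it can help to inspect the leftover metadata (commands only; output omitted):

  pvs -o pv_name,vg_name,pv_uuid   # shows which PV UUIDs the stale VG still expects
  vgdisplay vg_data                # prints the stale VG's details, with warnings about the missing PVs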

  # Remove the missing (unknown) PVs from the stale volume group:
  [root@PBSNFS01 ~]# vgreduce --removemissing vg_data
  Couldn't find device with uuid 7EzxJZ-iWPC-3eFF-Cows-LthP-AwiE-lUdeXK.
  Couldn't find device with uuid aUQ2oC-JRUz-l6xl-CwRj-idPs-WFQT-ikvhLG.
  Couldn't find device with uuid yeZuAV-ciGH-w7AL-p3Gn-0akv-gWxM-fyRjKA.
  Wrote out consistent volume group vg_data

Once that completes, only one physical volume remains in the stale volume group, so it can now be removed safely:

  [root@PBSNFS01 ~]# vgchange -a n vg_data
  0 logical volume(s) in volume group "vg_data" now active

Finally, remove the stale volume group:

  [root@PBSNFS01 ~]# vgremove vg_data
  Volume group "vg_data" successfully removed

After the removal, check the physical volume information again:

  [root@PBSNFS01 ~]# pvs
  PV               VG     Fmt  Attr PSize PFree
  /dev/emcpowera   vg_nfs lvm2 a--  2.93t     0
  /dev/emcpowerb1  vg_nfs lvm2 a--  1.04t     0
  /dev/emcpowerc1  vg_nfs lvm2 a--  1.04t     0
  /dev/emcpowerd1  vg_nfs lvm2 a--  1.04t     0
  /dev/emcpowere1         lvm2 ---  1.04t 1.04t
  # No need to run pvcreate here, because emcpowere1 is already a physical volume.
  # Extend the volume group directly:
  [root@PBSNFS01 ~]# vgextend vg_nfs /dev/emcpowere1
  Volume group "vg_nfs" successfully extended
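
Had the new device not already carried an LVM label, it would first have needed to be initialized as a PV, for example:

  pvcreate /dev/emcpowere1        # only needed when the device is not yet a PV; here it already was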

With the volume group extended, extend the logical volume next:

  [root@PBSNFS01 ~]# lvextend -l +100%FREE /dev/mapper/vg_nfs-lv_nfs
  Size of logical volume vg_nfs/lv_nfs changed from 6.06 TiB (1588245 extents) to 7.10 TiB (1861663 extents).
  Logical volume lv_nfs successfully resized
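
On LVM versions that support it, lvextend's -r/--resizefs option can grow the filesystem in the same step, which would fold the separate resize2fs run below into a single command; a sketch:

  lvextend -l +100%FREE -r /dev/mapper/vg_nfs-lv_nfs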

After the logical volume has been extended, the filesystem must be resized. This takes quite a while, so be patient:

  [root@PBSNFS01 ~]# resize2fs /dev/mapper/vg_nfs-lv_nfs
  resize2fs 1.41.12 (17-May-2010)
  Filesystem at /dev/mapper/vg_nfs-lv_nfs is mounted on /home/app/images; on-line resizing required
  old desc_blocks = 388, new_desc_blocks = 455
  Performing an on-line resize of /dev/mapper/vg_nfs-lv_nfs to 1906342912 (4k) blocks.
  The filesystem on /dev/mapper/vg_nfs-lv_nfs is now 1906342912 blocks long.
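
Once the resize finishes, the new capacity can be verified; for example:

  df -h /home/app/images          # the mount point shown in the resize2fs output above
  lvs vg_nfs                      # LSize of lv_nfs should now read about 7.10t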

At this point, the expansion is complete.
