@cdmonkey 2017-02-22T05:55:49.000000Z

LVM

Command Summary


References:

http://blog.mr-zrz.com/%E5%91%BD%E4%BB%A4%E8%A1%8C%E5%88%9B%E5%BB%BAlvm%E5%88%86%E5%8C%BA.html
http://man.linuxde.net/lvcreate
http://ningg.top/use-lvm

The following storage devices are present:

    Disk /dev/emcpowerh: 1099.5 GB, 1099511627776 bytes
    255 heads, 63 sectors/track, 133674 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000

    Disk /dev/emcpoweri: 1099.5 GB, 1099511627776 bytes
    255 heads, 63 sectors/track, 133674 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000

    ...

First, create the physical volumes (PVs):

    [root@t9db02 ~]# pvcreate /dev/emcpowerh
      Physical volume "/dev/emcpowerh" successfully created
    [root@t9db02 ~]# pvcreate /dev/emcpoweri
      Physical volume "/dev/emcpoweri" successfully created
    [root@t9db02 ~]# pvcreate /dev/emcpowerj
      Physical volume "/dev/emcpowerj" successfully created
    [root@t9db02 ~]# pvcreate /dev/emcpowerk
      Physical volume "/dev/emcpowerk" successfully created
    [root@t9db02 ~]# pvcreate /dev/emcpowerl
      Physical volume "/dev/emcpowerl" successfully created
    [root@t9db02 ~]# pvcreate /dev/emcpowerp
      Physical volume "/dev/emcpowerp" successfully created
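
To confirm the new PVs were registered, pvs (or pvdisplay) can be run. A minimal sketch of the check; the output values shown here are illustrative, not captured from this host:

    # Hedged verification step -- PSize/PFree values are illustrative.
    [root@t9db02 ~]# pvs
      PV              VG   Fmt  Attr PSize PFree
      /dev/emcpowerh       lvm2 ---  1.00t 1.00t
      /dev/emcpoweri       lvm2 ---  1.00t 1.00t
      ...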

Next, create the volume group (VG):

    [root@t9db02 ~]# vgcreate T9DB02 /dev/emcpowerh /dev/emcpoweri /dev/emcpowerj /dev/emcpowerk /dev/emcpowerl /dev/emcpowerp
      Found duplicate PV ph5n21iSpK2x0wXc3ANPdn0AldxWQWH2: using /dev/sdr not /dev/sdb
      Volume group "T9DB02" successfully created
    -------------
    [root@t9db02 ~]# vgscan
      Reading all physical volumes.  This may take a while...
      Found duplicate PV ph5n21iSpK2x0wXc3ANPdn0AldxWQWH2: using /dev/sdr not /dev/sdb
      Found volume group "T9DB02" using metadata type lvm2
    [root@t9db02 ~]# vgdisplay T9DB02
      Found duplicate PV ph5n21iSpK2x0wXc3ANPdn0AldxWQWH2: using /dev/sdr not /dev/sdb
      --- Volume group ---
      VG Name               T9DB02
      System ID
      Format                lvm2
      Metadata Areas        6
      Metadata Sequence No  1
      VG Access             read/write
      VG Status             resizable
      MAX LV                0
      Cur LV                0
      Open LV               0
      Max PV                0
      Cur PV                6
      Act PV                6
      VG Size               6.00 TiB
      PE Size               4.00 MiB
      Total PE              1572858
      Alloc PE / Size       0 / 0
      Free PE / Size        1572858 / 6.00 TiB
      VG UUID               pfjXHl-iXON-b4Cu-kxjE-1kKP-5G1s-b9PznP
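
The repeated "Found duplicate PV ... using /dev/sdr not /dev/sdb" warnings appear because EMC PowerPath exposes each LUN both as a /dev/emcpowerX pseudo-device and as its underlying /dev/sdX paths, so LVM sees the same PV UUID on several device nodes. The warnings are harmless here, but they can be silenced by filtering the raw paths out in /etc/lvm/lvm.conf. A hedged sketch, not taken from this host; the exact regexes depend on your device layout:

    # /etc/lvm/lvm.conf (sketch): accept PowerPath pseudo-devices and the
    # local system disk, reject the duplicate raw sd* paths.
    devices {
        filter = [ "a|^/dev/emcpower.*|", "a|^/dev/sda.*|", "r|^/dev/sd.*|" ]
    }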

Create the logical volume (LV):

    [root@t9db02 ~]# lvcreate -l 100%FREE T9DB02 -n lv_dbbackup
      Found duplicate PV ph5n21iSpK2x0wXc3ANPdn0AldxWQWH2: using /dev/sdr not /dev/sdb
      Logical volume "lv_dbbackup" created.
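
Here -l 100%FREE hands every remaining extent in the VG to the new LV. If only part of the space is wanted, lvcreate also takes an explicit size via -L; a sketch with an illustrative size:

    # Allocate a fixed 500 GiB instead of all free space (size is illustrative).
    [root@t9db02 ~]# lvcreate -L 500G -n lv_dbbackup T9DB02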

Format the filesystem:

    [root@t9db02 ~]# mkfs.ext4 /dev/mapper/T9DB02-lv_dbbackup
    mke2fs 1.41.12 (17-May-2010)
    Filesystem label=
    OS type: Linux
    Block size=4096 (log=2)
    Fragment size=4096 (log=2)
    Stride=0 blocks, Stripe width=0 blocks
    402653184 inodes, 1610606592 blocks
    80530329 blocks (5.00%) reserved for the super user
    First data block=0
    Maximum filesystem blocks=4294967296
    49152 block groups
    32768 blocks per group, 32768 fragments per group
    8192 inodes per group
    Superblock backups stored on blocks:
            32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
            4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
            102400000, 214990848, 512000000, 550731776, 644972544

    Writing inode tables: done
    Creating journal (32768 blocks): done
    Writing superblocks and filesystem accounting information: done

    This filesystem will be automatically checked every 39 mounts or
    180 days, whichever comes first.  Use tune2fs -c or -i to override.
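
The tail of the mkfs output warns that a forced fsck runs every 39 mounts or 180 days; on a multi-terabyte backup volume that check can take a long time at boot. It can be disabled with the tune2fs options the message itself points to. A sketch:

    # Optional: turn off mount-count and time-based forced checks.
    [root@t9db02 ~]# tune2fs -c 0 -i 0 /dev/mapper/T9DB02-lv_dbbackup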

Mount it:

    [root@t9db02 ~]# mount -t ext4 /dev/mapper/T9DB02-lv_dbbackup /home/oracle/dbbackup/
    [root@t9db02 ~]# df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda2             518G   19G  473G   4% /
    tmpfs                  24G  202M   24G   1% /dev/shm
    /dev/sda1             190M   41M  140M  23% /boot
    /dev/mapper/T9DB02-lv_dbbackup
                          6.0T   56M  5.7T   1% /home/oracle/dbbackup
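
A mount issued this way does not survive a reboot. For a persistent mount, an /etc/fstab entry along these lines would be needed (a sketch; verify the device path and options on your own host):

    # /etc/fstab (sketch)
    /dev/mapper/T9DB02-lv_dbbackup  /home/oracle/dbbackup  ext4  defaults  0 0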

Troubleshooting:

    [root@HISDB ~]# vgscan
      Reading all physical volumes.  This may take a while...
      Found duplicate PV 08JUGeuECScNSjsRANkBXhc08QR87drU: using /dev/sdal not /dev/sdaw
      Found duplicate PV 0UU5sh3GACDzkxJobWZhvMcvjUV14x46: using /dev/sdz not /dev/sdak
      Found duplicate PV 08JUGeuECScNSjsRANkBXhc08QR87drU: using /dev/sdaa not /dev/sdal
      Found volume group "VolGroup" using metadata type lvm2
      Found volume group "HISDB" using metadata type lvm2   # The VG is still there.

However, the LV is in an inactive state:

    [root@HISDB ~]# lvscan
      Found duplicate PV 08JUGeuECScNSjsRANkBXhc08QR87drU: using /dev/sdal not /dev/sdaw
      Found duplicate PV 0UU5sh3GACDzkxJobWZhvMcvjUV14x46: using /dev/sdz not /dev/sdak
      Found duplicate PV 08JUGeuECScNSjsRANkBXhc08QR87drU: using /dev/sdaa not /dev/sdal
      ACTIVE            '/dev/VolGroup/LogVol01' [987.25 GiB] inherit
      ACTIVE            '/dev/VolGroup/LogVol00' [128.00 GiB] inherit
      inactive          '/dev/HISDB/VolGroup-LogVol02' [3.00 TiB] inherit

Activate it:

    [root@HISDB ~]# vgchange -ay HISDB
      Found duplicate PV 08JUGeuECScNSjsRANkBXhc08QR87drU: using /dev/sdal not /dev/sdaw
      Found duplicate PV 0UU5sh3GACDzkxJobWZhvMcvjUV14x46: using /dev/sdz not /dev/sdak
      Found duplicate PV 08JUGeuECScNSjsRANkBXhc08QR87drU: using /dev/sdaa not /dev/sdal
      1 logical volume(s) in volume group "HISDB" now active
    [root@HISDB ~]# lvscan
      Found duplicate PV 08JUGeuECScNSjsRANkBXhc08QR87drU: using /dev/sdal not /dev/sdaw
      Found duplicate PV 0UU5sh3GACDzkxJobWZhvMcvjUV14x46: using /dev/sdz not /dev/sdak
      Found duplicate PV 08JUGeuECScNSjsRANkBXhc08QR87drU: using /dev/sdaa not /dev/sdal
      ACTIVE            '/dev/VolGroup/LogVol01' [987.25 GiB] inherit
      ACTIVE            '/dev/VolGroup/LogVol00' [128.00 GiB] inherit
      ACTIVE            '/dev/HISDB/VolGroup-LogVol02' [3.00 TiB] inherit

At this point the volume can be mounted normally.

Logical Volume Fails to Mount

The symptom looks like this:

    [root@T9DB02 ~]# mount /dev/vg_dbbackup/lv_dbbackup /home/oracle/dbbackup/
    mount: /dev/vg_dbbackup/lv_dbbackup already mounted or /home/oracle/dbbackup/ busy

Mounting it on any local directory produces the same error. First, try to remove the corresponding device-mapper entry:

    [root@T9DB02 ~]# dmsetup remove vg_dbbackup-lv_dbbackup
    device-mapper: remove ioctl failed: Device or resource busy
    Command failed   # Removal fails: the device is busy.
    [root@T9DB02 ~]# fuser -m /dev/mapper/vg_dbbackup-lv_dbbackup
    /dev/mapper/vg_dbbackup-lv_dbbackup:   612c
    [root@T9DB02 ~]# ps -ef | grep 612
    oracle     612   611  0 14:36 pts/0    00:00:00 -bash   # Need to kill this.
    root      5612  4435  0 Jan18 ?        00:00:00 [aio/20]
    root     12358  4087  0 16:59 pts/2    00:00:00 grep --color=auto 612
    grid     25612     1  0 Jan18 ?        00:00:00 asm_smon_+ASM2
    grid     26128     1  0 Jan18 ?        00:00:00 /u01/app/11.2.0/grid/opmn/bin/ons -d
    grid     26129 26128  0 Jan18 ?        00:01:17 /u01/app/11.2.0/grid/opmn/bin/ons -d
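
Instead of killing the PID by hand, fuser can send the signal itself: -k kills every process accessing the device (SIGKILL by default). A hedged sketch; use it carefully, since it terminates those processes outright:

    # Kill all processes holding the device; -m treats the argument as a
    # mounted filesystem / block device.
    [root@T9DB02 ~]# fuser -km /dev/mapper/vg_dbbackup-lv_dbbackup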

After killing the process that was holding the logical volume, retry the removal:

    [root@T9DB02 ~]# dmsetup remove vg_dbbackup-lv_dbbackup
    [root@T9DB02 ~]# dmsetup ls
    VolGroup00-lv_swap      (253, 1)
    VolGroup00-lv_root      (253, 0)
    [root@T9DB02 ~]# ls /dev/mapper/
    control  VolGroup00-lv_root  VolGroup00-lv_swap
    # The logical volume device is gone, confirming it was removed cleanly.

Rescanning the logical volumes shows that the LV is now inactive:

    [root@T9DB02 ~]# lvs
      LV          VG          Attr   LSize  Origin Snap%  Move Log Copy%  Convert
      lv_root     VolGroup00  -wi-ao  1.07T
      lv_swap     VolGroup00  -wi-ao 16.00G
      lv_dbbackup vg_dbbackup -wi---  3.39T
    [root@T9DB02 ~]# lvscan
      inactive          '/dev/vg_dbbackup/lv_dbbackup' [3.39 TB] inherit
      ACTIVE            '/dev/VolGroup00/lv_root' [1.07 TB] inherit
      ACTIVE            '/dev/VolGroup00/lv_swap' [16.00 GB] inherit

Reactivate the logical volume:

    [root@T9DB02 ~]# vgchange -ay vg_dbbackup
      1 logical volume(s) in volume group "vg_dbbackup" now active
    ---------------
    [root@T9DB02 ~]# lvscan
      ACTIVE            '/dev/vg_dbbackup/lv_dbbackup' [3.39 TB] inherit
      ACTIVE            '/dev/VolGroup00/lv_root' [1.07 TB] inherit
      ACTIVE            '/dev/VolGroup00/lv_swap' [16.00 GB] inherit

After activation, remount it:

    [root@T9DB02 ~]# mount /dev/mapper/vg_dbbackup-lv_dbbackup /beifen/
    [root@T9DB02 ~]# df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/mapper/VolGroup00-lv_root
                          1.1T  516G  495G  52% /
    /dev/sda1             190M   14M  167M   8% /boot
    tmpfs                 253G  634M  252G   1% /dev/shm
    /dev/mapper/vg_dbbackup-lv_dbbackup
                          3.4T  1.4T  1.8T  44% /beifen

Check the contents after mounting:

    [root@T9DB02 beifen]# ls
    20160426L0.log  archivebak  databak  lost+found  mount-2016-04-28.txt
    # No data was lost.

Extending Capacity

A logical volume on a virtual machine was originally 60 GB. After the underlying disk was grown to 100 GB, the logical volume needs to be extended accordingly.

    [root@PBSSXFJRDB ~]# df -h
    Filesystem            Size  Used Avail Use% Mounted on
    ...
    /dev/mapper/vg_home-lv_home
                           59G  390M   56G   1% /home

Create a new partition on the grown disk with fdisk (the interactive steps are omitted here), then write the partition table:

    Command (m for help): w
    The partition table has been altered!

    Calling ioctl() to re-read partition table.

    WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
    The kernel still uses the old table. The new table will be used at
    the next reboot or after you run partprobe(8) or kpartx(8)

Reference: http://haoyou168.blog.51cto.com/284295/325865

    [root@PBSSXFJRDB ~]# yum install parted
    [root@PBSSXFJRDB ~]# partprobe /dev/sdc
    Warning: WARNING: the kernel failed to re-read the partition table on /dev/sdc (Device or resource busy). As a result, it may not reflect all of your changes until after reboot.

    [root@PBSSXFJRDB ~]# partx /dev/sdc
    # 1:      2048-125829119 (125827072 sectors, 64423 MB)
    # 2: 125829120-209712509 ( 83883390 sectors, 42948 MB)
    # 3:         0-       -1 (        0 sectors,     0 MB)
    # 4:         0-       -1 (        0 sectors,     0 MB)
    [root@PBSSXFJRDB ~]# ls -al /dev/sd*
    brw-rw---- 1 root disk 8,  0 May 24 11:57 /dev/sda
    brw-rw---- 1 root disk 8,  1 May 23 18:16 /dev/sda1
    brw-rw---- 1 root disk 8,  2 May 23 18:16 /dev/sda2
    brw-rw---- 1 root disk 8, 16 May 24 11:57 /dev/sdb
    brw-rw---- 1 root disk 8, 17 May 23 18:16 /dev/sdb1
    brw-rw---- 1 root disk 8, 32 May 24 11:58 /dev/sdc
    brw-rw---- 1 root disk 8, 33 May 23 18:16 /dev/sdc1
    brw-rw---- 1 root disk 8, 34 May 24 12:00 /dev/sdc2   # The newly created partition.
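
Had the new node not appeared, partx can also register individual partitions with the kernel even when a full table re-read is refused. A hedged sketch (util-linux partx; -a adds partitions the kernel does not know about, -v is verbose):

    # Ask the kernel to add any partitions it does not yet know about.
    [root@PBSSXFJRDB ~]# partx -v -a /dev/sdc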

Formatting steps are omitted here.

    # Create a PV on the new partition:
    [root@PBSSXFJRDB ~]# pvcreate /dev/sdc2
      Physical volume "/dev/sdc2" successfully created
    # Add the new PV to the volume group:
    [root@PBSSXFJRDB ~]# vgextend vg_home /dev/sdc2
      Volume group "vg_home" successfully extended
    # Extend the logical volume over all free extents:
    [root@PBSSXFJRDB ~]# lvextend -l +100%FREE /dev/mapper/vg_home-lv_home
      Size of logical volume vg_home/lv_home changed from 60.00 GiB (15359 extents) to 99.99 GiB (25598 extents).
      Logical volume lv_home successfully resized
    # Finally, grow the filesystem online:
    [root@PBSSXFJRDB ~]# resize2fs /dev/vg_home/lv_home
    resize2fs 1.41.12 (17-May-2010)
    Filesystem at /dev/vg_home/lv_home is mounted on /home; on-line resizing required
    old desc_blocks = 4, new_desc_blocks = 7
    Performing an on-line resize of /dev/vg_home/lv_home to 26212352 (4k) blocks.
    The filesystem on /dev/vg_home/lv_home is now 26212352 blocks long.
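
On newer lvm2 releases the last two steps can be combined: lvextend -r (--resizefs) calls fsadm to grow the filesystem immediately after extending the LV. A sketch, assuming a recent enough lvm2:

    # Extend the LV and resize the ext4 filesystem in one step.
    [root@PBSSXFJRDB ~]# lvextend -r -l +100%FREE /dev/vg_home/lv_home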

The extension is complete:

    [root@PBSSXFJRDB ~]# df -h
    Filesystem            Size  Used Avail Use% Mounted on
    ...
    /dev/mapper/vg_home-lv_home
                           99G  398M   93G   1% /home