
Adding a Disk to Proxmox

https://pve.proxmox.com/wiki/LVM2

I. Adding the disk as LVM-thin

1. List the disks:

# fdisk -l

Suppose the new disk is /dev/sdb.

2. Create a partition:

# sgdisk -N 1 /dev/sdb

3. Create a PV:

# pvcreate --metadatasize 250k -y -ff /dev/sdb1

4. Create a VG:

# vgcreate pve1 /dev/sdb1

5. Create a thin-pool LV (adjust 100g to the size of your disk):

# lvcreate -L 100g -T -n data pve1
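A quick sanity check that the pool was created (a minimal sketch; a healthy thin pool shows "twi" attributes in the output):

# lvs pve1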

6. Warning:

In some cases LVM does not calculate the metadata pool / chunk size correctly. Check that the metadata pool is big enough. The formula that must hold is:

PoolSize / ChunkSize * 64B = MetadataPoolSize

You can get these values with:

# lvs -a -o name,size,chunk_size
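As a worked example, assuming the common 64k default chunk size: the 100g pool from step 5 needs 100G / 64k * 64B = 100M, so its metadata pool should be at least 100M.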

7. Add it to storage. In the web UI this can be done directly under Datacenter → Storage by adding an LVM-Thin storage with ID: lvm1, Volume group: pve1, Thin pool: data.

Or edit /etc/pve/storage.cfg:

lvmthin: lvm1
         thinpool data
         vgname pve1
         content rootdir,images
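Either way, you can confirm that the new storage is online (assuming the ID lvm1 configured above):

# pvesm status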

Creating a Volume Group

Let’s assume we have an empty disk /dev/sdb, onto which we want to create a volume group named “vmdata”.

Caution: Please note that the following commands will destroy all existing data on /dev/sdb.

First create a partition.

# sgdisk -N 1 /dev/sdb

Create a Physical Volume (PV) without confirmation and with a 250k metadata size.

# pvcreate --metadatasize 250k -y -ff /dev/sdb1

Create a volume group named “vmdata” on /dev/sdb1.

# vgcreate vmdata /dev/sdb1

Creating an extra LV for /var/lib/vz

This can be easily done by creating a new thin LV.

# lvcreate -n <Name> -V <Size[M,G,T]> <VG>/<LVThin_pool>

A real world example:

# lvcreate -n vz -V 10G pve/data

Now a filesystem must be created on the LV.

# mkfs.ext4 /dev/pve/vz

Finally, it has to be mounted.

Warning: be sure that /var/lib/vz is empty. On a default installation it is not.

To make it always accessible, add the following line to /etc/fstab.

# echo '/dev/pve/vz /var/lib/vz ext4 defaults 0 2' >> /etc/fstab
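With the fstab entry in place, the volume can be mounted and checked right away (a minimal sketch):

# mount /var/lib/vz

# df -h /var/lib/vz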

Resizing the thin pool

Resizing the LV and the metadata pool can be achieved with the following command.

# lvresize --size +<size[M,G,T]> --poolmetadatasize +<size[M,G]> <VG>/<LVThin_pool>

Note: When extending the data pool, the metadata pool must also be extended.
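For example, growing a hypothetical vmdata/vmstore pool by 50G together with its metadata (sizes are illustrative only):

# lvresize --size +50G --poolmetadatasize +16M vmdata/vmstore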

Create a LVM-thin pool

A thin pool has to be created on top of a volume group; see the volume group steps above for how to create one.

# lvcreate -L 80G -T -n vmstore vmdata
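The check from the warning above verifies the new pool's chunk and metadata sizes:

# lvs -a -o name,size,chunk_size vmdata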

II. Adding the disk as ZFS

1. Create a storage pool (for the <type> argument, raidz on two disks is roughly RAID1 and on three disks RAID5; raidz1, raidz2 and raidz3 are available for higher parity levels). A concrete example follows the generic command below.

Performance comparison:

Stripe > Mirror

Stripe > RAIDZ1 > RAIDZ2 > RAIDZ3

Data reliability:

Mirror > Stripe

RAIDZ3 > RAIDZ2 > RAIDZ1 > Stripe

zpool create -f -o ashift=12 <pool> <type> <device> log <device-part1> cache <device-part2>
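A concrete sketch with hypothetical device names (a three-disk raidz pool named tank, plus two SSD partitions for log and cache):

zpool create -f -o ashift=12 tank raidz /dev/sdc /dev/sdd /dev/sde log /dev/sdf1 cache /dev/sdf2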

Add log and cache devices to an existing pool:

zpool add -f <pool> log <device-part1> cache <device-part2>

Replace a failed disk:

zpool replace -f <pool> <old device> <new-device>
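For example, swapping a hypothetical failed /dev/sdd for /dev/sdg and then watching the resilver:

zpool replace -f tank /dev/sdd /dev/sdg

zpool status -v tank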

Recovering a lost pool ("no pools available"):

Check the pool status:

# zpool status -v

Delete the stale pool cache file:

# rm -f /etc/zfs/zpool.cache

Import the pool:

# zpool import zfs-v

If it asks you to use -f:

# zpool import -f zfs-v

If a log or cache device was ever configured, it may also ask for -m:

# zpool import -f zfs-v -m

III. Using an SSD as an LVM cache

# sda is the HDD, sdb is the SSD

1. Create the physical volumes:

pvcreate /dev/sda

pvcreate /dev/sdb

2. Create the volume group vg:

vgcreate vg /dev/sda

vgextend vg /dev/sdb

3. Create the logical volumes (data is the storage LV, cache is the cache-data LV, meta is the cache-metadata LV; the cache:meta ratio must not exceed 1000:1, and meta must be at least 8M; see the sizing note after these commands). lvcreate takes the VG name before the optional PV:

lvcreate -L 500G -n data vg /dev/sda

lvcreate -L 220G -n cache vg /dev/sdb

lvcreate -L 220M -n meta vg /dev/sdb
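A quick check of those sizes against the 1000:1 rule: 220G / 1000 is about 225M, while 220G:220M is exactly 1024:1, so a meta LV of 256M would satisfy the limit with headroom (this sizing check is our own reading of the rule above).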

4. Create the cache pool (note that the order of cache and meta must not be reversed):

lvconvert --type cache-pool --poolmetadata vg/meta vg/cache

5. Attach the storage LV to the cache pool (cachemode is either writeback or writethrough; the default is writethrough):

lvconvert --type cache --cachepool vg/cache --cachemode writeback vg/data
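To confirm the cache is attached, list all LVs including hidden ones; data should now report the cache type, with the hidden [cache] pool shown in brackets:

lvs -a vg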

Note: writeback completes a write once it hits the cache, and writes it to data afterwards.

writethrough writes to data at the same time as the cache (and writing to data is slower than to the cache).

Comparing the two, writeback accelerates writes, but if the server loses power while data only exists in the cache layer, that data is lost (reportedly fixed in newer versions; not investigated here).

writethrough writes more slowly, but data is not easily lost.

Setup complete.
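To put the cached LV to use, it still needs a filesystem and a mount point (a minimal sketch; /mnt/data is a hypothetical mount point):

mkfs.ext4 /dev/vg/data

mkdir -p /mnt/data

mount /dev/vg/data /mnt/data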

