OpenStack with GlusterFS storage

1. Machine information

2. Preparation

2.1. Stop the NetworkManager service

2.2. Upload the repo files

2.3. Synchronize time on every machine

2.4. Disable SELinux and configure the firewall on every machine

2.5. Configure the hosts file on every machine

3. Deploy the OpenStack + GlusterFS environment

3.1. Install the GlusterFS components on every machine

3.2. Install packstack on YUN21 and deploy OpenStack

3.3. Create and mount the GlusterFS volumes

3.4. Log in to the dashboard, upload an image, create the networks, adjust quotas

3.5. Create and mount the glance and cinder volumes

4. System tuning

1. Machine information

Hostname  NIC    inet addr

YUN21     eth1   10.0.0.21
          eth2   192.168.0.121
          eth4   20.0.0.21

YUN22     eth2   192.168.0.122
          eth3   10.0.0.22
          eth7   20.0.0.22

YUN23     eth2   192.168.0.123
          eth3   10.0.0.23
          eth7   20.0.0.23

YUN24     eth0   192.168.0.124
          eth1   10.0.0.24
          eth6   20.0.0.24

All four machines run the desktop edition of CentOS 6.5.

(The desktop edition was chosen because different physical machines need different packages to recognize their hardware. For example, on a Lenovo machine the desktop edition recognized the Intel 10 GbE NIC without any extra drivers, which was not the case with a minimal install.)

2. Preparation

2.1. Stop the NetworkManager service

If NetworkManager is left running, the machines cannot even ping one another. On a minimal CentOS install this service is not present and can be ignored.

Run the following on every machine:

# service NetworkManager stop

# chkconfig NetworkManager off

2.2. Upload the repo files

On every machine:

# yum makecache

2.3. Synchronize time on every machine

# yum install -y ntp ntpdate ntp-doc

Configure NTP:

# vi /etc/ntp.conf

Comment out the following lines to disable them:

restrict default kod nomodify notrap nopeer noquery

restrict -6 default kod nomodify notrap nopeer noquery

restrict 127.0.0.1

restrict -6 ::1

server 0.centos.pool.ntp.org iburst

server 1.centos.pool.ntp.org iburst

server 2.centos.pool.ntp.org iburst

server 3.centos.pool.ntp.org iburst

Add one line:

server  192.168.0.124

# service ntpd start

# chkconfig ntpd on

# ntpdate -u 192.168.0.124

(The machine at 192.168.0.124 is the pre-configured NTP server.)

2.4. Disable SELinux and configure the firewall on every machine

# setenforce 0

# vi /etc/sysconfig/selinux

SELINUX=disabled

# vi /etc/sysconfig/iptables

Below the ssh rule, add:

-A INPUT -p tcp -m multiport --dports 24007:24047 -j ACCEPT

-A INPUT -p tcp -m tcp --dport 111 -j ACCEPT

-A INPUT -p udp -m udp --dport 111 -j ACCEPT

-A INPUT -p tcp -m multiport --dports 38465:38485 -j ACCEPT

-A INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT

-A INPUT -p tcp -m tcp --dport 16509 -j ACCEPT

# service iptables restart
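The rule set above follows a simple pattern (multiport for port ranges, a plain --dport for single ports), so it can be generated from a port table. A minimal sketch; the output file `gluster-rules.txt` and the loop itself are illustrative helpers, not part of the original setup:

```shell
#!/bin/sh
# Generate the GlusterFS/libvirt ACCEPT rules from a proto:port table.
OUT=gluster-rules.txt          # illustrative target; paste into /etc/sysconfig/iptables
: > "$OUT"
for entry in tcp:24007:24047 tcp:111 udp:111 tcp:38465:38485 tcp:49152:49216 tcp:16509; do
    proto=${entry%%:*}         # protocol before the first colon
    ports=${entry#*:}          # everything after it: a port or a range
    case $ports in
        *:*) # a range -> use the multiport match
            echo "-A INPUT -p $proto -m multiport --dports $ports -j ACCEPT" >> "$OUT" ;;
        *)   # a single port -> a plain --dport
            echo "-A INPUT -p $proto -m $proto --dport $ports -j ACCEPT" >> "$OUT" ;;
    esac
done
cat "$OUT"
```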

2.5. Configure the hosts file on every machine

# vi /etc/hosts

Add:

192.168.0.121      YUN21

192.168.0.122      YUN22

192.168.0.123      YUN23

192.168.0.124      YUN24
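Since the host number and the last IP octet track each other, the four entries can be generated in a loop. A sketch that writes to a local `hosts.snippet` for inspection; on the real machines the output would be appended to /etc/hosts instead:

```shell
#!/bin/sh
# Build the host entries once; the same block is appended on every node.
OUT=hosts.snippet              # illustrative target; use /etc/hosts on the real hosts
: > "$OUT"
for n in 21 22 23 24; do
    # YUNnn lives at 192.168.0.1nn on the management network
    printf '192.168.0.1%s\tYUN%s\n' "$n" "$n" >> "$OUT"
done
cat "$OUT"
```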

3. Deploy the OpenStack + GlusterFS environment

3.1. Install the GlusterFS components on every machine

# yum install -y glusterfs-server

3.2. Install packstack on YUN21 and deploy OpenStack

First update every machine:

# yum update -y && reboot

During the update, two packages whose names begin with "google" could not be found in the CentOS repositories (a difference from a minimal install); fetch them from the installation image source with wget, install them locally, and then rerun the command.

# yum install -y openstack-packstack

# packstack --gen-answer-file answers.txt

# vi answers.txt

Set the admin password and disable the demo provisioning:

CONFIG_KEYSTONE_ADMIN_PW=openstack

CONFIG_PROVISION_DEMO=n

Adjust the network interfaces:

CONFIG_NOVA_COMPUTE_PRIVIF=eth1

CONFIG_NOVA_NETWORK_PUBIF=eth0

CONFIG_NOVA_NETWORK_PRIVIF=eth1

to

CONFIG_NOVA_COMPUTE_PRIVIF=eth1

CONFIG_NOVA_NETWORK_PUBIF=eth2

CONFIG_NOVA_NETWORK_PRIVIF=eth1

(Which NICs to use depends on the physical environment and must be adjusted machine by machine. The two parameters containing "PRIVIF" refer to the internal network; the one containing "PUBIF" refers to the external network, i.e. the network from which floating IPs are allocated.)

Add the compute nodes:

CONFIG_COMPUTE_HOSTS=192.168.0.121

to

CONFIG_COMPUTE_HOSTS=192.168.0.121,192.168.0.122,192.168.0.123,192.168.0.124
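The same answer-file edits can be applied non-interactively with sed instead of vi. A sketch against a miniature stand-in for answers.txt (the stand-in contents are illustrative; the real generated file is much larger):

```shell
#!/bin/sh
# Miniature stand-in for the packstack-generated answer file.
cat > answers.txt <<'EOF'
CONFIG_KEYSTONE_ADMIN_PW=PW_PLACEHOLDER
CONFIG_PROVISION_DEMO=y
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_COMPUTE_HOSTS=192.168.0.121
EOF

# Apply the same changes the guide makes by hand in vi.
sed -i \
    -e 's/^CONFIG_KEYSTONE_ADMIN_PW=.*/CONFIG_KEYSTONE_ADMIN_PW=openstack/' \
    -e 's/^CONFIG_PROVISION_DEMO=.*/CONFIG_PROVISION_DEMO=n/' \
    -e 's/^CONFIG_NOVA_NETWORK_PUBIF=.*/CONFIG_NOVA_NETWORK_PUBIF=eth2/' \
    -e 's/^CONFIG_COMPUTE_HOSTS=.*/CONFIG_COMPUTE_HOSTS=192.168.0.121,192.168.0.122,192.168.0.123,192.168.0.124/' \
    answers.txt
cat answers.txt
```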

# packstack --answer-file answers.txt

Configure the external bridge:

[root@YUN21 ~]# cp /etc/sysconfig/network-scripts/ifcfg-eth2 ifcfg-eth2.bak

[root@YUN21 ~]# cp /etc/sysconfig/network-scripts/ifcfg-eth2 /etc/sysconfig/network-scripts/ifcfg-br-ex

[root@YUN21 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth2

DEVICE=eth2

HWADDR=xx:xx:xx:xx:xx:xx

TYPE=OVSPort

OVS_BRIDGE=br-ex

DEVICETYPE=ovs

ONBOOT=yes

[root@YUN21 ~]# vi /etc/sysconfig/network-scripts/ifcfg-br-ex

DEVICE=br-ex

TYPE=OVSBridge

DEVICETYPE=ovs

ONBOOT=yes

BOOTPROTO=static

IPADDR=192.168.0.121

NETMASK=255.255.255.128

GATEWAY=10.231.29.1

[root@YUN21 ~]# vi /etc/neutron/plugin.ini

Add:

network_vlan_ranges = physnet1

bridge_mappings = physnet1:br-ex

[root@YUN21 ~]# service network restart

[root@YUN21 ~]# ifconfig

br-ex     Link encap:Ethernet  HWaddr xx:xx:xx:xx:xx:xx

inet addr:192.168.0.121  Bcast:192.168.0.127  Mask:255.255.255.0

inet6 addr: fe80::49b:36ff:fed3:bb5e/64 Scope:Link

UP BROADCAST RUNNING  MTU:1500  Metric:1

RX packets:1407 errors:0 dropped:0 overruns:0 frame:0

TX packets:856 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:0

RX bytes:309542 (302.2 KiB)  TX bytes:171147 (167.1 KiB)

eth1      Link encap:Ethernet  HWaddr xx:xx:xx:xx:xx:xx

inet addr:10.0.0.21  Bcast:10.0.0.255  Mask:255.255.255.0

inet6 addr: fe80::6e92:bfff:fe0b:de45/64 Scope:Link

UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

RX packets:142 errors:0 dropped:0 overruns:0 frame:0

TX packets:14 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:10730 (10.4 KiB)  TX bytes:1128 (1.1 KiB)

Memory:dfa20000-dfa3ffff

eth2      Link encap:Ethernet  HWaddr xx:xx:xx:xx:xx:xx

inet6 addr: fe80::6e92:bfff:fe0b:de44/64 Scope:Link

UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

RX packets:176062 errors:0 dropped:0 overruns:0 frame:0

TX packets:80147 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:231167565 (220.4 MiB)  TX bytes:9536425 (9.0 MiB)

Memory:dfa00000-dfa1ffff

eth4      Link encap:Ethernet  HWaddr xx:xx:xx:xx:xx:xx

inet addr:20.0.0.21  Bcast:20.0.0.255  Mask:255.255.255.0

inet6 addr: fe80::7a24:afff:fe85:3a32/64 Scope:Link

UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

RX packets:0 errors:0 dropped:0 overruns:0 frame:0

TX packets:9 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:0 (0.0 b)  TX bytes:670 (670.0 b)

Interrupt:68 Memory:fa000000-fa7fffff

Proceed only once the br-ex interface has taken over the IP address previously assigned to eth2, as in the output above.
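Deriving ifcfg-br-ex from ifcfg-eth2 (moving the IP to the bridge, demoting eth2 to an OVS port) can be scripted. A sketch operating on local copies with illustrative values; on YUN21 the files live under /etc/sysconfig/network-scripts/:

```shell
#!/bin/sh
# Local stand-in for the original eth2 config (illustrative values).
cat > ifcfg-eth2 <<'EOF'
DEVICE=eth2
HWADDR=xx:xx:xx:xx:xx:xx
BOOTPROTO=static
IPADDR=192.168.0.121
NETMASK=255.255.255.128
ONBOOT=yes
EOF

# The IP/netmask move to the bridge; eth2 becomes a plain OVS port.
IP=$(sed -n 's/^IPADDR=//p' ifcfg-eth2)
MASK=$(sed -n 's/^NETMASK=//p' ifcfg-eth2)
MAC=$(sed -n 's/^HWADDR=//p' ifcfg-eth2)

cat > ifcfg-br-ex <<EOF
DEVICE=br-ex
TYPE=OVSBridge
DEVICETYPE=ovs
ONBOOT=yes
BOOTPROTO=static
IPADDR=$IP
NETMASK=$MASK
EOF

cat > ifcfg-eth2 <<EOF
DEVICE=eth2
HWADDR=$MAC
TYPE=OVSPort
OVS_BRIDGE=br-ex
DEVICETYPE=ovs
ONBOOT=yes
EOF
```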

3.3. Create and mount the GlusterFS volumes

[root@YUN21 ~]# service glusterd status

glusterd (pid  3124) is running...

On any one machine, add the other machines to the trusted storage pool:

[root@YUN21 ~]# gluster peer probe 20.0.0.22

peer probe: success.

[root@YUN21 ~]# gluster peer probe 20.0.0.23

peer probe: success.

[root@YUN21 ~]# gluster peer probe 20.0.0.24

peer probe: success.

[root@YUN21 ~]# gluster peer status

Number of Peers: 3

Hostname: 20.0.0.22

Uuid: 434fc5dd-22c9-49c8-9e42-4962279cdca6

State: Peer in Cluster (Connected)

Hostname: 20.0.0.23

Uuid: a3c6770a-0b3b-4dc5-ad94-37e8c06da3b5

State: Peer in Cluster (Connected)

Hostname: 20.0.0.24

Uuid: 13905ea7-0c32-4be0-9708-b6788033070c

State: Peer in Cluster (Connected)

On every machine, create the second-level directories that will serve as bricks:

# mkdir /gv0/brick

# mkdir /gv1/brick

# mkdir /gv2/brick

Create the nova volume:

[root@YUN21 ~]# gluster volume create nova replica 2 20.0.0.21:/gv0/brick/ 20.0.0.22:/gv0/brick/ 20.0.0.23:/gv0/brick/ 20.0.0.24:/gv0/brick/

volume create: nova: success: please start the volume to access data

[root@YUN21 ~]# gluster volume start nova

volume start: nova: success

[root@YUN21 ~]# gluster volume status nova

Status of volume: nova

Gluster process      Port Online Pid

------------------------------------------------------------------------------

Brick 20.0.0.21:/gv0/brick    49152 Y 7672

Brick 20.0.0.22:/gv0/brick    49152 Y 30221

Brick 20.0.0.23:/gv0/brick    49152 Y 30432

Brick 20.0.0.24:/gv0/brick    49152 Y 22918

NFS Server on localhost     2049 Y 7687

Self-heal Daemon on localhost    N/A Y 7693

NFS Server on 20.0.0.24     2049 Y 22933

Self-heal Daemon on 20.0.0.24    N/A Y 22938

NFS Server on 20.0.0.22     2049 Y 30236

Self-heal Daemon on 20.0.0.22    N/A Y 30242

NFS Server on 20.0.0.23     2049 Y 30447

Self-heal Daemon on 20.0.0.23    N/A Y 30453

Task Status of Volume nova

------------------------------------------------------------------------------

There are no active volume tasks
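Note the brick order: with replica 2, GlusterFS pairs adjacent bricks in the list, so 20.0.0.21/20.0.0.22 mirror each other and 20.0.0.23/20.0.0.24 mirror each other, giving a 2 x 2 distributed-replicated volume. A sketch that only assembles and prints the create command (actually running it requires the peer cluster above; the output file name is an illustrative helper):

```shell
#!/bin/sh
# Build the brick argument list for the distributed-replicated volume.
# Under replica 2, adjacent bricks pair up: (21,22) and (23,24).
VOL=nova
BRICK_DIR=/gv0/brick
bricks=""
for ip in 20.0.0.21 20.0.0.22 20.0.0.23 20.0.0.24; do
    bricks="$bricks $ip:$BRICK_DIR"
done
cmd="gluster volume create $VOL replica 2$bricks"
echo "$cmd" > create-nova.cmd
cat create-nova.cmd
```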

Configure automatic mounting on every machine:

[root@YUN21 ~]# echo "20.0.0.21:/nova /var/lib/nova/instances/ glusterfs defaults,_netdev 0 0" >> /etc/fstab

[root@YUN21 ~]# mount -a

[root@YUN21 ~]# mount

/dev/mapper/vg_YUN21-lv_root on / type ext4 (rw)

proc on /proc type proc (rw)

sysfs on /sys type sysfs (rw)

devpts on /dev/pts type devpts (rw,gid=5,mode=620)

tmpfs on /dev/shm type tmpfs (rw)

/dev/sda1 on /boot type ext4 (rw)

/dev/mapper/vg_YUN21-lv_gv0 on /gv0 type xfs (rw,nobarrier)

/dev/mapper/vg_YUN21-lv_gv1 on /gv1 type xfs (rw,nobarrier)

/dev/mapper/vg_YUN21-lv_gv2 on /gv2 type xfs (rw,nobarrier)

none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

/srv/loopback-device/swiftloopback on /srv/node/swiftloopback type ext4 (rw,noatime,nodiratime,loop=/dev/loop1,nobarrier,user_xattr)

gvfs-fuse-daemon on /root/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev)

20.0.0.21:/nova on /var/lib/nova/instances type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)

[root@YUN22 ~]# echo "20.0.0.22:/nova /var/lib/nova/instances/ glusterfs defaults,_netdev 0 0" >> /etc/fstab

[root@YUN22 ~]# mount -a && mount

/dev/mapper/vg_YUN13-lv_root on / type ext4 (rw)

proc on /proc type proc (rw)

sysfs on /sys type sysfs (rw)

devpts on /dev/pts type devpts (rw,gid=5,mode=620)

tmpfs on /dev/shm type tmpfs (rw)

/dev/sda1 on /boot type ext4 (rw)

/dev/mapper/vg_YUN13-lv_gv0 on /gv0 type xfs (rw)

/dev/mapper/vg_YUN13-lv_gv1 on /gv1 type xfs (rw)

/dev/mapper/vg_YUN13-lv_gv2 on /gv2 type xfs (rw)

/dev/mapper/vg_YUN13-lv_gv3 on /gv3 type xfs (rw)

none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

20.0.0.22:/nova on /var/lib/nova/instances type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)

[root@YUN23 ~]# echo "20.0.0.23:/nova /var/lib/nova/instances/ glusterfs defaults,_netdev 0 0" >> /etc/fstab

[root@YUN23 ~]# mount -a && mount

/dev/mapper/vg_YUN23-lv_root on / type ext4 (rw)

proc on /proc type proc (rw)

sysfs on /sys type sysfs (rw)

devpts on /dev/pts type devpts (rw,gid=5,mode=620)

tmpfs on /dev/shm type tmpfs (rw)

/dev/sda1 on /boot type ext4 (rw)

/dev/mapper/vg_YUN23-lv_gv0 on /gv0 type xfs (rw)

/dev/mapper/vg_YUN23-lv_gv1 on /gv1 type xfs (rw)

/dev/mapper/vg_YUN23-lv_gv2 on /gv2 type xfs (rw)

/dev/mapper/vg_YUN23-lv_gv3 on /gv3 type xfs (rw)

none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

20.0.0.23:/nova on /var/lib/nova/instances type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)

[root@YUN24 ~]# echo "20.0.0.24:/nova /var/lib/nova/instances/ glusterfs defaults,_netdev 0 0" >> /etc/fstab

[root@YUN24 ~]# mount -a && mount

/dev/mapper/vg_YUN17-lv_root on / type ext4 (rw)

proc on /proc type proc (rw)

sysfs on /sys type sysfs (rw)

devpts on /dev/pts type devpts (rw,gid=5,mode=620)

tmpfs on /dev/shm type tmpfs (rw)

/dev/sda1 on /boot type ext4 (rw)

/dev/mapper/vg_YUN17-lv_gv0 on /gv0 type xfs (rw)

/dev/mapper/vg_YUN17-lv_gv1 on /gv1 type xfs (rw)

/dev/mapper/vg_YUN17-lv_gv2 on /gv2 type xfs (rw)

none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

20.0.0.24:/nova on /var/lib/nova/instances type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
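Each node mounts the volume from its own 20.0.0.x address, and that address is derivable from the YUNnn hostname. A sketch that writes the node's fstab line to a local `fstab.snippet`; the hard-coded hostname stands in for `$(hostname -s)` on a real node, where the output would be appended to /etc/fstab:

```shell
#!/bin/sh
# Derive this node's GlusterFS mount source from its YUNnn hostname.
HOST=YUN23                     # stand-in; use $(hostname -s) on a real node
NUM=${HOST#YUN}                # strip the YUN prefix -> "23"
echo "20.0.0.$NUM:/nova /var/lib/nova/instances/ glusterfs defaults,_netdev 0 0" > fstab.snippet
cat fstab.snippet
```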

On every machine:

# vi .bash_profile

At the end, add:

export PS1='[\u@\h \W]\$'

mount -a

The mount -a here ensures that the GlusterFS volumes are remounted automatically after a system reboot.

Check and fix the directory ownership:

[root@YUN21 ~]# ll -d /var/lib/nova/instances/

drwxr-xr-x 3 root root 46 Dec 25 17:18 /var/lib/nova/instances/

[root@YUN21 ~]# chown -R nova:nova /var/lib/nova/instances/

[root@YUN21 ~]# ll -d /var/lib/nova/instances/

drwxr-xr-x 3 nova nova 46 Dec 25 17:18 /var/lib/nova/instances/

Make the same change on the other three machines in turn.

Restart the service on each machine:

# service openstack-nova-compute restart

Stopping openstack-nova-compute:                           [  OK  ]

Starting openstack-nova-compute:                           [  OK  ]

3.4. Log in to the dashboard, upload an image, create the networks, adjust quotas

Instances can now be created successfully.

3.5. Create and mount the glance and cinder volumes

[root@YUN21 ~]# gluster volume create glance replica 2 20.0.0.21:/gv1/brick 20.0.0.22:/gv1/brick 20.0.0.23:/gv1/brick 20.0.0.24:/gv1/brick

volume create: glance: success: please start the volume to access data

[root@YUN21 ~]# gluster volume create cinder replica 2 20.0.0.21:/gv2/brick 20.0.0.22:/gv2/brick 20.0.0.23:/gv2/brick 20.0.0.24:/gv2/brick

volume create: cinder: success: please start the volume to access data

[root@YUN21 ~]# gluster volume start glance

volume start: glance: success

[root@YUN21 ~]# gluster volume start cinder

volume start: cinder: success

[root@YUN21 ~]# gluster volume status glance

Status of volume: glance

Gluster process      Port Online Pid

------------------------------------------------------------------------------

Brick 20.0.0.21:/gv1/brick    49153 Y 18269

Brick 20.0.0.22:/gv1/brick    49153 Y 39924

Brick 20.0.0.23:/gv1/brick    49153 Y 40300

Brick 20.0.0.24:/gv1/brick    49153 Y 30920

NFS Server on localhost     2049 Y 18374

Self-heal Daemon on localhost    N/A Y 18389

NFS Server on 20.0.0.24     2049 Y 31005

Self-heal Daemon on 20.0.0.24    N/A Y 31015

NFS Server on 20.0.0.22     2049 Y 40010

Self-heal Daemon on 20.0.0.22    N/A Y 40020

NFS Server on 20.0.0.23     2049 Y 40385

Self-heal Daemon on 20.0.0.23    N/A Y 40395

Task Status of Volume glance

------------------------------------------------------------------------------

There are no active volume tasks

[root@YUN21 ~]# gluster volume status cinder

Status of volume: cinder

Gluster process      Port Online Pid

------------------------------------------------------------------------------

Brick 20.0.0.21:/gv2/brick    49154 Y 18362

Brick 20.0.0.22:/gv2/brick    49154 Y 39993

Brick 20.0.0.23:/gv2/brick    49154 Y 40369

Brick 20.0.0.24:/gv2/brick    49154 Y 30989

NFS Server on localhost     2049 Y 18374

Self-heal Daemon on localhost    N/A Y 18389

NFS Server on 20.0.0.24     2049 Y 31005

Self-heal Daemon on 20.0.0.24    N/A Y 31015

NFS Server on 20.0.0.23     2049 Y 40385

Self-heal Daemon on 20.0.0.23    N/A Y 40395

NFS Server on 20.0.0.22     2049 Y 40010

Self-heal Daemon on 20.0.0.22    N/A Y 40020

Task Status of Volume cinder

------------------------------------------------------------------------------

There are no active volume tasks

Configure automatic mounting for the glance and cinder volumes (only needed on YUN21):

[root@YUN21 ~]# echo "20.0.0.21:/glance /var/lib/glance/images/ glusterfs defaults,_netdev 0 0" >> /etc/fstab

[root@YUN21 ~]# mount -a && mount

/dev/mapper/vg_YUN21-lv_root on / type ext4 (rw)

proc on /proc type proc (rw)

sysfs on /sys type sysfs (rw)

devpts on /dev/pts type devpts (rw,gid=5,mode=620)

tmpfs on /dev/shm type tmpfs (rw)

/dev/sda1 on /boot type ext4 (rw)

/dev/mapper/vg_YUN21-lv_gv0 on /gv0 type xfs (rw,nobarrier)

/dev/mapper/vg_YUN21-lv_gv1 on /gv1 type xfs (rw,nobarrier)

/dev/mapper/vg_YUN21-lv_gv2 on /gv2 type xfs (rw,nobarrier)

/srv/loopback-device/swiftloopback on /srv/node/swiftloopback type ext4 (rw,noatime,nodiratime,nobarrier,user_xattr,nobarrier,loop=/dev/loop0)

none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

20.0.0.21:/nova on /var/lib/nova/instances type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)

20.0.0.21:/glance on /var/lib/glance/images type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)

[root@YUN21 ~]# service openstack-glance-api restart

Stopping openstack-glance-api:                             [  OK  ]

Starting openstack-glance-api:                             [  OK  ]

Fix the directory ownership:

[root@YUN21 ~]# ll -d /var/lib/glance/images/

drwxr-xr-x 3 root root 46 Dec 25 18:20 /var/lib/glance/images/

[root@YUN21 ~]# chown -R glance:glance /var/lib/glance/images/

[root@YUN21 ~]# ll -d /var/lib/glance/images/

drwxr-xr-x 3 glance glance 46 Dec 25 18:20 /var/lib/glance/images/

Configure the cinder volume:

[root@YUN21 ~]# vi /etc/cinder/share.conf

20.0.0.21:/cinder

[root@YUN21 ~]# chmod 0640 /etc/cinder/share.conf

[root@YUN21 ~]# chown root:cinder /etc/cinder/share.conf

[root@YUN21 ~]# vi /etc/cinder/cinder.conf

volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver

to

volume_driver=cinder.volume.drivers.glusterfs.GlusterfsDriver

Add:

glusterfs_shares_config=/etc/cinder/share.conf

glusterfs_mount_point_base=/var/lib/cinder/volumes
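The driver swap and the two added options can be scripted as well. A sketch against a miniature stand-in for /etc/cinder/cinder.conf (the stand-in is illustrative; the real file has many more options):

```shell
#!/bin/sh
# Miniature stand-in for /etc/cinder/cinder.conf.
cat > cinder.conf <<'EOF'
[DEFAULT]
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
EOF

# Swap the LVM driver for the GlusterFS driver and add its two options.
sed -i 's|^volume_driver=.*|volume_driver=cinder.volume.drivers.glusterfs.GlusterfsDriver|' cinder.conf
cat >> cinder.conf <<'EOF'
glusterfs_shares_config=/etc/cinder/share.conf
glusterfs_mount_point_base=/var/lib/cinder/volumes
EOF
cat cinder.conf
```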

[root@YUN21 ~]# for i in api scheduler volume; do sudo service openstack-cinder-${i} restart; done

Stopping openstack-cinder-api:                             [  OK  ]

Starting openstack-cinder-api:                             [  OK  ]

Stopping openstack-cinder-scheduler:                       [  OK  ]

Starting openstack-cinder-scheduler:                       [  OK  ]

Stopping openstack-cinder-volume:                          [  OK  ]

Starting openstack-cinder-volume:                          [  OK  ]

[root@YUN21 ~]# mount

/dev/mapper/vg_YUN21-lv_root on / type ext4 (rw)

proc on /proc type proc (rw)

sysfs on /sys type sysfs (rw)

devpts on /dev/pts type devpts (rw,gid=5,mode=620)

tmpfs on /dev/shm type tmpfs (rw)

/dev/sda1 on /boot type ext4 (rw)

/dev/mapper/vg_YUN21-lv_gv0 on /gv0 type xfs (rw,nobarrier)

/dev/mapper/vg_YUN21-lv_gv1 on /gv1 type xfs (rw,nobarrier)

/dev/mapper/vg_YUN21-lv_gv2 on /gv2 type xfs (rw,nobarrier)

/srv/loopback-device/swiftloopback on /srv/node/swiftloopback type ext4 (rw,noatime,nodiratime,nobarrier,user_xattr,nobarrier,loop=/dev/loop0)

none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

20.0.0.21:/nova on /var/lib/nova/instances type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)

20.0.0.21:/glance on /var/lib/glance/images type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)

20.0.0.21:/cinder on /var/lib/cinder/volumes/6c05f25454fce4801c6aae690faff3dc type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)

4. System tuning

Increase instance network throughput.

On the controller node:

[root@YUN21 ~]# vi /etc/neutron/dhcp_agent.ini

dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

[root@YUN21 ~]# vi /etc/neutron/dnsmasq-neutron.conf

dhcp-option-force=26,1400

[root@YUN21 ~]# ethtool -K eth1 tso off

[root@YUN21 ~]# ethtool -K eth2 tso off

[root@YUN21 ~]# ethtool -K eth4 tso off

[root@YUN21 ~]# ethtool -K eth1 gro off

[root@YUN21 ~]# ethtool -K eth2 gro off

[root@YUN21 ~]# ethtool -K eth4 gro off

[root@YUN21 ~]# vi /etc/rc.d/rc.local

ethtool -K eth1 tso off

ethtool -K eth2 tso off

ethtool -K eth4 tso off

Disable tso and gro on the corresponding NICs inside the created instances in the same way.
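The six ethtool calls reduce to a loop, which also makes the rc.local persistence complete (the rc.local snippet above covers only tso). A sketch that writes the commands to a local `rc.local.snippet` rather than running them, since ethtool needs root and the physical NICs:

```shell
#!/bin/sh
# Disable TSO and GRO on every data-plane NIC; the same commands are
# appended to rc.local so the settings survive a reboot.
OUT=rc.local.snippet           # illustrative target; use /etc/rc.d/rc.local for real
: > "$OUT"
for nic in eth1 eth2 eth4; do
    for feature in tso gro; do
        echo "ethtool -K $nic $feature off" >> "$OUT"
    done
done
cat "$OUT"
```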

Original article: http://blog.51cto.com/xiaoxiaozhou/2113302

