Setting up KVM on RedHat 7.3


1. Install RedHat 7.3 on the host

1.1 Select the language

Chinese, Simplified Chinese (China)

1.2 Installation destination

1.2.1 Custom partitioning: choose LVM and allocate all of the space to the root partition

1.2.2 Disable Kdump

2. Install KVM

2.1 Pre-installation preparation

2.1.1 Configure the yum repository
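The original does not show the repository file itself; a minimal sketch, assuming the RHEL 7.3 installation ISO is mounted at /mnt/cdrom (a hypothetical mount point):

mkdir -p /mnt/cdrom
mount /dev/cdrom /mnt/cdrom
cat > /etc/yum.repos.d/rhel73-local.repo <<'EOF'
[rhel73-local]
name=RHEL 7.3 local ISO
baseurl=file:///mnt/cdrom
enabled=1
gpgcheck=0
EOF
yum clean all && yum repolist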

2.1.2 Disable SELinux and the firewall

setenforce 0

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

systemctl disable firewalld

systemctl stop firewalld

2.2 Install the GNOME desktop environment

yum -y groupinstall "X Window System"

yum install gnome-classic-session gnome-terminal nautilus-open-terminal control-center liberation-mono-fonts -y

2.2.1 Set the system to boot into the graphical target by default

In RHEL 7, /etc/inittab is no longer parsed at boot; the line

#graphical.target: analogous to runlevel 5

in it is only a reminder comment, so editing it has no effect. Set the default target with systemd instead:

systemctl set-default graphical.target
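The setting can then be verified with a quick check (not in the original):

systemctl get-default   # should print graphical.target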

2.3 Install vncserver

2.3.1 Install vncserver

yum -y install vnc *vnc-server*

2.3.2 Set the VNC password

[root@localhost ~]# vncserver

You will require a password to access your desktops.

Password:*****   ### enter the password

Verify:*****      ### confirm the password

2.3.3 Start the service

vncserver :1
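Display :1 listens on TCP port 5901; for example, from another Linux machine with a TigerVNC client installed (an assumed setup, not part of the original):

vncviewer 192.161.14.247:1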

2.3.4 Stop the service

vncserver -kill :1

2.4 Install KVM

2.4.1 Check whether the host CPU supports VT

cat /proc/cpuinfo |grep vmx  # Intel CPU
cat /proc/cpuinfo |grep svm  # AMD CPU

# If the flags field contains vmx or svm, the CPU supports VT. If there is no output at all, the CPU does not support it and KVM virtual machines cannot be used.
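An equivalent quick check (not in the original) simply counts the matching flags; a non-zero result means VT is supported:

egrep -c '(vmx|svm)' /proc/cpuinfo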

2.4.2 Make sure VT is enabled in the BIOS (Intel(R) Virtualization Tech [Enabled]); then check that the KVM modules are loaded with the following command

[root@localhost ~]# lsmod | grep kvm

kvm_intel             170181  0

kvm                   554609  1 kvm_intel

irqbypass              13503  1 kvm
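If lsmod prints nothing, the modules can be loaded by hand (a sketch assuming an Intel CPU; use kvm_amd on AMD):

modprobe kvm_intel   # pulls in the kvm module as a dependency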

2.4.3 Bridged networking

Install bridge-utils, which provides brctl, the tool used to manage bridges

yum -y install bridge-utils

2.4.4 Install qemu-kvm, libvirt, virt-install and virt-manager

yum -y install qemu-kvm libvirt virt-install virt-manager openssh-askpass

2.4.5 Modify the configuration

In /etc/libvirt/qemu.conf, change:

dynamic_ownership=1

#user = "root"

#group = "root"

to:

dynamic_ownership=0

user = "root"

group = "root"

2.4.6 Restart the service and enable it at boot

systemctl restart libvirtd

systemctl enable libvirtd
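A quick sanity check that libvirtd is running and KVM is usable (not in the original):

systemctl status libvirtd
virsh nodeinfo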

2.4.7 Modify the network configuration files

nmcli c add type bridge autoconnect yes con-name br0 ifname br0
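As an alternative to hand-editing the ifcfg files shown below, the bridge's address and the slave interface can also be configured with nmcli (a sketch using the values from this document; the connection name ens192-br0 is arbitrary):

nmcli c modify br0 ipv4.method manual ipv4.addresses 192.161.14.247/24 ipv4.gateway 192.161.14.1
nmcli c add type bridge-slave autoconnect yes con-name ens192-br0 ifname ens192 master br0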

cd /etc/sysconfig/network-scripts/

[root@localhost network-scripts]# cat ifcfg-br0

DEVICE=br0

STP=yes

BRIDGING_OPTS=priority=32768

TYPE=Bridge

BOOTPROTO=none

DEFROUTE=yes

PEERDNS=yes

PEERROUTES=yes

NAME=br0

ONBOOT=yes

IPADDR=192.161.14.247

NETMASK=255.255.255.0

GATEWAY=192.161.14.1

[root@localhost network-scripts]# cat ifcfg-ens192

TYPE=Ethernet

BOOTPROTO=none

BRIDGE=br0

DEFROUTE=yes

PEERDNS=yes

PEERROUTES=yes

NAME=ens192

UUID=89e79501-94d5-4e32-a215-dad967527107

DEVICE=ens192

ONBOOT=yes

Restart the network: systemctl restart network

Check the network:

[root@localhost network-scripts]# ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1

link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

inet 127.0.0.1/8 scope host lo

valid_lft forever preferred_lft forever

inet6 ::1/128 scope host

valid_lft forever preferred_lft forever

2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UP qlen 1000

link/ether 00:50:56:83:03:6a brd ff:ff:ff:ff:ff:ff

3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000

link/ether 52:54:00:26:16:70 brd ff:ff:ff:ff:ff:ff

inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0

valid_lft forever preferred_lft forever

4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000

link/ether 52:54:00:26:16:70 brd ff:ff:ff:ff:ff:ff

7: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000

link/ether 00:50:56:83:03:6a brd ff:ff:ff:ff:ff:ff

inet 192.161.14.247/24 brd 192.161.14.255 scope global br0

valid_lft forever preferred_lft forever

inet6 fd3c:dfbd:20c3:d000:250:56ff:fe83:36a/64 scope global mngtmpaddr dynamic

valid_lft 7094sec preferred_lft 3494sec

inet6 fe80::250:56ff:fe83:36a/64 scope link

valid_lft forever preferred_lft forever

2.5 Set up passwordless SSH trust between the hosts

a)

ssh-keygen -t rsa

# generate the public/private key pair

b) Synchronize /root/.ssh/authorized_keys across the hosts
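For example, run from each host (the peer address below is a placeholder):

ssh-copy-id root@<peer-host-IP>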

2.6 Configure VLANs

2.6.1 Load the 8021q module at boot

echo '/usr/sbin/modprobe 8021q' >> /etc/rc.local

chmod +x /etc/rc.local

2.6.2 Upload vconfig-1.9-8.1.el6.x86_64.rpm, which is used to create VLANs

rpm -ivh vconfig-1.9-8.1.el6.x86_64.rpm

2.6.3 Create VLAN 140

a)

[root@localhost network-scripts]# vconfig add eno1 140

Added VLAN with VID == 140 to IF -:eno1:

Command format:

vconfig add <physical-NIC-name> <vlan-id>
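On RHEL 7 the same VLAN interface can also be created without vconfig, using iproute2 (an alternative sketch, not what the original uses):

ip link add link eno1 name eno1.140 type vlan id 140
ip link set eno1.140 up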

b)

[root@localhost network-scripts]# ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN

link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

inet 127.0.0.1/8 scope host lo

valid_lft forever preferred_lft forever

inet6 ::1/128 scope host

valid_lft forever preferred_lft forever

2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP qlen 1000

link/ether 30:e1:71:55:a8:f4 brd ff:ff:ff:ff:ff:ff

inet6 fe80::32e1:71ff:fe55:a8f4/64 scope link

valid_lft forever preferred_lft forever

3: eno2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000

link/ether 30:e1:71:55:a8:f5 brd ff:ff:ff:ff:ff:ff

4: eno3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000

link/ether 30:e1:71:55:a8:f6 brd ff:ff:ff:ff:ff:ff

5: eno4: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000

link/ether 30:e1:71:55:a8:f7 brd ff:ff:ff:ff:ff:ff

45: eno1.140@eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN

link/ether 30:e1:71:55:a8:f4 brd ff:ff:ff:ff:ff:ff

# a new virtual NIC, eno1.140, has appeared

c) Create the corresponding VLAN NIC configuration file and bridge configuration file; a sketch is given below
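The original does not list these files; a minimal sketch consistent with the ip addr output later in this section (the 172.16.4.40/24 address comes from the br140 entry shown there):

# /etc/sysconfig/network-scripts/ifcfg-eno1.140
DEVICE=eno1.140
VLAN=yes
BOOTPROTO=none
BRIDGE=br140
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-br140
DEVICE=br140
TYPE=Bridge
BOOTPROTO=none
IPADDR=172.16.4.40
NETMASK=255.255.255.0
ONBOOT=yes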

d) Restart the network

[root@localhost network-scripts]# service network restart

Restarting network (via systemctl):  [  OK  ]

e) Check the bridge bindings

[root@localhost network-scripts]# brctl show

bridge name     bridge id               STP enabled     interfaces

br0             8000.30e17155a8f4       yes             eno1

br140           8000.30e17155a8f4       yes             eno1.140

br20            8000.30e17155a8f4       yes             eno1.20

vnet1

virbr0          8000.5254009c7586       yes             virbr0-nic

# the bridges are bound successfully

f) Check that the bridges are running normally

[root@localhost network-scripts]# ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN

link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

inet 127.0.0.1/8 scope host lo

valid_lft forever preferred_lft forever

inet6 ::1/128 scope host

valid_lft forever preferred_lft forever

2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP qlen 1000

link/ether 30:e1:71:55:a8:f4 brd ff:ff:ff:ff:ff:ff

inet6 fe80::32e1:71ff:fe55:a8f4/64 scope link

valid_lft forever preferred_lft forever

3: eno2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000

link/ether 30:e1:71:55:a8:f5 brd ff:ff:ff:ff:ff:ff

4: eno3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000

link/ether 30:e1:71:55:a8:f6 brd ff:ff:ff:ff:ff:ff

5: eno4: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000

link/ether 30:e1:71:55:a8:f7 brd ff:ff:ff:ff:ff:ff

8: br20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP

link/ether 30:e1:71:55:a8:f4 brd ff:ff:ff:ff:ff:ff

inet 172.16.4.100/24 brd 172.16.4.255 scope global br20

valid_lft forever preferred_lft forever

inet6 fe80::32e1:71ff:fe55:a8f4/64 scope link

valid_lft forever preferred_lft forever

9: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN

link/ether 52:54:00:9c:75:86 brd ff:ff:ff:ff:ff:ff

inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0

valid_lft forever preferred_lft forever

10: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 500

link/ether 52:54:00:9c:75:86 brd ff:ff:ff:ff:ff:ff

40: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br20 state UNKNOWN qlen 500

link/ether fe:54:00:f3:1a:6a brd ff:ff:ff:ff:ff:ff

inet6 fe80::fc54:ff:fef3:1a6a/64 scope link

valid_lft forever preferred_lft forever

45: eno1.140@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br140 state UP

link/ether 30:e1:71:55:a8:f4 brd ff:ff:ff:ff:ff:ff

inet6 fe80::32e1:71ff:fe55:a8f4/64 scope link

valid_lft forever preferred_lft forever

46: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP

link/ether 30:e1:71:55:a8:f4 brd ff:ff:ff:ff:ff:ff

inet 192.161.14.247/24 brd 192.161.14.255 scope global br0

valid_lft forever preferred_lft forever

inet6 fd3c:dfbd:20c3:d000:32e1:71ff:fe55:a8f4/64 scope global mngtmpaddr dynamic

valid_lft 7004sec preferred_lft 3404sec

inet6 fd51:8056:6705:0:32e1:71ff:fe55:a8f4/64 scope global mngtmpaddr dynamic

valid_lft 7200sec preferred_lft 1800sec

inet6 fd51:8056:6705:4:32e1:71ff:fe55:a8f4/64 scope global mngtmpaddr dynamic

valid_lft 7200sec preferred_lft 1800sec

inet6 fe80::32e1:71ff:fe55:a8f4/64 scope link

valid_lft forever preferred_lft forever

47: br140: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP

link/ether 30:e1:71:55:a8:f4 brd ff:ff:ff:ff:ff:ff

inet 172.16.4.40/24 brd 172.16.4.255 scope global br140

valid_lft forever preferred_lft forever

inet6 fe80::32e1:71ff:fe55:a8f4/64 scope link

valid_lft forever preferred_lft forever

48: eno1.20@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br20 state UP

link/ether 30:e1:71:55:a8:f4 brd ff:ff:ff:ff:ff:ff

inet6 fe80::32e1:71ff:fe55:a8f4/64 scope link

valid_lft forever preferred_lft forever

# running normally

3. VM migration

virsh migrate --live rhel7.3  qemu+ssh://192.161.14.250/system
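Note that live migration normally requires the disk image to be reachable at the same path on both hosts (shared storage), or the --copy-storage-all option of virsh migrate. The result can be checked from the source host, for example:

virsh -c qemu+ssh://192.161.14.250/system list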

3.1 Check the virtual machine's disk format

[root@localhost images]# qemu-img info redhat7.3

image: redhat7.3

file format: qcow2

virtual size: 60G (64424509440 bytes)

disk size: 1.3G

cluster_size: 65536

Format specific information:

compat: 1.1

lazy refcounts: true

3.2 Clone the template's disk file

[root@localhost images]# qemu-img create -f qcow2 -b redhat7.3 KVM2-VM1

### the clone is in qcow2 format

Formatting 'KVM2-VM1', fmt=qcow2 size=64424509440 backing_file='redhat7.3' encryption=off cluster_size=65536 lazy_refcounts=off

3.3 Clone the template's configuration file

virsh dumpxml rhel7.3 > /etc/libvirt/qemu/KVM2-VM1.xml

### rhel7.3 is the name of the template VM and KVM2-VM1 is the name of the new configuration file to generate; keep it consistent with the disk file name created above

3.4 Delete the NIC's MAC address and the template VM's UUID

<mac address='52:54:00:f3:1a:6a'/>

<uuid>18f4b3eb-4d0f-4cac-bc3f-e3798fa4746c</uuid>

3.5 Change the disk file path

<source file='/var/lib/libvirt/images/redhat7.3'/>

3.6 Change the virtual machine's name

<name>rhel7.3</name>
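Steps 3.4 to 3.6 can also be applied in one pass with sed (a sketch using the names above; editing the XML by hand works just as well):

sed -i -e '/<uuid>/d' \
       -e '/<mac address/d' \
       -e 's|images/redhat7.3|images/KVM2-VM1|' \
       -e 's|<name>rhel7.3</name>|<name>KVM2-VM1</name>|' \
       /etc/libvirt/qemu/KVM2-VM1.xml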

3.7 Define the new virtual machine

[root@localhost qemu]# virsh define /etc/libvirt/qemu/KVM2-VM1.xml

Domain KVM2-VM1 defined from /etc/libvirt/qemu/KVM2-VM1.xml

3.8 Migration error

Error starting domain: internal error: process exited while connecting to monitor: 2017-08-29T05:09:58.146446Z qemu-kvm: -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/domain-rhel7.3/org.qemu.guest_agent.0,server,nowait: Failed to bind socket: No such file or directory

2017-08-29T05:09:58.146488Z qemu-kvm: -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/domain-rhel7.3/org.qemu.guest_agent.0,server,nowait: chardev: opening backend "socket" failed

Creating the corresponding directory under /var/lib/libvirt/qemu/channel/target/ fixes the problem; see the example below.
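For the error above, for example (the path is taken from the error message):

mkdir -p /var/lib/libvirt/qemu/channel/target/domain-rhel7.3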

3.9 From clone to migration

3.9.1 Cloning with virt-manager

a)

# Select the VM to clone (it must be shut off or paused)

b)

# Set the name of the new VM; under Storage, choose Details (do not accept the default, otherwise the VM name and the disk name will not match, which makes management harder)

c)

# Set the location and name of the VM's disk; it should match the VM name

d)

# Click Clone

e)

# Once cloning finishes, the new VM appears in the VM list

3.9.2 Cloning with the virt-clone command

a) Clone command format

virt-clone -o <source-VM-name> -n <target-VM-name> -f /var/lib/libvirt/images/<target-VM-name>

b) List all virtual machines on the host

[root@localhost qemu]# virsh list --all

Id    Name                           State

----------------------------------------------------

16    rhel7.3                        paused

-     KVM2-VM1                       shut off

-     KVM2-VM2                       shut off

-     KVM2-VM3                       shut off

-     KVM2-VM5                       shut off

-     KVM2-VM6                       shut off

-     KVM2-VM7                       shut off

c) Clone VM KVM2-VM8

[root@localhost qemu]# virt-clone -o rhel7.3 -n KVM2-VM8 -f /var/lib/libvirt/images/KVM2-Vm8

WARNING  Setting the graphics device port to autoport, in order to avoid conflicting.

Allocating 'KVM2-Vm8'                                                                                                                                 |  60 GB  00:00:02

Clone 'KVM2-VM8' created successfully.

d) Check that the clone was created successfully

[root@localhost target]# virsh list --all

Id    Name                           State

----------------------------------------------------

16    rhel7.3                        paused

-     KVM2-VM1                       shut off

-     KVM2-VM2                       shut off

-     KVM2-VM3                       shut off

-     KVM2-VM5                       shut off

-     KVM2-VM6                       shut off

-     KVM2-VM7                       shut off

-     KVM2-VM8                       shut off

3.9.3 Static (offline) VM migration

a) Migrate VM KVM2-VM8 to host KVM2

b) Copy the configuration file

[root@localhost target]# scp /etc/libvirt/qemu/KVM2-VM8.xml root@<KVM2-host-IP>:/etc/libvirt/qemu/

KVM2-VM8.xml

c) Copy the disk image file

[root@localhost target]# scp /var/lib/libvirt/images/KVM2-VM  root@<KVM2-host-IP>:/var/lib/libvirt/images/

d) Define the VM from the configuration file on the target host

[root@localhost target]# virsh define /etc/libvirt/qemu/KVM2-VM8.xml

e) Check the VMs on the target host

[root@localhost target]# virsh list --all

Id    Name                           State

----------------------------------------------------

-     KVM2-VM1                       shut off

-     KVM2-VM2                       shut off

-     KVM2-VM3                       shut off

-     KVM2-VM5                       shut off

-     KVM2-VM6                       shut off

-     KVM2-VM8                       shut off

# migration succeeded
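As a final check (not in the original), the migrated VM can be started on the new host, assuming the disk path in its XML points at the copied image:

virsh start KVM2-VM8
virsh list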

References

http://www.linuxidc.com/Linux/2015-12/126690.htm ### desktop installation

http://www.linuxidc.com/Linux/2016-07/132835.htm ### VNC installation

http://blog.csdn.net/skykingf/article/details/51944455 ### KVM installation (1)

http://www.cnblogs.com/lvxiaobo616/p/5704646.html ### KVM installation (2)

http://blog.csdn.net/qq_19646075/article/details/51780530 ### VM migration (1)

http://www.cnblogs.com/sammyliu/p/4572287.html ### VM migration (2)

Note

If you are testing this inside a virtual machine, make sure the NIC is set to promiscuous mode; otherwise guests on the bridged network will not be able to get connectivity.
