Qemu-KVM Virtualization

Virtualization Overview

What is virtualization?

  • Virtualization is the use of virtualization technology to present one physical computer as multiple logical computers.
  • Multiple logical computers run on one physical machine at the same time; each can run a different operating system, and applications run in isolated spaces without affecting one another, which significantly improves how much work the machine does.
  • Virtualization uses software to redefine and partition IT resources, enabling dynamic allocation, flexible scheduling, and cross-domain sharing of those resources, and thus higher resource utilization.

Why do enterprises use virtualization?

1. It saves cost.
2. It improves efficiency.

Terminology: the physical machine is called the host, and the virtual machines running on it are called guests.

What software does virtualization use to allocate resources?

  • This is mainly done by a program called a Hypervisor.
    • Hypervisor: a software layer that sits between the physical server hardware and the operating systems,
      allowing multiple operating systems and applications to share the hardware resources

Hypervisor-based virtualization comes in two types:

Type 1 (bare-metal) hypervisors:

  • The hypervisor is installed directly on the physical machine, and the virtual machines run on top of it. The hypervisor is usually implemented as a specially customized Linux system. Xen and VMware ESXi belong to this type.

  • Because they are specially optimized for hardware virtualization, Type 1 hypervisors generally outperform Type 2 (hosted) hypervisors.

image-20221008000152632

Type 2 (hosted) hypervisors:

  • A regular operating system, such as Red Hat, Ubuntu, or Windows, is installed on the physical machine first.

  • The hypervisor runs as a software module on that OS and manages the virtual machines. KVM, VirtualBox, and VMware Workstation belong to this type.

  • Because they sit on top of an ordinary operating system, Type 2 hypervisors are more flexible; for example, they support nested virtualization, which means you can run KVM inside a KVM guest.

image-20221008000159301

Summary: Type 1 (bare-metal) hypervisors are specially optimized for hardware virtualization and generally offer higher performance; Type 2 (hosted) hypervisors, being based on an ordinary operating system, are more flexible, e.g. they support nesting, which means running KVM inside a KVM guest.

KVM Overview

What is KVM?

  • KVM stands for Kernel-based Virtual Machine.

  • KVM is open-source, kernel-based virtualization: in practice it is a virtualization module embedded in the Linux kernel. That module turns Linux itself into a hypervisor, and the virtual machines are managed by Linux's own scheduler.

  • Since Linux 2.6.20, KVM has gradually replaced Xen as the hypervisor integrated into the major Linux distributions.

  • KVM ships as a kernel module, kvm.ko. KVM itself handles only vCPU scheduling and memory management; I/O and peripheral devices are left to the Linux kernel and QEMU.
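The pieces just described can be checked directly from a shell; a minimal read-only sketch that is safe on any Linux host (it only reports what it finds):

```shell
# Report the three prerequisites KVM relies on: the CPU extensions,
# the kvm module, and the /dev/kvm device node.
flags=$(grep -Eo 'vmx|svm' /proc/cpuinfo | sort -u | tr '\n' ' ')
echo "CPU virtualization extensions: ${flags:-none}"
grep -qs '^kvm ' /proc/modules && kvm_loaded=yes || kvm_loaded=no
echo "kvm.ko loaded: $kvm_loaded"
[ -c /dev/kvm ] && dev_kvm=present || dev_kvm=absent
echo "/dev/kvm: $dev_kvm"
```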

How is KVM managed?

  • Libvirt is the management tool for KVM.

  • Besides the KVM hypervisor, Libvirt can also manage Xen, VirtualBox, and others.

  • Libvirt consists of three parts: the background daemon libvirtd, an API library, and the command-line tool virsh.

    • libvirtd is the service process; it receives and handles API requests.
    • The API library lets others build higher-level tools on top of Libvirt, such as virt-manager, a graphical KVM management tool.
    • virsh is the KVM command-line tool we will use most often.
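A few virsh commands are worth knowing from day one. A sketch, assuming the libvirt packages are installed; the guard keeps it harmless on a host without them:

```shell
# List guests, virtual networks, and storage pools via virsh.
# "|| true" keeps the snippet from aborting if libvirtd is not yet running.
if command -v virsh >/dev/null 2>&1; then
    virsh list --all || true        # every defined guest, running or shut off
    virsh net-list --all || true    # libvirt networks (e.g. the NAT "default")
    virsh pool-list --all || true   # storage pools
    virsh_found=yes
else
    echo "virsh not found; install libvirt first"
    virsh_found=no
fi
```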

KVM Virtualization Architecture

KVM virtualization has two core components:

1) The KVM kernel modules: the core virtualization module kvm.ko plus the hardware-specific kvm_intel or kvm_amd module. They handle CPU and memory virtualization, including VM creation, memory allocation and management, and vCPU execution-mode switching.

2) QEMU device emulation: QEMU implements I/O virtualization and emulates the various devices (disk, NIC, graphics, sound, and so on), talking to the KVM kernel side through ioctl system calls. KVM supports only hardware-assisted virtualization (Intel VT and AMD-V). When the module loads, KVM first initializes its internal data structures, flips the virtualization switch in the CPU control register CR4, executes the VMXON instruction to put the host OS into root mode, and creates the special device file /dev/kvm to wait for commands from user space. From then on the KVM kernel side and QEMU cooperate to manage VMs. KVM reuses parts of the Linux kernel, such as process scheduling, device drivers, and memory management.
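Once kvm_intel or kvm_amd is loaded, its parameters are visible under /sys; a read-only sketch (the nested parameter is what enables the KVM-inside-KVM nesting mentioned earlier):

```shell
# Inspect the vendor-specific KVM module, if one is loaded.
vendor_mod=none
for m in kvm_intel kvm_amd; do
    if [ -d "/sys/module/$m" ]; then
        vendor_mod=$m
        nested=$(cat "/sys/module/$m/parameters/nested" 2>/dev/null || echo '?')
        echo "$m loaded, nested=$nested"   # 1 or Y means nested virtualization is on
    fi
done
echo "vendor module: $vendor_mod"
```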

QEMU Overview

KVM itself performs no device emulation; it relies on the user-space program QEMU to set up a guest's address space through the /dev/kvm interface.

image-20221008001206402

The relationship between KVM and QEMU

  • QEMU is an independent virtualization solution: a system with QEMU installed can directly emulate a completely different system environment, and with Intel VT or AMD SVM it achieves full virtualization. QEMU itself does not depend on KVM, but if KVM is present and the processor supports a feature such as Intel VT, QEMU can use KVM's facilities to accelerate the CPU-virtualization part.
  • KVM is the hypervisor integrated into the Linux kernel: a full-virtualization solution for Linux on x86 hardware with virtualization extensions (Intel VT or AMD-V). It is a small kernel module that leans on Linux for the heavy lifting, such as task scheduling, memory management, and interacting with hardware devices. Strictly speaking, KVM is a module of the Linux kernel.
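This division of labor shows up on the QEMU command line: the same binary can run a guest with pure software emulation (TCG) or hand the CPU work to KVM. A sketch; the qemu-system-x86_64 name is an assumption (on this lab's host the package installs /usr/libexec/qemu-kvm instead), and the boot lines are left as comments because they need a guest image:

```shell
# Find whichever QEMU binary is available and ask which accelerators it supports.
QEMU=$(command -v qemu-system-x86_64 || command -v qemu-kvm || echo "")
if [ -n "$QEMU" ]; then
    "$QEMU" -accel help || true   # typically lists tcg (pure emulation) and kvm
    # pure software emulation, no /dev/kvm needed (slow):
    #   "$QEMU" -accel tcg -m 1024 -cdrom guest.iso
    # hardware-assisted through KVM (fast, needs /dev/kvm):
    #   "$QEMU" -accel kvm -m 1024 -cdrom guest.iso
else
    echo "no QEMU binary on PATH"
fi
```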

QEMU's three run modes

  • The first mode uses the kqemu module for kernel-side acceleration (kqemu is long obsolete).
  • The second mode runs QEMU entirely in user space: QEMU translates every guest instruction before executing it, which amounts to full virtualization in software.
  • The third mode is the officially supported kvm-qemu accelerated mode.

QEMU's features

  • QEMU can run without any host kernel driver.
  • It works on many operating systems (GNU/Linux, *BSD, Mac OS X, Windows) and architectures.
  • It performs accurate software emulation of the FPU.

QEMU's two operating modes: full-system emulation and user-mode emulation

  • QEMU user-mode emulation provides:
    • 1. A generic Linux system-call translator, covering most ioctls.
    • 2. Thread support by emulating clone() with the native CPU's clone(), so guest threads are handled by the Linux scheduler.
    • 3. Accurate signal handling by remapping host signals to target signals.
  • QEMU full-system emulation provides:
    • 1. A complete software MMU for maximum portability.
    • 2. Optional kernel accelerators such as KVM: the accelerator executes most of the guest code natively while QEMU keeps emulating the rest of the machine.
    • 3. Emulation of a wide range of hardware devices; in some cases the guest OS can transparently use host devices (e.g. serial and parallel ports, USB, drives). Host device passthrough can be used to talk to external physical peripherals such as webcams, modems, or tape drives.
    • 4. Symmetric multiprocessing (SMP) support; currently a kernel accelerator is required to make use of multiple host CPUs.
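The two modes are invoked through different binaries. A sketch of what each command line looks like (the binaries come from the qemu-user and qemu-system packages and are assumptions here, so the commands are shown rather than executed):

```shell
# user-mode emulation: run one foreign-architecture binary; its syscalls are
# translated and passed to the host kernel, no guest OS involved:
user_mode='qemu-aarch64 ./hello-arm64'
# full-system emulation: boot an entire machine, with firmware, kernel, and devices:
full_system='qemu-system-aarch64 -M virt -m 1024 -kernel Image -append console=ttyAMA0'
printf 'user-mode:   %s\nfull-system: %s\n' "$user_mode" "$full_system"
```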

Deploying Qemu-KVM

Lab environment notes

  1. Give the VM at least 8 GB of memory.
  2. In the VM's CPU settings, tick the virtualization-engine option [Virtualize Intel VT-x/EPT or AMD-V/RVI].
  3. Add a 200 GB disk.

image-20221007154459633

  1. Check whether the VM supports KVM virtualization: Intel CPUs show vmx, AMD CPUs show svm.
egrep -o 'vmx|svm' /proc/cpuinfo

Deployment

#Verify KVM support: Intel CPUs show vmx, AMD CPUs show svm
[root@Qume-KVM ~]# egrep -o 'vmx|svm' /proc/cpuinfo
svm
svm
svm
svm

#Disable the firewall and SELinux
[root@Qume-KVM ~]# setenforce 0
[root@Qume-KVM ~]# sed -ri 's/^(SELINUX=).*/\1disabled/g' /etc/selinux/config
[root@Qume-KVM ~]# systemctl disable --now firewalld.service

#Find the new disk's device name
[root@Qume-KVM ~]# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda           8:0    0  200G  0 disk
├─sda1        8:1    0    1G  0 part /boot
└─sda2        8:2    0  199G  0 part
  ├─cl-root 253:0    0   70G  0 lvm  /
  ├─cl-swap 253:1    0    2G  0 lvm  [SWAP]
  └─cl-home 253:2    0  127G  0 lvm  /home
sdb           8:16   0  200G  0 disk
sr0          11:0    1 10.1G  0 rom  /mnt/cdrom
#Partition the new disk. Inside parted's prompt, press Tab to see the available commands
[root@Qume-KVM ~]# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel				#create a new disk label (partition table)
New disk label type? msdos		#msdos means an MBR-style partition table
(parted) unit					#set the unit used to display disk sizes
Unit?  [compact]? MiB			#show capacities in MiB
(parted) p					#p is short for print: show the disk's information
Model: VMware, VMware Virtual S (scsi)
Disk /dev/sdb: 204800MiB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start  End  Size  Type  File system  Flags

(parted) mkpart			#create a partition
Partition type?  primary/extended? primary		#partition type
File system type?  [ext2]? xfs			#filesystem type
Start? 10				#start at 10 MiB (the unit set above)
End? 204790				#end at 204790 MiB
(parted) p
Model: VMware, VMware Virtual S (scsi)
Disk /dev/sdb: 204800MiB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start    End        Size       Type     File system  Flags
 1      10.0MiB  204790MiB  204780MiB  primary  xfs          lba

(parted) q		#q is short for quit
Information: You may need to update /etc/fstab.

[root@Qume-KVM ~]# udevadm settle	#wait for the new partition table to be processed
[root@Qume-KVM ~]# lsblk		#check the disk and partition layout
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda           8:0    0  200G  0 disk
├─sda1        8:1    0    1G  0 part /boot
└─sda2        8:2    0  199G  0 part
  ├─cl-root 253:0    0   70G  0 lvm  /
  ├─cl-swap 253:1    0    2G  0 lvm  [SWAP]
  └─cl-home 253:2    0  127G  0 lvm  /home
sdb           8:16   0  200G  0 disk
└─sdb1        8:17   0  200G  0 part
sr0          11:0    1 10.1G  0 rom  /mnt/cdrom
[root@Qume-KVM ~]# mkfs -t xfs /dev/sdb1	#format the partition as xfs
meta-data=/dev/sdb1              isize=512    agcount=4, agsize=13105920 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=52423680, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=25597, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@Qume-KVM ~]# blkid /dev/sdb1		#check the UUID and filesystem type
/dev/sdb1: UUID="d9f5e2c1-ffc3-489b-9cf0-a58287454c6c" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="d7078c1b-01"
[root@Qume-KVM ~]# mkdir /kvmdata		#create the mount point
#add an fstab entry so the mount persists across reboots
[root@Qume-KVM ~]# echo 'UUID=d9f5e2c1-ffc3-489b-9cf0-a58287454c6c  /kvmdata xfs defaults 0 0' >> /etc/fstab
[root@Qume-KVM ~]# mount -a		#mount everything listed in /etc/fstab
[root@Qume-KVM ~]# df -Th		#verify the mount succeeded
Filesystem          Type      Size  Used Avail Use% Mounted on
devtmpfs            devtmpfs  3.8G     0  3.8G   0% /dev
tmpfs               tmpfs     3.8G     0  3.8G   0% /dev/shm
tmpfs               tmpfs     3.8G  8.9M  3.8G   1% /run
tmpfs               tmpfs     3.8G     0  3.8G   0% /sys/fs/cgroup
/dev/mapper/cl-root xfs        70G  2.1G   68G   3% /
/dev/sr0            iso9660    11G   11G     0 100% /mnt/cdrom
/dev/mapper/cl-home xfs       127G  939M  126G   1% /home
/dev/sda1           xfs      1014M  214M  801M  22% /boot
tmpfs               tmpfs     775M     0  775M   0% /run/user/0
/dev/sdb1           xfs       200G  1.5G  199G   1% /kvmdata
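The interactive parted session above can also be scripted for repeatable setups. A sketch with the destructive commands left commented out; DISK is an assumption and must be double-checked with lsblk before anything is uncommented:

```shell
# Non-interactive equivalent of the parted/mkfs/fstab steps above.
DISK=/dev/sdb        # assumption: the second disk, as in this lab
MNT=/kvmdata
echo "plan: partition $DISK, format xfs, mount at $MNT"
# parted -s "$DISK" mklabel msdos mkpart primary xfs 10MiB 204790MiB
# mkfs.xfs "${DISK}1"
# UUID=$(blkid -s UUID -o value "${DISK}1")
# mkdir -p "$MNT"
# echo "UUID=$UUID  $MNT xfs defaults 0 0" >> /etc/fstab
# mount -a
```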

#Configure the yum repository
[root@Qume-KVM ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2495  100  2495    0     0   4863      0 --:--:-- --:--:-- --:--:--  4863
[root@Qume-KVM ~]# sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo

#Install the required dependency packages
[root@Qume-KVM ~]# dnf -y install epel-release
[root@Qume-KVM ~]# dnf -y install vim wget net-tools unzip zip gcc gcc-c++
#Install KVM
[root@Qume-KVM ~]# dnf -y install qemu-kvm qemu-img virt-manager libvirt libvirt-client virt-install virt-viewer libguestfs-tools

#Configure networking. Guests usually need to sit on the same subnet as the company's other servers, so we put the KVM host's NIC into bridged mode
[root@Qume-KVM ~]# cd /etc/sysconfig/network-scripts/
[root@Qume-KVM network-scripts]# cp ifcfg-ens32 ifcfg-br0
[root@Qume-KVM network-scripts]# vim ifcfg-br0
TYPE=Bridge
BOOTPROTO=none
NAME=br0
DEVICE=br0
ONBOOT=yes
IPADDR=192.168.92.130
PREFIX=24
GATEWAY=192.168.92.2
DNS1=8.8.8.8
[root@Qume-KVM network-scripts]# vim ifcfg-ens32
TYPE=Ethernet
BOOTPROTO=none
NAME=ens32
DEVICE=ens32
ONBOOT=yes
BRIDGE=br0

#Reload and bring up the network connections
[root@Qume-KVM ~]# nmcli connection reload
[root@Qume-KVM ~]# nmcli connection up ens32
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/3)
[root@Qume-KVM ~]# nmcli connection up br0
Connection successfully activated (master waiting for slaves) (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/4)

[root@Qume-KVM ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP group default qlen 1000
    link/ether 00:0c:29:9e:e3:c1 brd ff:ff:ff:ff:ff:ff
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:0c:29:9e:e3:c1 brd ff:ff:ff:ff:ff:ff
    inet 192.168.92.130/24 brd 192.168.92.255 scope global noprefixroute br0
       valid_lft forever preferred_lft forever
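On hosts managed purely by NetworkManager, the same bridge can be built with nmcli instead of editing ifcfg files. A sketch using this lab's addresses; the commands are printed rather than executed here, since they reshape the host's network:

```shell
# nmcli equivalent of the ifcfg-br0 / ifcfg-ens32 pair above.
plan=$(cat <<'EOF'
nmcli connection add type bridge ifname br0 con-name br0 \
    ipv4.method manual ipv4.addresses 192.168.92.130/24 \
    ipv4.gateway 192.168.92.2 ipv4.dns 8.8.8.8
nmcli connection add type bridge-slave ifname ens32 con-name ens32-slave master br0
nmcli connection up br0
EOF
)
echo "$plan"
```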

#Enable and start the libvirtd service
[root@Qume-KVM ~]# systemctl enable --now libvirtd
[root@Qume-KVM ~]# lsmod | grep kvm
kvm_amd               135168  0
ccp                    98304  1 kvm_amd
kvm                   880640  1 kvm_amd
irqbypass              16384  1 kvm

#Symlink the qemu-kvm command to /usr/bin/qemu-kvm
[root@Qume-KVM ~]# ln -s /usr/libexec/qemu-kvm /usr/bin/qemu-kvm
[root@Qume-KVM ~]# ll /usr/bin/qemu-kvm
lrwxrwxrwx 1 root root 21 Oct  7 17:27 /usr/bin/qemu-kvm -> /usr/libexec/qemu-kvm

#Install the brctl command (it ships in bridge-utils, which the CentOS 8 repos dropped, hence the el7 rpm)
[root@Qume-KVM ~]# yum -y install console-bridge console-bridge-devel
[root@Qume-KVM ~]# rpm -ivh http://mirror.centos.org/centos/7/os/x86_64/Packages/bridge-utils-1.5-9.el7.x86_64.rpm

#Show bridge information
[root@Qume-KVM ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.000c299ee3c1       no              ens32
virbr0          8000.5254004abd29       yes             virbr0-nic
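bridge-utils is a legacy package; iproute2, which CentOS 8 ships by default, reports the same information:

```shell
# Modern replacements for "brctl show".
if command -v ip >/dev/null 2>&1; then
    ip -o link show type bridge || true   # the bridges themselves (br0, virbr0)
    bridge link show || true              # which ports are enslaved to which bridge
    iproute2=yes
else
    iproute2=no
fi
```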

Installing the KVM Web Management UI

Note: KVM's web UI is provided by the webvirtmgr program

#Install dependencies
[root@Qume-KVM ~]# yum -y install git python2-pip supervisor nginx python2-devel
[root@Qume-KVM ~]# rpm -ivh --nodeps http://mirror.centos.org/centos/7/os/x86_64/Packages/libxml2-python-2.9.1-6.el7.5.x86_64.rpm
[root@Qume-KVM ~]# rpm -ivh --nodeps https://download-ib01.fedoraproject.org/pub/epel/7/x86_64/Packages/p/python-websockify-0.6.0-2.el7.noarch.rpm

#Upgrade pip
[root@Qume-KVM ~]# pip2 install --upgrade pip
WARNING: Running pip install with root privileges is generally not a good idea. Try `pip2 install --user` instead.
Collecting pip
  Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out. (read timeout=15)",)': /packages/27/79/8a850fe3496446ff0d584327ae44e7500daf6764ca1a382d2d02789accf7/pip-20.3.4-py2.py3-none-any.whl
  Downloading https://files.pythonhosted.org/packages/27/79/8a850fe3496446ff0d584327ae44e7500daf6764ca1a382d2d02789accf7/pip-20.3.4-py2.py3-none-any.whl (1.5MB)
    100% |████████████████████████████████| 1.5MB 24kB/s
Installing collected packages: pip
  Found existing installation: pip 9.0.3
    Uninstalling pip-9.0.3:
      Successfully uninstalled pip-9.0.3
Successfully installed pip-20.3.4
You are using pip version 20.3.4, however version 22.2.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
[root@Qume-KVM ~]# pip -V
pip 20.3.4 from /usr/lib/python2.7/site-packages/pip (python 2.7)

#Clone webvirtmgr from GitHub
[root@Qume-KVM src]# git clone http://github.com/retspen/webvirtmgr.git
Cloning into 'webvirtmgr'...
warning: redirecting to https://github.com/retspen/webvirtmgr.git/
remote: Enumerating objects: 5614, done.
remote: Total 5614 (delta 0), reused 0 (delta 0), pack-reused 5614
Receiving objects: 100% (5614/5614), 2.97 MiB | 1.76 MiB/s, done.
Resolving deltas: 100% (3606/3606), done.
[root@Qume-KVM src]# ls
webvirtmgr
#Install webvirtmgr
[root@Qume-KVM src]# cd webvirtmgr/
[root@Qume-KVM webvirtmgr]# ls
conf     deploy                images      locale       networks          secrets    setup.py   Vagrantfile
console  dev-requirements.txt  instance    manage.py    README.rst        serverlog  storages   vrtManager
create   hostdetail            interfaces  MANIFEST.in  requirements.txt  servers    templates  webvirtmgr
[root@Qume-KVM webvirtmgr]# pip install -r requirements.txt
DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. pip 21.0 will drop support for Python 2.7 in January 2021. More details about Python 2 support in pip can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support pip 21.0 will remove support for this functionality.
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='pypi.org', port=443): Read timed out. (read timeout=15)",)': /simple/django/
Collecting django==1.5.5
  Downloading Django-1.5.5.tar.gz (8.1 MB)
     |████████████████████████████████| 8.1 MB 17 kB/s
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='pypi.org', port=443): Read timed out. (read timeout=15)",)': /simple/gunicorn/
Collecting gunicorn==19.5.0
  Downloading gunicorn-19.5.0-py2.py3-none-any.whl (113 kB)
     |████████████████████████████████| 113 kB 29 kB/s
Collecting lockfile>=0.9
  Downloading lockfile-0.12.2-py2.py3-none-any.whl (13 kB)
Using legacy 'setup.py install' for django, since package 'wheel' is not installed.
Installing collected packages: django, gunicorn, lockfile
    Running setup.py install for django ... done
Successfully installed django-1.5.5 gunicorn-19.5.0 lockfile-0.12.2

#Check that sqlite3 is available
[root@Qume-KVM webvirtmgr]# python3
Python 3.6.8 (default, Sep 10 2021, 09:13:53)
[GCC 8.5.0 20210514 (Red Hat 8.5.0-3)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sqlite3
>>> exit()

#Initialize the account database
[root@Qume-KVM webvirtmgr]# python2 manage.py syncdb
WARNING:root:No local_settings file found.
Creating tables ...
Creating table auth_permission
Creating table auth_group_permissions
Creating table auth_group
Creating table auth_user_groups
Creating table auth_user_user_permissions
Creating table auth_user
Creating table django_content_type
Creating table django_session
Creating table django_site
Creating table servers_compute
Creating table instance_instance
Creating table create_flavor

You just installed Django's auth system, which means you don't have any superusers defined.
Would you like to create one now? (yes/no): yes   #whether to create a superuser account
Username (leave blank to use 'root'): root    	#superuser name; leaving it blank defaults to root
Email address: [email protected]			#superuser email address
Password:			#superuser password
Password (again):		#confirm the superuser password
Superuser created successfully.
Installing custom SQL ...
Installing indexes ...
Installed 6 object(s) from 1 fixture(s)

#Copy the web app to its serving directory
[root@Qume-KVM ~]# mkdir /var/www
[root@Qume-KVM ~]# cp -r /usr/local/src/webvirtmgr/ /var/www/
[root@Qume-KVM ~]# chown -R nginx.nginx /var/www/webvirtmgr/

#Set up key-based authentication
#webvirtmgr and KVM run on the same host here, so we trust localhost. If KVM ran on another machine, the public key would have to be copied to that KVM host instead
[root@Qume-KVM ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub [email protected]
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.92.130 (192.168.92.130)' can't be established.
ECDSA key fingerprint is SHA256:41MUAgoOJ7cipkGboXt2n0BlrxuPxp2IVlgXn0ahNgg.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
[email protected]'s password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '[email protected]'"
and check to make sure that only the key(s) you wanted were added.

#Set up port forwarding
[root@Qume-KVM ~]# ssh 192.168.92.130 -L localhost:8000:localhost:8000 -L localhost:6080:localhost:6080
Last login: Fri Oct  7 17:19:01 2022 from 192.168.92.1
#Check the listening ports
[root@Qume-KVM ~]# ss -anlt
State       Recv-Q      Send-Q             Local Address:Port             Peer Address:Port      Process
LISTEN      0           128                    127.0.0.1:6080                  0.0.0.0:*
LISTEN      0           128                    127.0.0.1:8000                  0.0.0.0:*
LISTEN      0           128                      0.0.0.0:111                   0.0.0.0:*
LISTEN      0           32                 192.168.122.1:53                    0.0.0.0:*
LISTEN      0           128                      0.0.0.0:22                    0.0.0.0:*
LISTEN      0           128                        [::1]:6080                     [::]:*
LISTEN      0           128                        [::1]:8000                     [::]:*
LISTEN      0           128                         [::]:111                      [::]:*
LISTEN      0           128                         [::]:22                       [::]:*

#Configure nginx
#Back up the original nginx config first so a bad edit can be reverted
[root@Qume-KVM ~]# cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak
[root@Qume-KVM ~]# vim /etc/nginx/nginx.conf
#Inside the server block:
#delete the line: listen       [::]:80;
#change the server_name line to: server_name  localhost;
#delete the line: root         /usr/share/nginx/html;
    server {
        listen       80 ;
        server_name  localhost;

#Below the include /etc/nginx/default.d/*.conf; line, change the location block to:
        location / {
                root    html;
                index   index.html index.htm;
        }

#Configure the nginx virtual host
[root@Qume-KVM ~]# vim /etc/nginx/conf.d/webvirtmgr.conf
server {
    listen 80 default_server;

    server_name $hostname;
    #access_log /var/log/nginx/webvirtmgr_access_log;

    location /static/ {
        root /var/www/webvirtmgr/webvirtmgr;
        expires max;
    }

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-for $proxy_add_x_forwarded_for;
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_connect_timeout 600;
        proxy_read_timeout 600;
        proxy_send_timeout 600;
        client_max_body_size 1024M;
    }
}

#Make sure gunicorn binds to port 8000 on localhost
[root@Qume-KVM ~]# grep "bind"  /var/www/webvirtmgr/conf/gunicorn.conf.py
# bind - The socket to bind.
bind = '127.0.0.1:8000'

#Restart nginx and verify that the default port 80 is now listening
[root@Qume-KVM ~]# systemctl restart nginx.service
[root@Qume-KVM ~]# ss -anlt
State       Recv-Q      Send-Q             Local Address:Port             Peer Address:Port      Process
LISTEN      0           128                    127.0.0.1:6080                  0.0.0.0:*
LISTEN      0           128                    127.0.0.1:8000                  0.0.0.0:*
LISTEN      0           128                      0.0.0.0:111                   0.0.0.0:*
LISTEN      0           128                      0.0.0.0:80                    0.0.0.0:*
LISTEN      0           32                 192.168.122.1:53                    0.0.0.0:*
LISTEN      0           128                      0.0.0.0:22                    0.0.0.0:*
LISTEN      0           128                        [::1]:6080                     [::]:*
LISTEN      0           128                        [::1]:8000                     [::]:*
LISTEN      0           128                         [::]:111                      [::]:*
LISTEN      0           128                         [::]:22                       [::]:*

#Configure supervisor: append the following at the end of the file
[root@Qume-KVM ~]# vim /etc/supervisord.conf
[program:webvirtmgr]
command=/usr/bin/python2 /var/www/webvirtmgr/manage.py run_gunicorn -c /var/www/webvirtmgr/conf/gunicorn.conf.py
directory=/var/www/webvirtmgr
autostart=true
autorestart=true
stdout_logfile=/var/log/supervisor/webvirtmgr.log
redirect_stderr=true
user=nginx

[program:webvirtmgr-console]
command=/usr/bin/python2 /var/www/webvirtmgr/console/webvirtmgr-console
directory=/var/www/webvirtmgr
autostart=true
autorestart=true
stdout_logfile=/var/log/supervisor/webvirtmgr-console.log
redirect_stderr=true
user=nginx

#Start supervisord and enable it at boot
[root@Qume-KVM ~]# systemctl enable --now supervisord.service

#Set up the nginx user
[root@Qume-KVM ~]# su - nginx -s /bin/bash
[nginx@Qume-KVM ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/var/lib/nginx/.ssh/id_rsa):
Created directory '/var/lib/nginx/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /var/lib/nginx/.ssh/id_rsa.
Your public key has been saved in /var/lib/nginx/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:QGXNmuONujvXUn8UcXyVkXvpuZuPpvNl5Ndf8/RRxW8 nginx@Qume-KVM
The key's randomart image is:
+---[RSA 3072]----+
|      ..oo     o*|
|     . .  o   .++|
|      .  o     o*|
|       .+     .o+|
|       .S+    ..E|
|        o o   .*o|
|       . o . . oX|
|      o o . o o*X|
|      o= .  .*ooB|
+----[SHA256]-----+
[nginx@Qume-KVM ~]$ echo -e "StrictHostKeyChecking=no\nUserKnownHostsFile=/dev/null" > ~/.ssh/config
[nginx@Qume-KVM ~]$ cat ~/.ssh/config
StrictHostKeyChecking=no
UserKnownHostsFile=/dev/null
[nginx@Qume-KVM ~]$ chmod 600 .ssh/config
[nginx@Qume-KVM ~]$ ssh-copy-id [email protected]
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/var/lib/nginx/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
Warning: Permanently added '192.168.92.130' (ECDSA) to the list of known hosts.
[email protected]'s password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '[email protected]'"
and check to make sure that only the key(s) you wanted were added.

#Verify that key-based authentication works
[nginx@Qume-KVM ~]$ ssh [email protected]
Warning: Permanently added '192.168.92.130' (ECDSA) to the list of known hosts.
Last login: Fri Oct  7 18:01:24 2022 from 192.168.92.130
[root@Qume-KVM ~]# exit
logout
Connection to 192.168.92.130 closed.
[nginx@Qume-KVM ~]$ exit
logout
[root@Qume-KVM ~]#

[root@Qume-KVM ~]# vim /etc/polkit-1/localauthority/50-local.d/50-libvirt-remote-access.pkla
[Remote libvirt SSH access]
Identity=unix-user:root
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes
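With the key and the polkit rule in place, the connection path webvirtmgr uses can be checked by hand; a sketch (the IP is this lab's host, and the command is skipped when virsh is absent):

```shell
# Try a libvirt connection over SSH, the same transport webvirtmgr uses.
if command -v virsh >/dev/null 2>&1; then
    timeout 10 virsh -c qemu+ssh://[email protected]/system list --all || true
    check=attempted
else
    check=skipped
fi
echo "qemu+ssh connectivity check: $check"
```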

#Restart the services so the configuration takes effect
[root@Qume-KVM ~]# systemctl restart nginx.service
[root@Qume-KVM ~]# systemctl enable nginx.service
[root@Qume-KVM ~]# systemctl restart libvirtd

After completing the steps above, open the address in a browser.

If you run into the following problem, a reboot fixes it; these are the errors the terminal reports when browser access fails:

channel 1012: open failed: connect failed: Device or resource busy
channel 1014: open failed: connect failed: Device or resource busy

Web Management UI

The login password is the one you set during the configuration above.

image-20221007184412785

Add a connection

image-20221007185207157

Create a storage pool

image-20221007185456412

image-20221007185509766

image-20221007185719502

image-20221007185809795

Upload the ISO image to the /kvmdata directory

[root@Qume-KVM ~]# ls /kvmdata/
CentOS-8.5.2111-x86_64-dvd1.iso

After the upload completes, refresh the web UI to check:

image-20221007190357142

Create a disk image

image-20221007190517883

image-20221007190625679

Create a network

Click [New Network] and pick the network type first; the configuration form differs depending on the type

image-20221007190943430

Instance Management

Create a custom instance

image-20221007191254688

image-20221007191414237

Set the console password

image-20221007191506871

Attach the ISO image

image-20221007191555103

Start the instance

image-20221007191727824

An error! Don't panic; there is a fix.

image-20221007191908639

[root@Qume-KVM ~]# dnf -y install novnc
[root@Qume-KVM ~]# chmod +x /etc/rc.d/rc.local
[root@Qume-KVM ~]# echo "nohup novnc_server 192.168.100.100:5920 &" >> /etc/rc.d/rc.local
[root@Qume-KVM ~]# . /etc/rc.d/rc.local

image-20221007192425910

From here on, installing CentOS works just as it does in VMware.

image-20221007193450099
