WARNING: xl now has better capability to manage domain configuration, avoid using this command when possible
In other words, when managing domains, avoid this command where possible and handle things through the configuration file instead.
xl full list of subcommands:
create                         Create a domain from config file <filename>
config-update                  Update a running domain's saved configuration, used when rebuilding the domain after reboot.
                               WARNING: xl now has better capability to manage domain
                               configuration, avoid using this command when possible
list                           List information about all/some domains
destroy                        Terminate a domain immediately
shutdown                       Issue a shutdown signal to a domain
reboot                         Issue a reboot signal to a domain
...(omitted)
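Details for any individual subcommand can be pulled up with xl help, for example (an illustrative invocation, not output from this post):

xl help create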
[root@VM-master ~]# mke2fs -t ext2 /images/xen/busybox.img
mke2fs 1.42.9 (28-Dec-2013)
/images/xen/busybox.img is not a block special device.
Proceed anyway? (y,n) y
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
131072 inodes, 524288 blocks
26214 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=536870912
16 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912

Allocating group tables: done
Writing inode tables: done
Writing superblocks and filesystem accounting information: done

[root@VM-master ~]# du -sh /images/xen/busybox.img
33M	/images/xen/busybox.img
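The creation of the image file itself is not shown above; a minimal sketch of how it could have been produced, assuming a 2 GiB sparse file (which matches the 524288 x 4 KiB blocks reported by mke2fs):

# Create a 2 GiB sparse file; only blocks actually written consume disk space,
# which is why du reports just 33M after the filesystem is created.
dd if=/dev/zero of=/images/xen/busybox.img bs=1M count=0 seek=2048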
[root@VM-master xen]# cat busybox
# =====================================================================
# Example PV Linux guest configuration
# =====================================================================
#
# This is a fairly minimal example of what is required for a
# Paravirtualised Linux guest. For a more complete guide see xl.cfg(5)

# Guest name
name = "busybox"

# 128-bit UUID for the domain as a hexadecimal number.
# Use "uuidgen" to generate one if required.
# The default behavior is to generate a new UUID each time the guest is started.
#uuid = "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"

# Kernel image to boot
kernel = "/boot/vmlinuz"

# Ramdisk (optional)
ramdisk = "/boot/initramfs"

# Kernel command line options
extra = "selinux=0 init=/bin/sh"

# Initial memory allocation (MB)
memory = 256

# Maximum memory (MB)
# If this is greater than `memory' then the slack will start ballooned
# (this assumes guest kernel support for ballooning)
maxmem = 256

# Number of VCPUS
vcpus = 2

# Network devices
# A list of 'vifspec' entries as described in
# docs/misc/xl-network-configuration.markdown
#vif = [ '' ]

# Disk Devices
# A list of `diskspec' entries as described in
# docs/misc/xl-disk-configuration.txt
disk = [ '/images/xen/busybox.img,raw,xvda,rw' ]

root = "/dev/xvda ro"
Then the domain can be created from this config file with a command like this:
xl -v create /etc/xen/busybox
Append -n at the end to do a dry run.
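For instance, a dry run that only parses and validates the configuration without actually starting the guest would look like this (a sketch combining the command above with the -n option):

xl -v create /etc/xen/busybox -n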
Dividing line for an update after a long gap: I couldn't stand fighting with Xen on CentOS 7 any longer. It was just too much trouble.

The problem was mainly with the QEMU frontend/backend emulation: the hypervisor's module services kept failing to load, and xen_scsi_processor always reported "No such device". That stuff really ought to be compiled into the kernel! The official workaround is absurd, so I gave up on it.

Off to CentOS 6 I went.

Finally! Our virtual machine is finally up and running on CentOS 6!
[root@localhost ~]# xl list
Name                                        ID   Mem VCPUs	State	Time(s)
Domain-0                                     0  1024     4     r-----    1359.9
[root@localhost ~]# xl create /etc/xen/busybox
Parsing config from /etc/xen/busybox
Attach to the guest's console and take a look:
[root@localhost ~]# xl console busybox
[    0.000000] Linux version 4.9.127-32.el6.x86_64 (mockbuild@c1bj.rdu2.centos.org) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-23) (GCC) ) #1 SMP Mon Sep 17 13:48:11 UTC 2018
[    0.000000] Command line: root=/dev/xvda ro selinux=0 init=/bin/sh
...(omitted)
[    0.000000] Booting paravirtualized kernel on Xen
[    0.000000] Xen version: 4.6.6-12.el6 (preserve-AD)
...(omitted)
[    2.781378] dracut: Mounted root filesystem /dev/xvda
[    2.911023] dracut: Switching root
/bin/sh: can't access tty; job control turned off
/ # ls
bin         dev         home        lost+found  sbin        usr
boot        etc         linuxrc     proc        sys         var
/ # uname -a
Linux (none) 4.9.127-32.el6.x86_64 #1 SMP Mon Sep 17 13:48:11 UTC 2018 x86_64 GNU/Linux
/ #

[root@xen_node network-scripts]# xl list
Name                                        ID   Mem VCPUs	State	Time(s)
Domain-0                                     0  1022     4     r-----     297.8
busybox                                      1   256     1     -b----       3.5
So, can the frontend and backend actually communicate? Let's check:
[root@xen_node network-scripts]# brctl show
bridge name	bridge id		STP enabled	interfaces
pan0		8000.000000000000	no
xenbr0		8000.000c29bd620e	no		eth0
							vif1.0
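The vif line was commented out in the earlier config, yet vif1.0 now shows up on xenbr0; the guest configuration presumably carries a vifspec along these lines (a sketch, since the exact entry is not shown in this post):

# Attach the guest's virtual NIC to the host bridge xenbr0
vif = [ 'bridge=xenbr0' ]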
[root@xen_node ~]# cp /boot/vmlinuz-2.6.32-754.9.1.el6.x86_64 /mnt/boot/vmlinuz
[root@xen_node ~]# cp /boot/initramfs-2.6.32-754.9.1.el6.x86_64.img /mnt/boot/initramfs.img
[root@xen_node ~]# grub-install --root-directory=/mnt/ /dev/loop0
Probing devices to guess BIOS drives. This may take a long time.
/dev/loop0 does not have any corresponding BIOS drive.
[root@xen_node ~]# ls /mnt/boot/
grub  initramfs.img  lost+found  vmlinuz
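These commands assume the busybox image has already been attached to /dev/loop0 and mounted at /mnt; a minimal sketch of that setup, assuming the image path used earlier:

# Attach the guest image to a loop device and mount it
losetup /dev/loop0 /images/xen/busybox.img
mount /dev/loop0 /mnt
mkdir -p /mnt/boot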
[root@xen_node ~]# virsh list
 Id    Name                           State
----------------------------------------------------
 0     Domain-0                       running

[root@xen_node ~]# virsh nodeinfo
CPU model:           x86_64
CPU(s):              4
CPU frequency:       2400 MHz
CPU socket(s):       1
Core(s) per socket:  2
Thread(s) per core:  1
NUMA cell(s):        1
Memory size:         2096632 KiB
If running this command gives you the following error:
[root@xen_node ~]# virsh list
error: failed to connect to the hypervisor
error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory
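The missing socket usually just means the libvirtd daemon is not running; on CentOS 6 it can be started with something like the following (a suggested fix, not part of the original transcript):

# Start libvirtd now and enable it on boot
service libvirtd start
chkconfig libvirtd on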