Virtualization technology emerged alongside computer technology itself and has always played an essential role in its development. From the introduction of the concept of virtualization in the 1950s to IBM's commercialization of virtualization on mainframes in the 1960s, and from the virtual memory of operating systems to the Java virtual machine and on to server virtualization based on the x86 architecture, the vigorous development of virtualization has given this seemingly abstract concept an extremely rich set of meanings. In recent years, with the popularization of server virtualization technology, new data center deployment and management methods have emerged, bringing an efficient and convenient management experience to data center administrators. This technology can also improve the resource utilization of the data center and reduce energy consumption. All of this has made virtualization technology a focus of the entire information industry.

This chapter explains virtualization technology, focusing on server virtualization, the most important form in use today. It analyzes the basic knowledge, supporting technologies, and main functions of server virtualization, and discusses its application in FusionCompute and the desktop cloud.

3.1 Introduction to Virtualization Technology

3.1.1 Definition of Virtualization

Virtualization is a broad and evolving concept, so it is not easy to give it a clear and accurate definition. Currently, the industry has offered several definitions of virtualization, including the following.

  • Virtualization is an abstract method of representing computer resources. Through virtualization, the abstracted resources can be accessed in the same way as the resources before abstraction. This abstraction is not limited by the implementation, geographic location, or physical configuration of the underlying resources. (From Wikipedia.)

  • Virtualization is a virtual (rather than real) version created of something, such as an operating system, computer system, storage device, or network resource. (From the WhatIs information technology terminology database.)

Although the above definitions are not identical, careful analysis shows that they all convey three meanings:

  • The objects of virtualization are various resources.

  • The virtualized logical resources hide unnecessary details from users.

  • Users can realize, in the virtual environment, part or all of the functions they have in the real environment.

Virtualization is a logical representation of resources, not constrained by physical limitations. In this definition, "resources" covers a wide range, as shown in Fig. 3.1. Resources can be various hardware resources, such as CPUs, memory, storage, and networks, or various software environments, such as operating systems, file systems, and applications. With this definition, we can better understand the memory virtualization in the operating system mentioned in Sect. 2.1.2. Memory is a real resource, and virtualized memory is a substitute for this resource; the two have the same logical representation. The virtualization layer hides from the upper layer the details of unified addressing and of swapping pages between memory and the hard disk. Software that uses virtual memory can still allocate, access, and release it with the same instructions, just as if it were accessing real physical memory. Figure 3.1 shows that many kinds of resources can be virtualized.

Fig. 3.1 Various virtualization
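The virtual memory analogy above can be made concrete with Python's `mmap` module: a file on disk is mapped into the process's address space and then read and written with ordinary memory operations, while the operating system transparently moves data between RAM and disk. This is a minimal sketch of the idea, not an implementation of virtual memory itself.

```python
import mmap
import os
import tempfile

# Create a small file to act as the "backing store" on disk.
fd, path = tempfile.mkstemp()
os.write(fd, b"\x00" * 4096)

# Map the file into the process's address space. Reads and writes now
# use ordinary memory operations; the OS moves pages between RAM and
# the file transparently, much as it does for swap space.
with mmap.mmap(fd, 4096) as mem:
    mem[0:5] = b"hello"          # looks like a memory write
    assert mem[0:5] == b"hello"  # looks like a memory read

# The data ended up in the file on disk without an explicit write().
with open(path, "rb") as f:
    assert f.read(5) == b"hello"

os.close(fd)
os.remove(path)
```

The program never issues an explicit disk I/O call; the virtualization layer (here, the OS paging machinery) hides those details, exactly as the definition describes.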

The main goal of virtualization is to simplify the representation, access, and management of IT resources, including infrastructure, systems, and software, and to provide standard interfaces for these resources to receive input and deliver output. The users of virtualization can be end users, programs, or services. Through standard interfaces, virtualization minimizes the impact on users when the IT infrastructure changes: because the way they interact with virtual resources has not changed, end users can keep using the original interfaces and are unaffected even if the implementation of the underlying resources changes.

Virtualization technology reduces the coupling between the resource user and the specific implementation of the resource, so that the user no longer depends on a particular implementation. Thanks to this loose coupling, system administrators can maintain and upgrade IT resources with less impact on users.

3.1.2 Development of Virtualization Technology

In today's rapidly developing information technology landscape, enterprises and individual users favor virtualization mainly because it helps solve problems of resource allocation and business management. First, a key function of a virtual machine is to exploit the idle capacity of high-performance computers, increasing server utilization without purchasing new hardware; it also enables rapid delivery and rapid recovery of customer system applications. This is the public's most basic and intuitive understanding of virtual machines. Second, virtualization technology is gradually playing a vital role in enterprise management and business operations. It enables rapid deployment and migration of servers and data centers and provides transparent behavior management.

The important position of virtualization technology has made its development a focus of industry attention. At the technical level, virtualization faces four major trends: platform openness, connection protocol standardization, client hardware support, and public cloud privatization. Platform openness means opening up the closed architecture of the basic platform so that, through virtualization management, virtual machines from multiple vendors can coexist on an open platform and different vendors can build rich applications on it. Connection protocol standardization aims to resolve the terminal compatibility problems caused by the multiple connection protocols in use today (VMware's PCoIP, Citrix's ICA, etc.) in public desktop clouds, thereby solving broad compatibility issues between terminals and cloud platforms and optimizing the structure of the industry chain. Client hardware support addresses the lack of hardware support for desktop virtualization and for the client multimedia experience: terminal chip technology is gradually improving, and virtualization is being implemented on mobile terminals. The trend of public cloud privatization uses technology similar to a VPN to turn the enterprise's IT architecture into a "private cloud" superimposed on the public cloud, ensuring the security of enterprise data in the private cloud without sacrificing the convenience of the public cloud.

3.1.3 Advantages of Virtualization Technology

Virtualization technology abstracts and transforms the various physical resources of a computer (CPU, memory, disk space, network adapters, etc.), dividing and recombining them into one or more virtual computer environments. It allows users to run multiple operating systems on one server simultaneously, with programs running in mutually independent spaces without affecting one another, thereby significantly improving the efficiency of the computer.

The virtualization layer simulates a set of independent hardware devices for each virtual machine, including hardware resources such as CPU, memory, motherboard, graphics card, and network card, and installs a guest operating system on it. The end-user’s program runs in the guest operating system.

Virtual machines support the dynamic sharing of physical resources and resource pools and improve resource utilization, especially for workloads whose average demand is far lower than the dedicated resources that would otherwise have to be provisioned for them. This mode of operation has the following advantages.

  1.

    Reduce the number of terminal devices

    Reducing the number of terminal devices reduces maintenance and management costs. Virtualization can effectively reduce the number of managed physical resources, such as servers and workstations, curb the growth of such equipment, and hide part of the complexity of physical resources. It simplifies common management tasks through automation, better information, and central management; automates load management; supports the use of common tools across multiple platforms; and improves staff efficiency.

    Integrating multiple systems onto one host through virtualization can still guarantee one (virtual) server per system. Thus, without affecting business use, the number of hardware devices can be effectively reduced and power consumption lowered. At the same time, it reduces the rack space required and avoids machine-room renovations driven by growth in the number of devices.

  2.

    Higher security

    Virtualization technology can achieve isolation and partitioning that simpler sharing mechanisms cannot. These features enable controllable, secure access to data and services. By partitioning the host into virtual machines, one program can be prevented from affecting other programs' performance or crashing the system; even an unstable program or system can run safely in isolation. If a comprehensive virtualization strategy is implemented, system administrators can put fault-tolerance plans in place to ensure business continuity in the event of an accident. Converting operating systems and process instances into data files helps automate and streamline backup and replication, provides more robust business continuity, and speeds up recovery after failures or natural disasters. Further development of virtual cluster technology can keep business running without interruption and provide multi-machine hot backup.

  3.

    Higher availability

    By virtualizing the entire computing infrastructure and then using specialized software to centrally manage the system and virtual hosts, physical resources can be managed without affecting users. This reduces the management burden of resources and processes and thereby the complexity of the network management system's hardware architecture. Through centralized, policy-based management, end-to-end virtualization can be applied to both virtual and physical resources, allowing maintenance personnel to handle enterprise-wide installation, configuration, and change management from a central location and significantly reducing the resources and time required to manage system hardware.

3.1.4 Common Types of Virtualization Technology

In virtualization technology, the virtual entities are various IT resources. According to the classification of these resources, we can sort out different types of virtualization. Here are some common types of virtualization technology.

  1.

    Infrastructure virtualization

    Since networks, storage, and file systems are all critical infrastructure supporting the operation of a data center, network virtualization and storage virtualization are classified as infrastructure virtualization.

    Network virtualization refers to virtualization technology that integrates network hardware and software resources to provide users with virtual network connections. It can take two forms: local area network virtualization and wide area network virtualization. In local area network virtualization, multiple local networks are combined into one logical network, or one local network is divided into multiple logical networks; this improves the efficiency of large enterprises' internal networks or the internal networks of data centers. The typical representative of this technology is the virtual local area network (Virtual LAN, VLAN). For wide area network virtualization, the most common application is the Virtual Private Network (VPN). A VPN abstracts the network connection, allowing remote users to access the company's internal network anytime, anywhere, without perceiving any difference between physical and virtual connections. At the same time, a VPN ensures the security and privacy of this external network connection.
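The VLAN idea of splitting one physical LAN into several logical networks can be sketched as a toy model (purely illustrative; real switches tag Ethernet frames per IEEE 802.1Q): a switch forwards a broadcast frame only to ports assigned to the same VLAN ID as the ingress port.

```python
# Toy model of VLAN segmentation: ports in different VLANs never see
# each other's traffic, even though they share one physical switch.

class VlanSwitch:
    def __init__(self):
        self.port_vlan = {}  # port name -> VLAN ID

    def assign(self, port, vlan_id):
        self.port_vlan[port] = vlan_id

    def forward(self, ingress_port):
        """Return the ports that would receive a broadcast frame."""
        vlan = self.port_vlan[ingress_port]
        return sorted(p for p, v in self.port_vlan.items()
                      if v == vlan and p != ingress_port)

switch = VlanSwitch()
switch.assign("p1", 10)
switch.assign("p2", 10)
switch.assign("p3", 20)

print(switch.forward("p1"))  # ['p2'] -- only p2 shares VLAN 10
```

Port p3, although on the same physical device, is in VLAN 20 and receives nothing, which is exactly the isolation the text describes.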

    Storage virtualization refers to providing an abstract logical view of physical storage devices; users access the integrated storage resources through the unified logical interface of this view. Storage virtualization takes two main forms: storage-device-based virtualization and network-based virtualization. Disk array technology (Redundant Array of Inexpensive Disks, RAID) is a typical example of storage-device-based virtualization: it combines multiple physical disks into a disk array, using inexpensive disk devices to provide a unified, high-performance, fault-tolerant storage space. Network Attached Storage (NAS) and the Storage Area Network (SAN) are typical representatives of network-based storage virtualization.
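The fault tolerance mentioned for RAID rests on a simple idea, shown here as a sketch of RAID 5-style parity (illustrative only, not a real driver): the parity block is the bitwise XOR of the data blocks, so any single lost block can be rebuilt from the survivors.

```python
# RAID 5-style parity: parity = XOR of all data blocks, so
# lost_block = XOR of (surviving data blocks + parity).

def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"disk0data", b"disk1data", b"disk2data"]
parity = xor_blocks(data)  # stored on the parity disk

# Simulate losing disk 1 and rebuilding it from parity + survivors.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
print("rebuilt:", rebuilt)
```

Because XOR is its own inverse, striping parity across inexpensive disks yields a unified storage space that survives a single disk failure.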

  2.

    Software virtualization

    In addition to virtualization technology for infrastructure and systems, there is also virtualization technology for software. For example, the programs and programming languages used by users have corresponding virtualization concepts. Currently, the recognized types of software virtualization mainly include application virtualization and high-level language virtualization.

    Application virtualization decouples the application from the operating system and provides the application with a virtual operating environment. This environment includes the application's executable file and the runtime environment it needs. When a user needs to run a program, the application virtualization server pushes the user's program components to the client's application virtualization environment in real time. When the user finishes working and closes the application, the changes and data are uploaded to a centrally managed server. In this way, users are no longer limited to a single client and can use their applications on different terminals.

    High-level language virtualization solves the problem of migrating executable programs between computers with different architectures. In high-level language virtualization, programs written in a high-level language are compiled into standard intermediate instructions, which are then executed in an interpreted or dynamically translated environment and can thus run on different architectures. For example, the widely used Java virtual machine technology removes the coupling between the lower-level system platform (including hardware and operating system) and the upper-level executable code to achieve cross-platform execution. The user's Java source program is compiled by the compiler provided by the JDK into platform-neutral bytecode, which serves as the input of the Java virtual machine. The Java virtual machine converts the bytecode into binary machine code executable on the specific platform, achieving the effect of "compile once, run anywhere."
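Python itself follows the same "compile to intermediate instructions" model described above for the JVM, so it can demonstrate the idea directly: source code is compiled to platform-neutral bytecode, which the interpreter's virtual machine then executes.

```python
# High-level language virtualization in miniature: Python source is
# compiled to bytecode (intermediate instructions), not to machine
# code for any real CPU; the Python VM executes it on any platform.
import dis

code = compile("x = 1 + 2", "<demo>", "exec")

print(type(code).__name__)              # code
print(isinstance(code.co_code, bytes))  # True -- raw bytecode

# dis shows the intermediate instructions the Python VM will run.
dis.dis(code)
```

Running `dis.dis` on any machine shows the same architecture-neutral instruction stream; only the VM underneath differs per platform, which is precisely the decoupling the text describes.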

3.2 Basic Knowledge of Server Virtualization

3.2.1 System Virtualization

System virtualization is the most widely recognized and accepted kind of virtualization technology. It separates the operating system from the physical machine, so that one or more virtual operating systems can be installed and run on one physical machine at the same time, as shown in Fig. 3.2. From the perspective of applications inside the operating system, there is no significant difference between a virtual operating system and an operating system installed directly on the physical machine.

Fig. 3.2 System virtualization

The core idea of system virtualization is to use virtualization software to create one or more virtual machines on a physical machine. A virtual machine is a logical computer system with complete hardware functions, created with system virtualization technology and running in an isolated environment; it includes a guest operating system and its applications. In system virtualization, multiple operating systems can run simultaneously on the same physical machine without affecting one another, reusing the physical machine's resources. There are various system virtualization technologies, such as those applied to IBM z-series mainframes, to IBM p-series servers based on the Power architecture, and to x86-architecture personal computers. Across these different types, the design and implementation of the virtual machine operating environment differ. However, the operating environment always needs to provide the virtual machines running on it with a virtual hardware environment, including virtual processors, memory, devices and I/O, and network interfaces, as shown in Fig. 3.3. At the same time, it provides many features for these operating systems, such as hardware sharing, system management, and system isolation.

Fig. 3.3 System virtualization

The greater value of system virtualization lies in server virtualization. At present, data centers use large numbers of x86 servers, and a large data center often hosts tens of thousands of them. For safety, reliability, and performance reasons, each of these servers usually runs only one application service, leading to low server utilization. Since servers usually have strong hardware capabilities, virtualizing multiple virtual servers on the same physical server, each running a different service, increases server utilization, reduces the number of machines, cuts operating costs, and saves physical space and electrical energy, achieving both economic and environmental benefits.
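The consolidation benefit can be seen with back-of-the-envelope arithmetic (the numbers here are hypothetical, chosen only for illustration): if each physical server runs one service at low average utilization, packing the same services into virtual machines on fewer hosts raises utilization sharply.

```python
# Hypothetical consolidation estimate: how many virtualization hosts
# are needed to absorb the load of many underutilized servers?
import math

servers = 12                # physical servers today, one service each
avg_utilization = 0.08      # 8% average CPU use per server
target_utilization = 0.60   # headroom kept on each consolidated host

total_load = servers * avg_utilization           # ~0.96 "servers" of work
hosts_needed = math.ceil(total_load / target_utilization)

print(hosts_needed)  # 2 hosts instead of 12
```

Twelve machines' worth of power, cooling, and rack space shrink to two, which is the economic and environmental argument made above.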

In addition to system virtualization with virtual machines on personal computers and servers, desktop virtualization can also run multiple different systems in the same terminal environment. Desktop virtualization removes the coupling between a personal computer's desktop environment (including applications and files) and the physical machine: the virtualized desktop environment is stored on a remote server rather than on the personal computer's local hard disk. This means that when users work in their desktop environment, all applications and data run on, and are ultimately saved to, that remote server. Users can access and use their desktop environment from any compatible device with sufficient display capability, such as a personal computer or smartphone.

3.2.2 Server Virtualization

Server virtualization applies system virtualization technology to servers, virtualizing one server into several. As shown in Fig. 3.4, before server virtualization was adopted, three applications ran on three independent physical servers; afterward, the three applications run on three separate virtual servers, all of which can be hosted on the same physical server. Simply put, server virtualization makes it possible to run multiple virtual servers on a single physical server. It provides each virtual server with the hardware resource abstraction that supports its operation, including a virtual BIOS, virtual processors, virtual memory, and virtual device I/O, and it provides sound isolation and security for the virtual machines.

Fig. 3.4 Server virtualization

Server virtualization technology was first used in mainframes manufactured by IBM. It was brought to the x86 platform by VMware in the 1990s and was quickly accepted by the industry after 2000, becoming a popular technology. Seeing the huge advantages of server virtualization, major IT vendors have increased their investment in related technologies. Microsoft's server operating system Windows Server 2008 includes the server virtualization software Hyper-V as an optional component, and Microsoft promised that Windows Server 2008 would support the other existing mainstream virtualization platforms. At the end of 2007, Cisco announced a strategic investment in VMware through a purchase of shares. Many mainstream Linux distributions, such as Novell's SUSE Linux Enterprise and Red Hat's Red Hat Enterprise Linux, have added the Xen or KVM virtualization software and encourage users to install and use it. Virtualization technology is a key direction in the technology and strategic business planning of many mainstream technology companies, including Huawei, Cisco, Google, IBM, and Microsoft.

3.2.3 Typical Implementation

Server virtualization provides the abstraction of hardware devices and the management of virtual servers through virtualization software. The industry usually uses two special terms when describing such software. They are as follows:

  • Virtual Machine Monitor (VMM): responsible for providing hardware resource abstraction for virtual machines and providing a running environment for guest operating systems.

  • Virtualization platform (Hypervisor): responsible for hosting and managing virtual machines. It runs directly on the hardware, so its implementation is directly constrained by the underlying architecture.

These two terms are usually not strictly distinguished, and "hypervisor" can also be translated as "virtual machine monitor." In server virtualization, the virtualization software needs to implement functions such as hardware abstraction; resource allocation, scheduling, and management; and isolation between virtual machines and the host operating system and among multiple virtual machines. The virtualization layer provided by this software sits above the hardware platform and below the guest operating systems. According to how the virtualization layer is implemented, server virtualization has two main implementation methods, as shown in Fig. 3.5. Table 3.1 compares these two implementations.

Fig. 3.5 Implementation of server virtualization

Table 3.1 Comparison of implementation methods of server virtualization
  • Hosted (residence) virtualization. The VMM is an application program running on the host operating system and uses the host operating system's functions to implement the abstraction of hardware resources and the management of virtual machines. Virtualization in this way is easier to implement, but because the virtual machines' resource operations must be completed through the host operating system, its performance is usually lower. Typical implementations of this approach are VMware Workstation and Microsoft Virtual PC.

  • Bare-metal virtualization. In bare-metal virtualization, what runs directly on the hardware is not a host operating system but the virtualization platform, and the virtual machines run on that platform. The virtualization platform provides instruction sets and device interfaces to support the virtual machines. This method usually delivers higher performance but is more difficult to implement. Typical implementations of this approach are XenServer and Microsoft Hyper-V.

3.2.4 Full Virtualization

From the perspective of the guest operating system, a fully virtualized platform is the same as a real platform, and the guest operating system can run without any modification. This means the guest operating system operates the virtual processor, virtual memory, and virtual I/O devices just as it would normal ones. From an implementation point of view, the VMM must correctly handle all possible guest behaviors. The guest's behavior is expressed through instructions, so the VMM must correctly process all possible instructions. For full virtualization, "all possible instructions" means all instructions defined in the virtual processor's manual specification. In terms of implementation, taking the x86 architecture as an example, full virtualization has gone through two stages: software-assisted full virtualization and hardware-assisted full virtualization.

  1.

    Software-assisted full virtualization

    In the early days of x86 virtualization, the x86 architecture did not support virtualization at the hardware level, so full virtualization could only be achieved through software. A typical approach is the combination of privilege compression (ring compression) and binary code translation (binary translation).

    The principle of privilege compression is as follows. The VMM and the guest run at different privilege levels; on the x86 architecture, the VMM usually runs at Ring0, the guest operating system kernel at Ring1, and guest applications at Ring3. When the guest operating system kernel executes a privileged instruction, because it is at the non-privileged Ring1 level, an exception is usually triggered, and the VMM intercepts the privileged instruction and virtualizes it. Privilege compression correctly handles most privileged instructions. However, because the x86 instruction set was not designed with virtualization in mind, some instructions still cannot be handled this way: when they perform privileged operations at Ring1, no exception is triggered, so the VMM cannot intercept them and process them accordingly.

    Binary code translation was therefore introduced to handle these virtualization-unfriendly instructions. Its principle is also very simple: by scanning and modifying the guest's binary code, instructions that are difficult to virtualize are converted into instructions that support virtualization. The VMM usually scans the operating system's binary code and, when it finds an instruction that needs processing, translates it into an instruction block (cache block) that supports virtualization. These instruction blocks can cooperate with the VMM to access restricted virtual resources, or explicitly trigger exceptions for the VMM to process further. In addition, because this technique can modify the guest's binary code, it is also widely used for performance optimization, replacing instructions that cause performance bottlenecks with more efficient ones.
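The scan-and-rewrite idea can be sketched as a toy translator (highly simplified and purely illustrative: real translators work on raw machine code, not mnemonic strings). Instructions that silently fail at Ring1 instead of trapping, such as x86's POPF, are rewritten into explicit traps the VMM can handle.

```python
# Toy sketch of binary translation: rewrite virtualization-unfriendly
# instructions in a guest code block into explicit VMM traps.

# Examples of x86 sensitive instructions that do not trap at Ring1.
UNFRIENDLY = {"POPF", "SGDT", "SIDT"}

def translate(block):
    out = []
    for insn in block:
        if insn.split()[0] in UNFRIENDLY:
            out.append(f"VMM_TRAP {insn}")  # route to the VMM instead
        else:
            out.append(insn)                # safe to run natively
    return out

guest_block = ["MOV eax, 1", "POPF", "ADD eax, 2", "SGDT [mem]"]
print(translate(guest_block))
```

Ordinary instructions pass through untouched, so most of the guest still runs at native speed; only the handful of problem instructions pay the cost of a trip to the VMM.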

    Although privilege compression and binary code translation can achieve full virtualization, this patchwork approach makes architectural integrity hard to guarantee. Therefore, x86 vendors added support for virtualization to the hardware, realizing virtualization within the hardware architecture itself.

  2.

    Hardware-assisted full virtualization

    If a problem is difficult to solve at one level, it often becomes easier after adding a level below it. Hardware-assisted full virtualization is one such approach. The operating system is the lowest layer of system software, sitting directly on the hardware; if the hardware itself provides sufficient virtualization functions, it can intercept the operating system's execution of sensitive instructions and its accesses to sensitive resources and report them to the VMM as exceptions, which solves the virtualization problem. Intel's VT-x technology is representative of this approach. VT-x introduces a new processor execution mode for running virtual machines. When a virtual machine executes in this special mode, it still sees a complete set of processor registers and a complete execution environment, but any privileged operation is intercepted by the processor and reported to the VMM. The VMM itself runs in normal mode; after receiving a report from the processor, it decodes the target instruction, finds the corresponding virtualization module to simulate it, and reflects the final effect in the special-mode environment.

    Hardware-assisted full virtualization is a complete virtualization method, because instructions themselves also carry accesses to memory and peripherals; interception at the processor instruction level means that the VMM can simulate a virtual host identical to a real one. In such an environment, any operating system that can run on an equivalent real host can run seamlessly in the virtual machine.
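The trap-and-emulate cycle described above can be sketched as a toy dispatch loop (purely illustrative; real VM exits are hardware events, not Python exceptions): ordinary guest instructions run directly, while a privileged operation "exits" to the VMM, which decodes and emulates it.

```python
# Toy trap-and-emulate sketch in the spirit of hardware-assisted
# virtualization: privileged operations exit to the VMM for emulation.

class VmExit(Exception):
    """Raised when the guest touches a privileged resource (a "VM exit")."""
    def __init__(self, op, arg):
        self.op, self.arg = op, arg

def guest_program(vcpu):
    vcpu["eax"] = 5                   # ordinary instruction: runs directly
    raise VmExit("OUT", vcpu["eax"])  # privileged I/O: exits to the VMM

def vmm_run(guest):
    vcpu, io_log = {"eax": 0}, []
    try:
        guest(vcpu)
    except VmExit as e:        # the VMM intercepts the exit...
        if e.op == "OUT":      # ...decodes the operation...
            io_log.append(e.arg)  # ...and emulates its effect
    return vcpu, io_log

state, io = vmm_run(guest_program)
print(state, io)  # {'eax': 5} [5]
```

The guest code never checks whether it is virtualized; only the privileged operation is redirected, which is why an unmodified operating system can run in this model.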

3.2.5 Paravirtualization

Paravirtualization is also called quasi-virtualization. It enables the VMM to virtualize physical resources by modifying instructions at the source code level, thereby avoiding virtualization holes. As mentioned above, x86 has some instructions that are difficult to virtualize. Full virtualization uses binary code translation to avoid these holes at the binary code level; paravirtualization takes another approach, modifying the operating system kernel's code (i.e., at the API level) so that the kernel completely avoids these hard-to-virtualize instructions.

The operating system usually uses all of the processor's features, such as privilege levels, address spaces, and control registers. The first problem paravirtualization must solve is how to insert the VMM. The typical approach is to modify the operating system's processor-related code so that the operating system voluntarily surrenders its privilege level and runs at the next lower level. In this way, when the operating system tries to execute a privileged instruction, a protection exception is triggered, providing an interception point for the VMM to perform its simulation. Since the kernel code is being modified anyway, paravirtualization can go further and optimize I/O. That is, instead of simulating real-world devices, where simulating too many registers would reduce performance, paravirtualization can define highly optimized I/O protocols. Such a protocol is entirely transaction-based and can approach the speed of a physical machine.
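The contrast with trap-and-emulate can be sketched as a toy hypercall (purely illustrative; the names are invented for the example): instead of issuing a privileged instruction and being trapped, the modified guest kernel calls the hypervisor directly through one well-defined entry point.

```python
# Toy paravirtualization sketch: the guest kernel knows it is
# virtualized and requests service via an explicit hypercall,
# avoiding the emulation of many device registers.

class Hypervisor:
    def __init__(self):
        self.io_log = []

    def hypercall(self, op, arg):
        # One batched, transaction-style entry point instead of many
        # trapped device-register accesses.
        if op == "write_block":
            self.io_log.append(arg)
            return "ok"

def paravirt_guest(hv):
    # The modified guest kernel asks the hypervisor for service directly.
    return hv.hypercall("write_block", b"data")

hv = Hypervisor()
print(paravirt_guest(hv))  # ok
```

One hypercall replaces what full virtualization would handle as a series of trapped register accesses, which is where the I/O performance advantage described above comes from.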

3.2.6 Mainstream Server Virtualization Technology

Mainstream virtualization technologies are generally divided into two camps: open source and closed source. Open source virtualization technologies include KVM and Xen; closed source technologies include Microsoft's Hyper-V, VMware's vSphere, and Huawei's FusionSphere.

Open source virtualization technologies are free and can be used at any time. Their source code is public, and users can customize special functions according to their needs. However, open source virtualization places high technical demands on users: if the system has problems, they must rely on their own skills and experience to repair it. With closed source virtualization, users cannot see the source code or perform personalized customization. Closed source products generally charge fees and provide users with "out-of-the-box" services; during use, if there is a problem with the system, the manufacturer provides full support.

For users there is no absolute better or worse between open source and closed source, only the question of which is more suitable.

Among open source virtualization technologies, KVM and Xen share the field: KVM provides full virtualization, while Xen supports both paravirtualization and full virtualization. KVM is a module in the Linux kernel that virtualizes the CPU and memory; each virtual machine runs as a Linux process, and the other I/O devices (network cards, disks, etc.) are emulated with the help of QEMU. Xen differs from KVM in that it runs directly on the hardware, with virtual machines running on top of it. Virtual machines in Xen are divided into two categories: Domain0 and DomainU. Domain0 is a privileged virtual machine that can directly access hardware resources and manages the other, ordinary virtual machines (DomainU). Domain0 must be started before any other virtual machine. A DomainU is an ordinary virtual machine and cannot directly access hardware resources: all of its operations are forwarded to Domain0 through front-end/back-end drivers, and Domain0 performs the actual operations and returns the results to the DomainU.

3.3 Supporting Technology of Server Virtualization

3.3.1 CPU Virtualization

The CPU virtualization technology abstracts the physical CPU into a virtual CPU, and a physical CPU thread can only run the instructions of one virtual CPU at any time. Each guest operating system can use one or more virtual CPUs. Between these guest operating systems, the virtual CPUs are isolated from each other and do not affect each other.

Operating systems based on the x86 architecture are designed to run directly on the physical machine, assuming from the outset that they completely own the underlying hardware, especially the CPU. In the x86 architecture, the processor has four privilege levels: Ring0, Ring1, Ring2, and Ring3. Ring0 has the highest authority and can execute any instruction without restriction; the privilege decreases sequentially from Ring0 to Ring3. Applications generally run at Ring3. The kernel-mode code of the operating system runs at Ring0 because it needs to directly control and modify the state of the CPU, and operations like these require privileged instructions that can only run at Ring0.

To realize virtualization on the x86 architecture, a virtualization layer needs to be added below the guest operating system layer to share physical resources. This virtualization layer runs at Ring0, so the guest operating system can only run at a less privileged level above Ring0, as shown in Fig. 3.6.

Fig. 3.6

Software CPU virtualization under the x86 architecture

However, the privileged instructions in the guest operating system, such as interrupt handling and memory management instructions, will have different semantics and produce different effects if they are not run at Ring0, or they may not work at all. Because of these instructions, the x86 architecture is not easy to virtualize. The crux of the problem is that these sensitive instructions executed in the virtual machine must not act directly on the real hardware; they need to be taken over and simulated by the virtual machine monitor.

Full virtualization uses dynamic binary translation technology to solve the guest operating system’s privileged instruction problem. In dynamic binary translation, while the virtual machine is running, a trapping instruction is inserted before each sensitive instruction so that execution traps into the virtual machine monitor. The virtual machine monitor dynamically converts these instructions into a sequence of instructions that performs the same function before executing them. In this way, full virtualization converts sensitive instructions executed in the kernel state of the guest operating system into instruction sequences with the same effect executed through the virtual machine monitor, while non-sensitive instructions run directly on the physical processor.
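As an illustrative sketch only (a real VMM rewrites machine code in basic blocks, not strings), the translate-then-execute idea can be modeled in a few lines of Python; the instruction names and the `vmm_emulate` handler are hypothetical:

```python
# Toy model of dynamic binary translation: sensitive "instructions" are
# rewritten to trap into a VMM handler; everything else runs natively.
SENSITIVE = {"cli", "sti", "mov_cr3"}   # hypothetical sensitive instructions

def translate_block(block):
    """Rewrite a basic block before execution: mark each instruction as
    either directly executable or requiring VMM emulation."""
    return [("vmm_emulate", ins) if ins in SENSITIVE else ("direct", ins)
            for ins in block]

def run_block(block, log):
    for kind, ins in translate_block(block):
        if kind == "vmm_emulate":
            log.append(f"VMM emulated {ins}")   # handled by the monitor
        else:
            log.append(f"CPU executed {ins}")   # runs on the physical CPU

log = []
run_block(["add", "cli", "load", "mov_cr3"], log)
```

The point of the sketch is only the split: non-sensitive instructions pass through unchanged, while sensitive ones are intercepted before they can touch real hardware.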

Different from full virtualization, paravirtualization solves the problem of virtual machines executing privileged instructions by modifying the guest operating system. In paravirtualization, the guest operating system hosted on the virtualization platform must be modified so that all sensitive instructions are replaced with hypercalls to the underlying virtualization platform, and the virtualization platform provides a calling interface for these sensitive privileged operations.

Whether full virtualization or paravirtualization, both are pure software CPU virtualization and require no changes to the x86 processor itself. However, pure software virtualization solutions have many limitations. Whether it is the dynamic binary translation of full virtualization or the hypercall technology of paravirtualization, these intermediate steps inevitably increase the complexity and performance overhead of the system. In addition, in paravirtualization, support for guest operating systems is limited by the capabilities of the virtualization platform.

As a result, hardware-assisted virtualization came into being. This technology is a hardware solution: a CPU that supports virtualization adds a new instruction set and processor operating modes to support CPU virtualization. Intel and AMD have introduced the hardware-assisted virtualization technologies Intel VT and AMD-V, respectively, and have gradually integrated them into newly launched microprocessor products. Taking Intel VT technology as an example, processors that support hardware-assisted virtualization add a set of virtual machine extensions (VMX), roughly ten instructions that support virtualization-related operations. In addition, Intel VT defines two operating modes for the processor: root mode and non-root mode. The virtualization platform runs in root mode, and the guest operating system runs in non-root mode. Since hardware-assisted virtualization allows the guest operating system to run directly on the processor, there is no need for binary translation or hypercalls, which reduces the related performance overhead and simplifies the design of the virtualization platform.

3.3.2 Memory Virtualization

Memory virtualization technology manages the real physical memory of a physical machine in a unified manner and packages it into multiple virtual physical memories for use by several virtual machines, so that each virtual machine has its own independent memory space. Since memory is the device most frequently accessed by virtual machines in server virtualization, memory virtualization is as important as CPU virtualization.

In memory virtualization, the virtual machine monitor must manage the memory of the physical machine, dividing the machine memory according to each virtual machine’s memory requirements while keeping the memory accesses of the virtual machines isolated from one another. Essentially, a physical machine’s memory is a contiguous address space, and upper-level applications access it mostly at random. Therefore, the virtual machine monitor needs to maintain the mapping between memory address blocks on the physical machine and the contiguous memory seen inside each virtual machine, ensuring that the virtual machine’s memory appears contiguous. Modern operating systems use segmentation, paging, segmented paging, multi-level page tables, caches, virtual memory, and other complex techniques for memory management. The virtual machine monitor must support these techniques so that they remain valid in a virtual machine environment while maintaining high performance.

Before discussing memory virtualization, let’s review classic memory management techniques. Memory as a storage device is indispensable for running applications, because all applications must submit code and data to the CPU through memory for processing and execution. If too many applications run on a computer, they will exhaust the system’s memory, which becomes a bottleneck for computer performance. People used to solve this problem by adding memory and optimizing programs, but this is costly. Therefore, virtual memory technology was born. To support virtual memory, all CPUs based on the x86 architecture are now equipped with a Memory Management Unit (MMU) and a Translation Lookaside Buffer (TLB) to optimize virtual memory performance. In short, classic memory management maintains the mapping between the virtual memory seen by an application and physical memory.

To run multiple virtual machines on a physical server, the virtual machine monitor must have a mechanism for managing virtual machine memory, that is, a virtual machine memory management unit. Because a new memory management layer is added, virtual machine memory management differs from classic memory management. The “physical” memory seen by the guest operating system is no longer the real physical memory, but “pseudo” physical memory managed by the virtual machine monitor. Corresponding to this “physical” memory is a newly introduced concept: machine memory, the real memory on the physical server hardware. In memory virtualization, there are therefore three types of memory: process logical memory, virtual machine “physical” memory, and server machine memory, as shown in Fig. 3.7. The address spaces of these three types of memory are called logical addresses, “physical” addresses, and machine addresses.
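The three-level structure can be illustrated with a toy Python model (the page numbers and the 4 KB page size are assumed for illustration): a logical address is first translated by the guest’s page table into a “physical” address, and then by the monitor’s table into a machine address:

```python
# Toy two-level address translation at page granularity.
guest_pt = {0: 5, 1: 2}   # guest OS: logical page -> "physical" page
vmm_pt   = {5: 9, 2: 7}   # VMM: "physical" page -> machine page (real RAM)
PAGE = 4096               # assumed page size in bytes

def to_machine(logical_addr):
    """Translate a logical address to a machine address in two steps."""
    page, offset = divmod(logical_addr, PAGE)
    pseudo_phys = guest_pt[page]    # first translation, done by the guest OS
    machine = vmm_pt[pseudo_phys]   # second translation, done by the VMM
    return machine * PAGE + offset

to_machine(100)   # logical page 0 -> "physical" page 5 -> machine page 9
```

Performing two lookups on every access would be slow, which is why real implementations collapse the two tables, as described next.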

Fig. 3.7

Memory virtualization

In memory virtualization, the mapping relationship between process logic memory and server machine memory is taken care of by the virtual machine memory management unit. There are two main methods for the realization of the virtual machine memory management unit.

The first is the shadow page table method, as shown in Fig. 3.8a. The guest operating system maintains its own page table, in which the memory addresses are the “physical” addresses seen by the guest operating system. At the same time, the virtual machine monitor maintains a corresponding page table for each virtual machine that records the real machine addresses. The page table in the virtual machine monitor is built from the page table maintained by the guest operating system and is updated in step with it, just like its “shadow”, so it is called a “shadow page table”. VMware Workstation, VMware ESX Server, and KVM all use the shadow page table method.
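A minimal sketch of how a shadow page table could be derived, assuming both tables are simple page-number mappings with hypothetical values: the shadow is the composition of the guest table and the monitor’s table, so one lookup maps a logical page straight to a machine page.

```python
def build_shadow(guest_pt, vmm_pt):
    """Compose guest and VMM page tables into a single shadow table that
    maps logical pages directly to machine pages."""
    return {logical: vmm_pt[pseudo] for logical, pseudo in guest_pt.items()}

# Same toy tables as before: guest maps logical->pseudo-physical,
# the VMM maps pseudo-physical->machine.
shadow = build_shadow({0: 5, 1: 2}, {5: 9, 2: 7})
# shadow == {0: 9, 1: 7}
```

In a real monitor the shadow must be re-derived whenever the guest edits its page table, which is exactly the synchronization work that makes this method expensive.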

Fig. 3.8

Two main methods for implementing the memory management unit of a virtual machine

The second is the page table writing method, as shown in Fig. 3.8b. When the guest operating system creates a new page table, it must register the page table with the virtual machine monitor. The virtual machine monitor then deprives the guest operating system of write permission on the page table and writes the machine addresses it maintains into the page table. When the guest operating system accesses memory, it can thus obtain the real machine address from its own page table. Each modification of the page table by the guest operating system traps into the virtual machine monitor, which updates the page table to ensure that its entries always record real machine addresses. The page table writing method requires modifying the guest operating system; Xen is a typical representative of this method.

3.3.3 Device and I/O Virtualization

In addition to the CPU and memory, the other vital components of a server that need to be virtualized are its devices and I/O. Device and I/O virtualization technology manages the real devices of the physical machine in a unified manner, packages them into multiple virtual devices for use by several virtual machines, and responds to each virtual machine’s device access requests and I/O requests.

At present, mainstream equipment and I/O virtualization are all realized through software. As a platform between shared hardware and virtual machines, the virtualization platform provides convenience for device and I/O management and provides rich virtual device functions for virtual machines.

Take VMware’s virtualization platform as an example. The virtualization platform virtualizes the devices of physical machines, standardizes them into a series of virtual devices, and provides this set of virtual devices to the virtual machines, as shown in Fig. 3.9. It is worth noting that the virtualized devices may not exactly match the model, configuration, and parameters of the physical devices. However, these virtual devices effectively simulate the behavior of the physical devices, translate the virtual machine’s device operations onto the physical devices, and return the physical devices’ results to the virtual machine. Another benefit of this unified and standardized approach to virtual devices is that virtual machines do not depend on the implementation of the underlying physical devices, because a virtual machine only ever sees the standard devices provided by the virtualization platform. In this way, as long as the virtualization platform stays consistent, virtual machines can be migrated across different physical platforms.

Fig. 3.9

Device and I/O virtualization

In server virtualization, the network interface is a special device that plays an important role. Virtual servers provide services to the outside world through the network. In server virtualization, each virtual machine becomes an independent logical server, and communication between them is carried out through network interfaces. Each virtual machine is assigned a virtual network interface, which appears as a virtual network card from inside the virtual machine. Server virtualization requires modifying the network interface driver of the host operating system so that the physical machine’s network interface can be virtualized into a switch in software, as shown in Fig. 3.10. The virtual switch works at the data link layer; it forwards packets arriving from the physical machine’s external network to the virtual machines’ network interfaces and maintains the connections between multiple virtual machine network interfaces. When a virtual machine communicates with other virtual machines on the same physical machine, its packets are sent out through its virtual network interface, and the virtual switch forwards each packet it receives to the virtual network interface of the target virtual machine. This forwarding does not occupy physical bandwidth, because the virtualization platform manages the network in software.
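The learning-and-forwarding behavior of a virtual switch can be sketched as a toy Python class (the port and MAC names are made up; a real virtual switch also handles frame formats, VLANs, and physical uplinks):

```python
class VirtualSwitch:
    """Toy layer-2 switch: learns which port each source MAC sits behind
    and forwards frames to the learned port, flooding unknown destinations."""

    def __init__(self, ports):
        self.ports = set(ports)   # virtual NICs attached to this switch
        self.mac_table = {}       # learned MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        """Return the set of ports the frame is forwarded to."""
        self.mac_table[src_mac] = in_port          # learn the source location
        if dst_mac in self.mac_table:
            return {self.mac_table[dst_mac]}       # known destination
        return self.ports - {in_port}              # unknown: flood the rest

sw = VirtualSwitch({"vm1", "vm2", "vm3"})
sw.receive("vm1", "aa", "bb")   # "bb" unknown yet, so the frame is flooded
sw.receive("vm2", "bb", "aa")   # now both MACs are learned
```

After the learning phase, frames between vm1 and vm2 go only to the single correct port, mirroring how inter-VM traffic never touches the physical network.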

Fig. 3.10

Network interface virtualization

3.3.4 Storage Virtualization

With the continuous development of information services, network storage systems have become the core platform of enterprises. Large amounts of high-value data have accumulated, and the applications surrounding these data place ever-higher requirements on the platform, not only in storage capacity but also in data access performance, data transmission performance, data management capabilities, storage expansion capabilities, and many other aspects.

RAID technology is the embryonic form of storage virtualization. It combines multiple physical disks into an array to provide a unified storage space for the upper layer. The operating system and upper-level users do not know how many disks the server contains; they see only one large “virtual” disk, that is, a logical storage unit. NAS and SAN appeared after RAID. NAS decouples file storage from the local computer system and centralizes it in NAS storage units connected to the network, such as NAS file servers. Heterogeneous devices on the network can use standard network file access protocols, such as NFS under the UNIX operating system and the Server Message Block (SMB) protocol under the Windows operating system, to access and update the files on it, subject to the files’ permission restrictions. A SAN also separates storage from the local system and concentrates it on the network for shared use, but unlike NAS, a SAN is generally composed of disk arrays connected by Fibre Channel, with servers and clients using the SCSI protocol for high-speed data communication. To SAN users, these storage resources appear the same as devices directly attached to the local system. Sharing in a SAN is at the disk block level, while sharing in NAS is at the file level.

At present, storage virtualization is no longer limited to RAID, NAS, and SAN and has been given broader meaning. Storage virtualization allows logical storage units to be integrated across a wide area network and moved from one disk array to another without downtime. In addition, storage virtualization can allocate storage resources based on users’ actual usage, a practice known as thin provisioning. For example, if the operating system’s disk manager allocates 300GB of space to a user but the user currently uses only 2GB and this remains stable for a while, the space actually allocated may be only 10GB, less than the nominal capacity promised to the user. When the user’s actual usage grows, new storage space is allocated as appropriate, which improves resource utilization.
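The on-demand allocation described above can be sketched as a toy thin-provisioned disk in Python (the block counts are illustrative):

```python
class ThinDisk:
    """Toy thin-provisioned disk: the user is promised a nominal capacity,
    but real blocks are allocated only when first written."""

    def __init__(self, nominal_blocks):
        self.nominal = nominal_blocks   # capacity promised to the user
        self.blocks = {}                # block number -> data, allocated lazily

    def write(self, block_no, data):
        if not 0 <= block_no < self.nominal:
            raise IndexError("write beyond nominal capacity")
        self.blocks[block_no] = data    # allocation happens here, on demand

    def allocated(self):
        return len(self.blocks)         # real space actually consumed

disk = ThinDisk(nominal_blocks=300)     # user "sees" 300 blocks
disk.write(0, b"data")
disk.write(5, b"more")
disk.allocated()   # only 2 blocks actually consumed
```

The gap between `nominal` and `allocated()` is exactly the over-commitment that lets the array promise more capacity than it physically holds.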

3.3.5 Network Virtualization

Network virtualization usually includes virtual local area networks (VLANs) and virtual private networks (VPNs). A virtual local area network can divide one physical local area network into multiple virtual local area networks, or even combine nodes from multiple physical local area networks into one virtual local area network. Communication within a virtual local area network works the same way as in a physical local area network and is transparent to users. A virtual private network abstracts network connections, allowing remote users to access an organization’s internal network as if they were physically connected to it. Virtual private networks help administrators protect the network environment, prevent threats from unrelated network segments on the Internet or intranet, and enable users to access applications and data quickly and securely. At present, virtual private networks are used in a large number of office environments and have become an important supporting technology for mobile office.

Recently, various vendors have added new content to network virtualization technology. For network equipment providers, network virtualization is the virtualization of network equipment, that is, traditional routers, switches and other equipment are enhanced to support a large number of scalable applications. The same network equipment can run multiple virtual network equipment, such as firewalls, VoIP, and mobile services.

The specific content of network virtualization will be introduced in detail in Chap. 4.

3.3.6 Desktop Virtualization

Before introducing desktop virtualization in detail, we must first clarify the difference between server virtualization and desktop virtualization.

Server virtualization is the division of a physical server into multiple smaller virtual servers, so that numerous virtual servers run on one physical machine. The most common server virtualization method is to use virtual machines, each of which looks like an independent computer. IT departments usually use server virtualization to support various tasks, such as databases, file sharing, graphics virtualization, and media delivery. By consolidating servers onto less hardware, server virtualization reduces business costs and increases efficiency. This kind of consolidation is less central to desktop virtualization, whose scope is wider.

Desktop virtualization replaces the physical computer with a virtual computer environment that is delivered to the client. The virtual computer is stored on a remote server and can be delivered to the user’s device, operating in the same way as a physical machine. One server can deliver multiple personalized virtual desktop images. There are many ways to implement desktop virtualization, including terminal server virtualization, operating system streaming, virtual desktop infrastructure (VDI), and desktop as a service (DaaS).

Server workloads are easier to predict than virtual desktops, because servers perform much the same tasks every day. Users need to choose dedicated software and tools for server virtualization, or use the same management tools for both server virtualization and desktop virtualization.

Desktop virtualization usually requires a server to host the virtual images, and the situation is sometimes more complicated. End users expect a good desktop experience, but virtual desktop users’ behavior cannot be predicted accurately. This means desktop virtualization must support the actual desktop applications plus all the infrastructure the virtual desktops require. On a regular working day, a machine hosting virtual desktops may carry a heavier workload than other virtual servers.

Desktop virtualization decouples the user’s desktop environment from the terminal device it uses. What is stored on the server is the complete desktop environment of each user. Users can use different terminal devices with sufficient processing and display functions, such as personal computers or smartphones, to access the desktop environment through the network, as shown in Fig. 3.11. The most significant benefit of desktop virtualization is using software to configure personal computers and other client devices from a centralized location. The system maintenance department can manage numerous enterprise clients in the data center instead of on the desktop of each user, which reduces on-site support work and strengthens the control of application software and patch management.

Fig. 3.11

Desktop virtualization

Whether it is desktop virtualization or server virtualization, security is an issue that cannot be ignored. In the internal information security of enterprises, the most dangerous element is the desktop device. Many companies have even introduced desktop terminal security management software to prevent the hidden dangers of the terminal from affecting the safe operation of other devices in the LAN and the theft of important background data. Through desktop virtualization, all data authentication can achieve consistent policy and unified management, which effectively improves the information security level of the enterprise. Furthermore, through the implementation of desktop virtualization, users can transfer the original terminal data resources and even the operating system to the server in the back-end data center. In contrast, the front-end terminal is transformed into a lightweight display-oriented and computing-assisted client.

Desktop virtualization can help companies further simplify the lightweight client architecture. Compared with the existing traditional distributed personal computer desktop system deployment, the lightweight client architecture deployment service using desktop virtualization can reduce the purchase cost of hardware and software for the enterprise and further reduce the enterprise’s internal management cost and risk. With the rapid upgrading of hardware, the increase and distribution of software, and the decentralization of working environments, the work of managing and maintaining terminal equipment has become more and more difficult. Desktop virtualization can reduce the cost of electricity, management, personal computer purchase, operation, and maintenance for enterprises.

Another advantage of desktop virtualization is that, because the user’s desktop environment is saved as a virtual machine, it can be protected simply by taking snapshots and backups of that virtual machine. When the desktop environment is attacked or a major operating error occurs, the user can restore a saved backup, which significantly reduces the maintenance burden on users and system administrators.

3.4 Main Functions of Virtual Machine

3.4.1 Virtual Machine Snapshot

In daily life, we record beautiful moments by taking photos. In virtualization, the “snapshot” of a virtual machine is very similar to taking a picture: it records the state of the virtual machine at a certain moment. Through photos we can retrieve memories; through snapshots we can restore a virtual machine to its state at a certain moment. Virtual machine snapshots are generally taken before destructive operations such as upgrading, patching, and testing; once the virtual machine fails, the snapshot can be used to restore it quickly. The snapshot function is implemented by the storage system. The Storage Networking Industry Association (SNIA) defines a snapshot as a fully usable copy of a specified data set that includes an image of the data at some point in time (the point at which the copy began). A snapshot can be a duplicate of the data it represents or a replica of that data. Figure 3.12 shows a snapshot.

Fig. 3.12

Snapshot

The snapshot has the following characteristics.

Snapshots can be generated quickly and used as a data source for traditional backup and archiving, reducing or even eliminating the data backup window. Snapshots are stored on disk and can be accessed quickly, which speeds up data recovery. Disk-based snapshots give storage devices flexible and frequent recovery points. By using snapshots from different points in time, accidentally erased or damaged data can be restored quickly and conveniently, and online data recovery can be performed.

In terms of technical details, a snapshot establishes a pointer list indicating the addresses from which data is read; when the data changes, the pointer list allows a point-in-time view of the data to be provided and copied in a very short time. There are two common snapshot modes: Copy-On-Write (COW) snapshots and Redirect-On-Write (ROW) snapshots. Both COW and ROW come from the storage field, and most vendors use ROW when creating virtual machine snapshots. Whether COW or ROW, no real physical copy takes place when the snapshot is created; only the mappings are changed.
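A minimal copy-on-write sketch in Python may help: the snapshot starts as pure pointers to the live data, and a block’s original content is copied only just before it is first overwritten (the block contents here are toy strings):

```python
class CowVolume:
    """Toy copy-on-write snapshot: the snapshot shares unmodified blocks
    with the live volume; a block is copied only before its first overwrite."""

    def __init__(self, blocks):
        self.live = dict(blocks)    # live volume: block number -> data
        self.snap = {}              # originals preserved by copy-on-write

    def take_snapshot(self):
        self.snap = {}              # empty at first: everything still shared

    def write(self, block_no, data):
        if block_no not in self.snap:            # first write since snapshot:
            self.snap[block_no] = self.live[block_no]   # copy the old block
        self.live[block_no] = data               # then apply the new data

    def read_snapshot(self, block_no):
        """Read the point-in-time view: preserved copy if one exists,
        otherwise the still-shared live block."""
        return self.snap.get(block_no, self.live[block_no])

vol = CowVolume({0: "a", 1: "b"})
vol.take_snapshot()
vol.write(0, "A")
vol.read_snapshot(0)   # still "a": the original block was preserved
```

A ROW implementation would instead redirect the new write to a fresh location and leave the original block untouched; the mapping trick is the same, only the direction of the copy differs.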

3.4.2 Rapid Deployment and Cloning of Virtual Machines

When we buy a virtual machine on the public cloud, the back end quickly generates a virtual machine with an operating system for us, taking far less time than installing an operating system on a computer ourselves. This is made possible by the rapid deployment of virtual machines in virtualization.

The rapid deployment of virtual machines can be achieved in two ways: deployment by template and virtual machine cloning.

The essence of a template is also a virtual machine; it can be understood as a copy of a virtual machine that contains the virtual machine’s disks and configuration file. Using templates to create virtual machines greatly reduces the time spent configuring new virtual machines and installing operating systems. After a virtual machine template is created, users are not allowed to power it on or edit it at will. This design ensures that the template cannot be changed by careless editing and that it never occupies the computing resources of the cluster. A virtual machine deployed from a template and the template itself are independent of each other. To update or re-edit a template, you must first convert the template back into a virtual machine, edit it, and then make that virtual machine into a template again.

Virtual machine templates are very useful for deploying a large number of similar virtual machines, because they maintain consistency among the virtual machines while automatically modifying the parameters that require different values (such as host names and security identifiers). For example, if a group of testers needs to test the company’s newly developed software, the administrator can make the virtual machine with the software installed into a template, and then quickly deploy a batch of identically configured virtual machines to different testers for tests with different scenarios and requirements. If any problem occurs during testing, the administrator can delete the faulty virtual machine and redeploy the same virtual machine to the tester. In addition, different virtual machine templates can contain different software. For example, the template used by financial staff can be pre-installed with the financial system, and the template used by sales staff can be pre-installed with the ERP system; these templates can be used at any time to create virtual machines that meet the corresponding staff’s needs.
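Deploying from a template can be pictured as copying a base definition and then filling in the per-VM parameters; the dictionary fields below are purely illustrative:

```python
import copy

# Hypothetical template definition: shared configuration plus placeholders
# for the parameters that must differ per deployed VM.
template = {"os": "linux", "software": ["finance_app"], "hostname": None}

def deploy_from_template(tpl, hostname):
    """Create an independent VM definition from a template, customizing
    only the per-VM identity fields (here just the hostname)."""
    vm = copy.deepcopy(tpl)      # independent copy: editing it never
    vm["hostname"] = hostname    # touches the template itself
    return vm

vms = [deploy_from_template(template, f"tester{i}") for i in range(3)]
```

The `deepcopy` is the key point: each deployed machine is independent of the template, just as the text describes.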

In addition to deployment from a template, a virtual machine can also be quickly deployed from another virtual machine itself. This function is called virtual machine cloning (see Fig. 3.13). Unlike template deployment, a virtual machine clone is a complete copy of the source virtual machine at a certain point in time: all of the cloned virtual machine’s settings, including personalized data such as the hostname and IP address, are identical to those of the source virtual machine. As we all know, if two identical IP addresses appear in a LAN, the system will report an error, so it is best not to run the cloned virtual machine and its source at the same time.

Fig. 3.13

Virtual machine clone

3.4.3 Virtual Machine Backup

In the past few years, data protection technology under the virtual server environment has made significant progress. Virtual server backup has evolved from simply backing up virtual machines (just like backing up physical systems) to backing up operating system clients and even dedicated backup programs with all the advantages of virtualization technology.

Anyone who manages a virtual server environment expects the backup software for that environment to have a set of core functions. Not every product includes all of them, but certain mature technologies are needed for a good user experience throughout the backup process.

When evaluating suppliers and their products, it is important to ensure that these functions and features are included, and it is equally important to understand how these technologies are implemented when switching from one supplier to another. The following are common backup techniques used in virtual server environments.

  1. Changed block tracking

    Backup management software designed to back up virtual machines generally supports Changed Block Tracking (CBT) technology. With CBT there is no need to back up the entire virtual machine; only the changed blocks need to be transferred. When each backup task starts, only the blocks of the virtual machine that have changed since the last backup are backed up. CBT effectively reduces the total amount of data sent over the network to the backup server or backup target.

    Usually, when the virtual machine backup software starts a backup, it takes a snapshot on the backup device so that the state before the backup is preserved. When a CBT backup is performed, the previously backed-up virtual machine on the backup device is updated with the changed blocks. Once the CBT transfer completes, the most recent backup data is retained on the backup device, and the previous backup is kept as a point-in-time recovery in the form of a snapshot.

  2. Granular recovery

    In the past, CBT backup had a flaw: to restore a file from a backup instance, you first had to restore the entire instance. In this respect, CBT is similar to image-based backup. However, unlike image-based backup, current virtual server backup software can restore just part of an instance; for example, individual files and mail messages can be restored directly from the image.

    Generally, granular recovery is achieved either by copying data from the backup device and mounting the virtual machine directly, or by using a recovery wizard to enter the virtual machine and extract individual components. The recovery wizard is the preferred method because recovery can be completed in a few steps on a single interface.

3.4.4 Virtualization Cluster

Clustering is a way of combining a group of computers into a single whole that provides resources to users. A virtualized cluster can provide computing, storage, and network resources, and a cluster is complete only when all three are included.

  1. High availability

    The basic principle of high availability is to use cluster technology to overcome the limitations of a single physical host and ultimately keep the business uninterrupted, or at least reduce interruption time. High availability in virtualization only covers the computing level. Specifically, high availability at the virtualization level means high availability of the entire virtual machine system: when one computing node fails, another node in the cluster can quickly and automatically start the affected virtual machines and take over.

    Virtualized clusters generally use shared storage. Virtual machines are composed of configuration files and data disks. Data disks are stored on shared storage, and configuration files are stored on computing nodes. When a computing node fails, the virtualization management system (such as vCenter and VRM) will rebuild the failed virtual machine on other nodes according to the recorded virtual machine configuration information.
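As a rough illustration of this failover flow (all names and data structures here are invented for the sketch, not any real management system's API), a management system can keep each virtual machine's recorded configuration and placement, and rebuild the virtual machine on a surviving node when its host fails:

```python
# Minimal sketch of virtualization-level HA, assuming shared storage:
# the management system keeps each VM's configuration, and when a node
# fails it restarts the VM on a surviving node from that recorded config.

vm_configs = {"vm1": {"vcpus": 4, "memory_gb": 8, "disk": "shared:/vol/vm1"}}
placement = {"vm1": "nodeA"}
alive = {"nodeA": True, "nodeB": True}

def handle_node_failure(node):
    alive[node] = False
    for vm, host in placement.items():
        if host == node:
            # pick any surviving node and rebuild the VM there
            target = next(n for n, up in alive.items() if up)
            placement[vm] = target
            print(f"{vm} restarted on {target} "
                  f"with {vm_configs[vm]['vcpus']} vCPUs")

handle_node_failure("nodeA")
print(placement["vm1"])     # nodeB
```

Because the data disks live on shared storage, only the configuration needs to be re-applied on the new node, which is why restart can be fast and automatic.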

  2. Load balancing technology

    Load balancing is a cluster technology that shares specific services (network services, network traffic, etc.) to multiple network devices (including servers, firewalls, etc.) or multiple links, thereby improving business processing capabilities and ensuring business high reliability. Load balancing has the following advantages.

    • High performance: Load balancing technology distributes services to multiple devices more evenly, improving the performance of the entire system.

    • Scalability: Load balancing technology can easily increase the number of devices or links in the cluster and meet the growing business needs without reducing the quality of the business.

    • High reliability: The failure of a single or even multiple devices or links will not cause business interruption, which improves the reliability of the entire system.

    • Manageability: A large amount of management work is concentrated on the equipment applying load balancing technology, and the equipment group or link group only needs regular configuration and maintenance.

    • Transparency: For users, a cluster is equivalent to a device or link with high reliability and good performance, and users cannot perceive and do not need to care about the specific network structure. Increase and decrease of equipment or links will not affect normal business.

  3. High scalability

    In a traditional non-virtualized environment, all services are deployed on physical machines. In the early stage of system construction the business volume may not be very large, so the physical machines are configured with relatively modest hardware. As the business volume grows, the original hardware can no longer meet demand and must be upgraded repeatedly, for example, upgrading a single-socket CPU configuration to dual-socket, or upgrading 256GB of memory to 512GB. This expansion method is called Scale-Up. However, there is an upper limit to the hardware a physical machine can carry; if business volume keeps increasing, the server must eventually be replaced, and downtime for expansion is unavoidable.

    In virtualization, all resources are pooled, and the virtual machines that carry services draw all their resources from this pool. When business volume keeps increasing, we do not need to upgrade a single server's hardware; we only need to add resources to the pool, which in practice simply means adding servers. This expansion method is called Scale-Out. Because the cluster supports horizontal expansion, it is much easier to expand than a traditional non-virtualized environment.
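The difference between the two expansion methods can be sketched as follows; the socket limit and pool sizes are invented for illustration:

```python
# Illustrative comparison of Scale-Up vs Scale-Out capacity limits.

SOCKET_LIMIT = 4          # a single server tops out at 4 CPU sockets

def scale_up(server, extra_sockets):
    """Upgrade one server in place; fails past the hardware ceiling."""
    if server["sockets"] + extra_sockets > SOCKET_LIMIT:
        return False
    server["sockets"] += extra_sockets
    return True

def scale_out(pool, new_servers):
    """Grow the resource pool by adding whole servers; no hard ceiling."""
    pool.extend({"sockets": 2} for _ in range(new_servers))

server = {"sockets": 2}
print(scale_up(server, 2))    # True  -> now at the 4-socket limit
print(scale_up(server, 2))    # False -> the machine must be replaced

pool = [{"sockets": 2} for _ in range(3)]
scale_out(pool, 2)
print(len(pool))              # 5
```

Scale-Up hits a per-machine ceiling and then forces replacement (with downtime); Scale-Out just keeps appending capacity to the pool.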

3.4.5 Hot Addition Virtual Machine Resources

Hot additions here refer to adding compute, storage, and network resources to a virtual machine while it is powered on.

From the administrator’s point of view, the CPU, memory, and other properties of a virtual machine are all parameters in its configuration file, and the hardware configuration of the virtual machine can be changed by modifying the corresponding parameters in that file. As shown in Fig. 3.14, when CPU and memory usage reaches 75% while a user is working, the user-side experience can be very poor and normal usage may even be affected. At this point, the virtual machine resource hot-add function can be used to add CPU and memory resources to the virtual machine online, so that resource utilization on the user side quickly drops back to normal levels.

Fig. 3.14 Hot addition of virtual machine resources

In addition to CPU and memory, storage and network resources also support hot addition, for example, expanding the capacity of a virtual machine disk or adding a network card to a virtual machine. Besides the virtual machine itself supporting hot add, the virtual machine’s operating system must also support it for the hot-added resources to take effect immediately; otherwise, the virtual machine must be restarted, and the resources can only be used after the operating system recognizes the new hardware. In most cases, resources support only hot addition, not reduction. For example, an administrator can expand a virtual machine’s disk from 10GB to 20GB, but a reduction from 20GB to 10GB may not be possible. For storage, in addition to expanding existing disks, hot add also supports attaching new disks to the virtual machine.
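A minimal sketch of this grow-only behavior, assuming a hypothetical configuration dictionary rather than any real management interface:

```python
# Sketch of hot-adding resources by editing the VM's configuration while
# it is powered on. Sizes may only grow, mirroring the "addition but not
# reduction" rule described above. Names are illustrative.

vm = {"state": "running", "vcpus": 2, "memory_gb": 4, "disk_gb": 10}

def hot_set(vm, key, value):
    if vm["state"] != "running":
        raise RuntimeError("hot add applies to powered-on VMs")
    if value < vm[key]:
        raise ValueError(f"{key} supports hot addition only, not reduction")
    vm[key] = value

hot_set(vm, "disk_gb", 20)       # 10GB -> 20GB: allowed
try:
    hot_set(vm, "disk_gb", 10)   # 20GB -> 10GB: rejected
except ValueError as err:
    print(err)
print(vm["disk_gb"])             # 20
```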

3.4.6 NUMA

Non-Uniform Memory Access (NUMA) is an architecture that improves the speed of memory access. The computing speed of a single CPU has reached a bottleneck, so designers adopted multi-core, multi-socket CPUs to increase a computer’s processing power. CPUs and memory were traditionally connected via the Northbridge, and as the number of CPUs and the amount of memory grew, congestion on the Northbridge slowed responses more and more noticeably. As a result, designers bound memory evenly to each CPU, avoiding the congestion caused by sharing the Northbridge. Figure 3.15 compares multi-socket CPU arrangements.

Fig. 3.15 Multi-socket CPU arrangement comparison

After this modification, memory and CPU are bound together: a CPU reads data from its bound memory with a short response time (local access) and reads memory attached to another CPU with a longer response time (remote access). Since local access is fast, letting a program run on one CPU together with that CPU’s bound memory improves efficiency; this is the idea behind NUMA. NUMA treats a CPU and the memory bound to it as a NUMA node, and each node has its own CPUs, bus, and memory. Access across nodes must go through the interconnect between CPUs. Using NUMA technology lets a virtual machine use hardware resources on the same NUMA node, improving the virtual machine’s responsiveness.
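A toy model makes the local-versus-remote cost difference concrete; the cycle counts below are invented for illustration:

```python
# Toy model of NUMA access costs: each node pairs CPUs with local memory;
# touching memory on another node pays an interconnect penalty.

LOCAL_CYCLES, REMOTE_CYCLES = 100, 300

def access_cost(cpu_node, memory_node):
    return LOCAL_CYCLES if cpu_node == memory_node else REMOTE_CYCLES

# A VM pinned to node 0 touching 1000 pages of node-0 memory:
pinned = sum(access_cost(0, 0) for _ in range(1000))
# The same VM with half its pages left on node 1:
split = sum(access_cost(0, 0) for _ in range(500)) + \
        sum(access_cost(0, 1) for _ in range(500))

print(pinned, split)    # 100000 200000
```

Keeping a virtual machine's vCPUs and memory on one node doubles throughput in this toy model, which is why schedulers try to place VMs within a single NUMA node.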

3.5 KVM

3.5.1 Introduction to KVM

KVM is open-source virtual machine software released under the GPL license. KVM was first developed by Qumranet; it appeared on the Linux kernel mailing list in October 2006 and was merged into the Linux 2.6.20 kernel in February 2007 as part of the kernel. The architecture of KVM is shown in Fig. 3.16. KVM uses a hardware virtualization approach based on Intel VT or AMD-V technology, combined with QEMU to provide device virtualization. Architecturally, some argue that KVM follows the hosted model, because Linux was not originally designed with virtualization support and KVM exists as a kernel module. However, as more and more virtualization features are added to the Linux kernel, it is also argued that Linux is already a Hypervisor, which would make KVM a Hypervisor-model design. The promoters and maintainers of the KVM project likewise consider KVM a Hypervisor model.

Fig. 3.16 KVM architecture

KVM supports a variety of hardware platforms, including IA32, IA64, S390, and PowerPC. KVM can also be ported to other operating systems, and there are currently projects to port KVM to FreeBSD. Although KVM is still a relatively young project, it is developing rapidly with the participation of many Linux kernel developers. KVM is characterized by very tight integration with the Linux kernel, so it inherits most of Linux’s functionality. And, like Xen, KVM is highly portable open-source software.

3.5.2 KVM Virtualization Technology

KVM is a full, native virtualization solution for Linux on x86 hardware with virtualization extensions (Intel VT or AMD-V). Partial para-virtualization support is also available, mainly in the form of para-virtualized network drivers for Linux and Windows guest operating systems. KVM is currently designed to support a wide range of guest operating systems, such as Linux, BSD, Solaris, Windows, Haiku, ReactOS, and the AROS Research Operating System, through loadable kernel modules.

In the KVM architecture, a virtual machine is implemented as a regular Linux process, scheduled by the standard Linux scheduler. In fact, each virtual CPU appears as a regular Linux process. This enables KVM to enjoy all the features of the Linux kernel. It is important to note that KVM itself performs no emulation; it requires a user-space program to set up the address space of each virtual guest server through the /dev/kvm interface, provide it with emulated I/O, and map its video display back to the host’s display. At present, that program is the well-known QEMU.

The functional features of KVM are described below.

  1. Memory management

    KVM inherits powerful memory management capabilities from Linux. The memory of a virtual machine is stored like the memory of any other Linux process: it can be swapped out, backed by large pages for higher performance, or shared as disk files. NUMA support allows virtual machines to efficiently access large amounts of memory.

    KVM supports the latest hardware-based memory virtualization capabilities, including Intel’s Extended Page Tables (EPT) and AMD’s Nested Page Tables (NPT, also known as Rapid Virtualization Indexing, RVI), for lower CPU utilization and higher throughput.

    Memory page sharing is supported by a kernel feature called Kernel Same-page Merging (KSM). KSM scans each virtual machine’s memory, and if virtual machines have identical memory pages, KSM merges them into a single page shared between the virtual machines, storing only one copy. If a guest tries to change this shared page, it receives its own private copy.
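KSM's merge-and-copy-on-write behavior can be modeled with a simple content-addressed store; this is a simplified illustration, not the kernel implementation:

```python
# Simplified model of Kernel Same-page Merging (KSM): identical pages
# across VMs are stored once; a write to a shared page gives the writer
# a private copy (copy-on-write).

store = {}                       # page content -> single stored copy

def dedup(pages):
    """Map each page to a shared copy, storing duplicates only once."""
    return [store.setdefault(p, p) for p in pages]

def page(ch):
    """Build a 4KB page filled with one character (stands in for content)."""
    return ch * 4096

vm1 = dedup([page("0"), page("L")])
vm2 = dedup([page("0"), page("A")])

print(len(store))                # 3: the zero page is stored only once
print(vm1[0] is vm2[0])          # True: both VMs reference the same copy
vm2[0] = page("X")               # a write gives vm2 its own private page
print(vm1[0] is vm2[0])          # False
```

Four guest pages occupy only three stored copies, and the write breaks sharing for the writer alone, leaving the other VM's view intact.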

  2. Storage

    KVM can use any storage supported by Linux to store virtual machine images, including storage devices with IDE, SCSI, SATA interfaces (including mechanical and SSDs), NAS (including NFS and SAMBA/CIFS), or SAN that supports iSCSI and Fibre Channel. Multipath I/O can be used to improve storage throughput and provide redundancy. Because KVM is part of the Linux kernel, it can leverage a mature and reliable storage infrastructure supported by all leading storage vendors, with a well-documented storage stack for production deployments.

    KVM also supports virtual machine images on shared file systems such as Global File System 2 (GFS2), allowing virtual machine images to be shared among multiple hosts or shared using logical volumes. Disk images support on-demand allocation, allocating storage space only when the virtual machine needs it rather than allocating the entire space in advance, which improves storage utilization. KVM’s native disk format is QCOW2, which supports snapshots (including multiple levels of snapshots), compression, and encryption.

  3. Device driver

    KVM supports hybrid virtualization, in which para-virtualized drivers are installed in the guest operating system, allowing virtual machines to use an optimized I/O interface instead of emulated devices and providing high-performance I/O for network and block devices. KVM’s para-virtualized drivers use the Virtio standard, developed by IBM and Red Hat in the Linux community. Virtio is a hypervisor-independent device driver interface that allows the same set of device drivers to be used with multiple hypervisors, giving virtual machines better interoperability.

3.6 FusionCompute

3.6.1 Introduction to FusionCompute

FusionCompute is a core component of Huawei’s virtualization solutions. It deploys virtualization software on servers to virtualize computing, storage, and network hardware resources, and it centralizes the virtual resources, business resources, and user resources of multiple servers through Virtual Resource Management (VRM) software, forming clustered virtual resource sites and realizing a virtualization resource management system with flexible allocation, high reliability, and high efficiency. In addition, the FusionCompute Pro component package can be deployed to enable unified management of multiple resource sites in different geographies, as well as resource domain management for different users through virtual data centers (VDC).

FusionCompute, eBackup, UltraVR, and other components make up the Huawei FusionSphere virtualization suite.

FusionCompute’s location in the FusionSphere (version 8.0.0) virtualization suite is shown in Fig. 3.17.

  1. Other components in FusionSphere

    (a) Hardware infrastructure layer

      Hardware resources include server, storage, network, security, and other cloud computing basic physical equipment, which supports users from small- to large-scale construction or expansion, and can run various enterprise applications from entry level to enterprise level. There are many types of equipment, which can provide users with flexible deployment options.

    (b) FusionStorage Block

      FusionStorage Block is a distributed storage software that highly integrates storage and computing. After the software is deployed on a general x86 server, all servers’ local hard disks can be organized into a virtual storage resource pool to provide block storage functions.

    (c) eBackup

      eBackup is a virtualized backup software that cooperates with FusionCompute’s snapshot function and CBT backup function to implement FusionSphere virtual machine data backup.

    (d) UltraVR

      UltraVR is a disaster-tolerant business management software that uses the asynchronous remote replication feature provided by the underlying SAN storage system to provide Huawei FusionSphere with data protection and disaster-tolerant recovery of key virtual machine data.

  2. Technical characteristics of FusionCompute

    FusionCompute mainly has the following technical features.

    (a) Unified virtualization platform

      FusionCompute uses virtualization management software to divide computing resources into multiple virtual machine resources to provide users with high-performance, operable, and manageable virtual machines.

      • Support the allocation of virtual machine resources on demand.

      • Support multiple operating systems.

      • QoS guarantees resource allocation and isolates the influence between users.

    (b) Support for multiple hardware devices

      FusionCompute supports various servers based on x86 or ARM hardware platforms and is compatible with a variety of storage devices, allowing operators and enterprises to choose flexibly.

    (c) Large cluster

      A single cluster (version 8.0.0) can support up to 128 hosts and 8000 virtual machines.

    (d) Automated scheduling

      FusionCompute supports customized resource management SLA policies, failure judgment standards, and recovery policies.

      • Reduce maintenance costs through unified coordination of IT resource scheduling, thermal management, and energy consumption management.

      • Automatically detect the load of servers or services, intelligently schedule resources, balance loads of servers and service systems, and ensure a good user experience of the system and the service system’s best response.

    (e) Complete permissions management

      FusionCompute can provide complete permissions management functions based on different roles and permissions and authorize users to manage the system’s resources.

    (f) Rich operation and maintenance management

      FusionCompute provides a variety of operating tools to achieve controllable and manageable services and improve the operating efficiency of the entire system. It supports the following:

      • “Black box” rapid fault location: by obtaining the exception log and program stack, the system shortens problem location time and quickly resolves exceptions.

      • Automated health checks: through automated health checks, the system can detect faults promptly and give early warnings, ensuring that virtual machines remain operable and manageable.

      • Full Web interface: all hardware resources, virtual resources, user service provisioning, etc. can be monitored and managed through a Web browser.

    (g) Cloud security

      FusionCompute adopts various security measures and strategies and complies with information security laws and regulations to provide end-to-end business protection for user access, management and maintenance, data, network, virtualization, etc.

Fig. 3.17 Location of FusionCompute in the FusionSphere (version 8.0.0) virtualization suite

3.6.2 FusionCompute Computing Virtualization

This section will introduce FusionCompute’s computing virtualization from five aspects: server virtualization, virtual machine resource management, dynamic adjustment of virtual machine resources, distributed resource scheduling and power management, and virtual machine hot migration.

  1. Server virtualization

    Server virtualization abstracts a server’s physical resources into logical resources, turning one server into several or even hundreds of isolated virtual servers. The server is no longer limited by physical boundaries; instead, CPU, memory, disk, I/O, and other hardware become a dynamically managed resource pool, improving resource utilization and simplifying system management. In addition, hardware-assisted virtualization technology improves virtualization efficiency and increases the security of virtual machines.

    (a) Bare metal architecture

      The Hypervisor of FusionCompute uses a bare metal architecture, installing the virtualization software directly on the hardware to virtualize hardware resources. Thanks to this bare metal architecture, FusionCompute can provide users with virtual machines that approach native server performance and are highly reliable and scalable.

    (b) CPU virtualization

      FusionCompute virtualizes the CPU of a physical server into a virtual CPU (vCPU) for use when the virtual machine is running. When multiple vCPUs are running, FusionCompute will dynamically schedule the physical CPU capabilities among the vCPUs. In FusionCompute 8.0.0, each virtual machine supports a maximum of 255 vCPUs.

    (c) Memory virtualization

      FusionCompute supports memory hardware-assisted virtualization technology to reduce memory virtualization overhead and improve memory access performance by about 30%. At the same time, FusionCompute supports smart memory reuse strategies, automatically optimizes and combines various memory reuse strategies to achieve a high memory reuse rate. In FusionCompute 8.0.0, each virtual machine supports up to 6 TB of virtual memory.

      FusionCompute supports the following memory reuse technologies.

      • Memory bubble: The system actively reclaims the physical memory that the virtual machine does not use temporarily and allocates it to the virtual machine that needs to reuse memory. The reclamation and allocation of memory are dynamic, and applications on the virtual machine are unaware. The total amount of allocated memory used by all virtual machines on the entire physical server cannot exceed the server’s total amount of physical memory.

      • Memory swap: Virtualize external storage into memory for virtual machine use, storing the virtual machine’s temporarily unused data on external storage. When the system needs this data, it is swapped back in, exchanging places with data held in memory.

      • Memory sharing: Multiple virtual machines share memory pages with zero data content.
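A rough sketch of how these reuse techniques let allocated memory exceed physical memory while actual usage stays within it; the sizes and the balloon function below are illustrative only:

```python
# Rough sketch of memory overcommit: the host promises VMs more memory
# than physically exists, and a "bubble"/balloon mechanism reclaims idle
# guest memory when another VM needs it. Numbers are invented.

PHYSICAL_GB = 64
vms = {"vm1": {"allocated": 32, "in_use": 10},
       "vm2": {"allocated": 32, "in_use": 12},
       "vm3": {"allocated": 16, "in_use": 8}}

total_allocated = sum(v["allocated"] for v in vms.values())
total_in_use = sum(v["in_use"] for v in vms.values())

print(total_allocated > PHYSICAL_GB)   # True: 80GB promised on a 64GB host
print(total_in_use <= PHYSICAL_GB)     # True: memory actually in use fits

def balloon(vm, gigabytes):
    """Reclaim idle memory from a guest (inflate its balloon)."""
    idle = vms[vm]["allocated"] - vms[vm]["in_use"]
    reclaimed = min(gigabytes, idle)
    vms[vm]["allocated"] -= reclaimed
    return reclaimed

print(balloon("vm1", 16))              # 16GB reclaimed from vm1's idle memory
```

The reclaimed memory returns to the pool to serve other virtual machines, and the guest's applications are unaware because only idle memory is taken.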

    (d) GPU pass-through

      FusionCompute supports directly associating the Graphic Processing Unit (GPU) on a physical server to a specific virtual machine to improve the graphics and video processing capabilities of the virtual machine to meet customer demand for high-performance graphics processing capabilities such as graphics and video.

    (e) iNIC network card

      FusionCompute supports virtualizing the iNIC network card on a physical server and associating it with multiple virtual machines to meet users’ high requirements for network bandwidth. Virtual machines associated with iNIC network cards support only manual migration, and only to hosts that use iNIC network cards in the same cluster.

    (f) USB device pass-through

      FusionCompute supports directly associating USB devices on physical servers to specific virtual machines to meet users’ demand for USB devices in virtualization scenarios.

  2. Virtual machine resource management

    Customers can create virtual machines through custom methods or based on templates and manage cluster resources. This includes dynamic resource scheduling (including load balancing and dynamic energy saving), virtual machine management (including creating, deleting, starting, shutting down, restarting, hibernation, waking up virtual machines, etc.), storage resource management (including common disks and shared disks), and virtual machine security management (including custom VLANs, etc.). In addition, the QoS of virtual machines (including CPU QoS and memory QoS) can be flexibly adjusted according to the business load.

    (a) Virtual machine life cycle management

      The virtual machine supports multiple operation modes, and users can flexibly adjust the state of the virtual machine according to the business load. The virtual machine operation method is as follows.

      • Create/delete/start/close/restart/query virtual machine

        FusionCompute accepts a virtual machine creation request from the business management system and selects appropriate physical resources to create the virtual machine based on the specifications (vCPU, memory size, disk size), image requirements, network requirements, etc. defined in the request. After creation completes, the running status and attributes of the virtual machine can be queried. While using virtual machines, users can shut down, restart, or even delete them. This function provides users with basic virtual machine operation and management capabilities and makes virtual machines convenient to use.

      • Sleep/wake up the virtual machine

        When the business is running at a low load, only part of the virtual machines can be reserved to meet business needs, and other idle virtual machines can be hibernated to reduce the energy consumption of the physical server. When a high-load business operation is required, the virtual machine is then awakened to meet the high-load business’s normal operation requirements. This function meets the business system’s flexibility requirements for resource requirements and can improve the resource utilization of the system.

    (b) Virtual machine template

      By using the virtual machine template function, the user can define a normalized template for the virtual machine and use the template to complete the creation of the virtual machine.

    (c) CPU QoS

      The CPU QoS of a virtual machine is used to guarantee the allocation of its computing resources, to isolate virtual machines so that the computing loads of different services do not interfere with each other, to meet the differing computing performance requirements of different services, and to reuse resources as far as possible to reduce costs. When creating a virtual machine, you can specify a CPU QoS based on the CPU performance requirements of the service the virtual machine is expected to run; different CPU QoS settings represent different computing capabilities. For a virtual machine with a specified CPU QoS, the system’s guarantee is mainly reflected in a minimum guarantee of computing power and in resource allocation priority.

    (d) Memory QoS

      FusionCompute provides intelligent memory multiplexing for virtual machines, based on a memory reservation ratio. Through memory multiplexing technologies such as memory bubbles, physical memory is virtualized into a larger amount of virtual memory for virtual machines, and each virtual machine can fully use the virtual memory allocated to it. This function maximizes memory reuse and resource utilization while ensuring that a running virtual machine always obtains at least its reserved amount of memory, so the business runs reliably. The system administrator can set the virtual machine memory reservation according to actual user needs.

    (e) Dynamic reuse of virtual resources

      When a virtual machine is idle, it can automatically release some of its memory, CPU, and other resources according to the conditions that can be set and return it to the virtual resource pool for the system to allocate to other virtual machines. Users can monitor dynamic resources on the Web interface.

  3. Dynamic adjustment of virtual machine resources

    FusionCompute supports the dynamic adjustment of virtual machine resources, and users can dynamically adjust resource usage according to business load. The dynamic adjustment of virtual machine resources includes the following aspects.

    (a) Adjust the number of vCPUs offline/online

      Regardless of whether the virtual machine is offline (shutdown) or online, users can increase the virtual machine’s number of vCPUs as needed. If you want to reduce the number of vCPUs, the virtual machine needs to be offline. By adjusting the number of vCPUs offline/online, you can meet the demand for flexible computing power adjustment when the business load on the virtual machine changes.

    (b) Offline/online adjustment of memory capacity

      Regardless of whether the virtual machine is offline or online, users can increase the virtual machine’s memory capacity as needed. Like adjusting the number of vCPUs, the virtual machine needs to be offline if the memory capacity is to be reduced. By adjusting the memory capacity offline/online, you can meet the demand for flexible memory adjustment when the business load on the virtual machine changes.

    (c) Offline/online mount or unmount virtual network card

      Whether the virtual machine is online or offline, the user can mount or unmount virtual network cards to meet the business demand for the number of network cards.

    (d) Offline/online mount virtual disk

      Regardless of whether the virtual machine is offline or online, the user can mount virtual disks, increasing the virtual machine’s storage capacity without interrupting the user’s business and enabling flexible use of storage resources.

  4. Distributed resource scheduling and power management

    FusionCompute provides various virtualized resource pools, including computing resource pools, storage resource pools, and virtual networks. Resource scheduling refers to the intelligent scheduling of these virtualized resources according to different loads to balance various resources in the system. While ensuring the high reliability, high availability, and good user experience of the entire system, it effectively improves data center resource utilization.

    FusionCompute supports the following two types of scheduling.

    (a) Load balancing scheduling

      In a cluster, while monitoring the running status of computing servers and virtual machines, if the system finds that the business load differs across the computing servers and exceeds the set threshold, virtual machines can be migrated according to the load balancing policy formulated in advance by the administrator, so that the utilization of resources such as CPU and memory becomes relatively balanced across the computing servers.

    (b) Dynamic energy-saving scheduling

      Dynamic energy-saving scheduling works in conjunction with load balancing scheduling and can be enabled only after load balancing scheduling is turned on. In a cluster, while monitoring the running status of computing servers and virtual machines, if the system finds that the business volume in the cluster has decreased, it concentrates the business on a few computing servers and automatically shuts down the remaining ones; if it finds that the business volume has increased, it automatically wakes up computing servers to share the load.
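A single step of the load balancing policy can be sketched as follows; the threshold, load deltas, and host names are invented for illustration:

```python
# Sketch of one load-balancing scheduling step: when a host's load
# exceeds the configured threshold, migrate a VM from the busiest host
# to the idlest one. Illustrative only, not a real scheduler.

THRESHOLD = 0.8

def rebalance(hosts):
    """Move one VM from the busiest host to the idlest when over threshold."""
    busiest = max(hosts, key=lambda h: h["load"])
    idlest = min(hosts, key=lambda h: h["load"])
    if busiest["load"] > THRESHOLD and busiest["vms"]:
        vm = busiest["vms"].pop()
        idlest["vms"].append(vm)
        busiest["load"] -= 0.3     # assumed per-VM load for the sketch
        idlest["load"] += 0.3

hosts = [{"name": "h1", "load": 0.9, "vms": ["vm1", "vm2", "vm3"]},
         {"name": "h2", "load": 0.2, "vms": ["vm4"]}]
rebalance(hosts)
print(len(hosts[0]["vms"]), len(hosts[1]["vms"]))   # 2 2
```

Energy-saving scheduling runs the same comparison in reverse: when total load is low, it would drain the idlest host entirely and power it off.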

  5. Virtual machine live migration

    FusionCompute supports live migration of virtual machines between hosts attached to the same shared storage. During migration, user business is not interrupted. This function avoids business interruption caused by server maintenance and can also reduce the power consumption of the data center.

3.6.3 FusionCompute Storage Virtualization

This section will introduce FusionCompute’s storage virtualization from two aspects: virtual storage management and virtual storage thin provisioning. The specific content of storage virtualization will be introduced in detail in Sect. 5.5.

  1. Virtual storage management

    Storage virtualization abstracts storage devices as data storage, and virtual machines are stored in their own directories as a set of files in the data storage. Data storage is a logical container, similar to a file system, which hides each storage device’s characteristics and provides a unified model to store virtual machine files. Storage virtualization technology can better manage the storage resources of the virtual infrastructure, so that the system can greatly improve the utilization and flexibility of storage resources and improve the uptime of applications.

    The storage units that can be packaged as data storage are as follows:

    • Logical Unit Number (LUN) divided on SAN storage (including iSCSI or Fibre Channel SAN storage).

    • File system divided on NAS storage.

    • Storage pool on FusionStorage.

    • The local hard disk of the host.

    • The local memory disk of the host.

    Data storage can support the following file system formats:

    • Virtual Image Management System (VIMS): A high-performance file system optimized for storing virtual machines. The host can deploy VIMS data storage on any SCSI-based local or networked storage device, including Fibre Channel, Fibre Channel over Ethernet (FCoE), and iSCSI SAN equipment.

    • NFS: The file system on the NAS device. FusionSphere supports the NFS V3 protocol and can access the NFS disk designated on the NFS server, mount the disk, and meet storage requirements.

    • EXT4: FusionSphere supports server local disk virtualization.
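The unified "data storage" abstraction described above can be modeled as follows. This is a hypothetical sketch (class and field names are invented for illustration): heterogeneous backends such as SAN LUNs, NAS file systems, FusionStorage pools, and local disks are wrapped behind one interface, and each virtual machine is stored as a set of files in its own directory, regardless of the underlying device.

```python
class Datastore:
    """Illustrative model of a data storage object: a logical container
    that hides the characteristics of the backing storage unit."""

    def __init__(self, name, backend, capacity_gb):
        self.name = name            # e.g. "ds01"
        self.backend = backend      # "SAN LUN", "NAS", "FusionStorage", "local"
        self.capacity_gb = capacity_gb
        self.vm_files = {}          # vm name -> list of file paths

    def store_vm(self, vm_name):
        # Each VM lives in its own directory as a set of files
        # (file names here are purely illustrative).
        self.vm_files[vm_name] = [f"{vm_name}/{vm_name}.vhd",
                                  f"{vm_name}/{vm_name}.cfg"]

# The same interface works whatever the backend actually is.
ds = Datastore("ds01", "SAN LUN", 500)
ds.store_vm("vm-web01")
```

Because management code only ever talks to the `Datastore` interface, swapping the backend (say, from a SAN LUN to a FusionStorage pool) does not change how virtual machine files are handled.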

  2.

    Thin provisioning of virtual storage

    Virtual storage thin provisioning is a method of optimizing storage utilization by flexibly allocating storage space on demand. Thin provisioning can virtualize a larger virtual storage space for users than the actual physical storage space. Only the virtual storage space that writes data will allocate physical storage space for it, and the virtual storage space that has not written data does not occupy physical storage resources, thereby improving storage utilization.

Thin provisioning of virtual storage is determined by how the virtual disk is provisioned: administrators can allocate virtual disk files in either “normal” (thick) format or “compact” (thin) format.

  • Storage-independent: Thin provisioning of virtual storage is independent of the operating system and hardware. Therefore, as long as the virtual image management system is used, thin provisioning of virtual storage can be provided.

  • Capacity monitoring: Provides early warning of data storage capacity. A threshold can be set, and an alarm is generated when the used storage capacity exceeds the threshold.

  • Space reclamation: Provides virtual disk space monitoring and reclamation functions. When the storage space allocated to a user is large but actual usage is small, the allocated but unused space can be reclaimed through the disk space reclamation function. Currently, disk reclamation is supported for virtual machine disks in New Technology File System (NTFS) format.
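The on-demand allocation behind thin provisioning, together with the capacity-warning threshold, can be sketched as follows. This is a minimal illustrative model (block size, class, and method names are assumed, not a real storage driver): the disk advertises a large logical size, but physical blocks are allocated only when a region is actually written.

```python
BLOCK_SIZE = 4096  # bytes per allocation unit (assumed for illustration)

class ThinDisk:
    def __init__(self, logical_size, threshold=0.8):
        self.logical_size = logical_size  # size promised to the user
        self.threshold = threshold        # capacity-warning threshold
        self.blocks = {}                  # block index -> data, filled lazily

    def write(self, offset, data):
        # Physical space is allocated only for blocks that are written.
        self.blocks[offset // BLOCK_SIZE] = data

    def physical_used(self):
        return len(self.blocks) * BLOCK_SIZE

    def over_threshold(self, pool_capacity):
        # Capacity monitoring: alarm when usage crosses the threshold.
        return self.physical_used() > self.threshold * pool_capacity

disk = ThinDisk(logical_size=100 * 2**30)  # 100 GiB promised to the user
disk.write(0, b"boot")
disk.write(5 * BLOCK_SIZE, b"data")
```

After these two writes, only two 4 KiB blocks of physical space are consumed even though 100 GiB was provisioned, which is exactly how thin provisioning raises storage utilization.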

3.6.4 FusionCompute Network Virtualization

This section will introduce FusionCompute’s network virtualization from four aspects: virtual network card, network I/O control, distributed virtual switch (DVS), and virtualized network support IPv6. The specific content of network virtualization will be introduced in detail in Sect. 4.4.

  1.

    Virtual network card

    The virtual network card has its own IP address and MAC address. From a network perspective, the virtual network card is the same as the physical network card. FusionCompute supports smart network cards, which can implement multi-queue, virtual switching, QoS, and uplink aggregation functions to improve the I/O performance of virtual network cards.

  2.

    Network I/O control

    The network QoS strategy provides bandwidth configuration control capabilities, including the following aspects.

    • Bandwidth control based on the sending direction of the network plane: Provides bandwidth control functions on a per-plane basis. Based on the physical bandwidth capabilities, the management plane, storage plane, and service plane are each allocated a certain bandwidth quota to ensure that traffic congestion on one plane does not affect the other planes.

    • Bandwidth control based on the sending and receiving directions of port group member interfaces: each member interface of a port group provides traffic shaping and bandwidth priority control capabilities.

    • The QoS function does not support traffic restriction between virtual machines on the same host.
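Traffic shaping of the kind described above is classically implemented with a token bucket (a generic mechanism; the class and parameter names here are assumed, not FusionCompute's API): tokens accrue at the configured average rate, a packet may be sent only if enough tokens are available, and the bucket capacity caps the allowed burst.

```python
class TokenBucket:
    """Illustrative token-bucket shaper for a port's send direction."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8          # token fill rate in bytes/second
        self.capacity = burst_bytes       # maximum burst size
        self.tokens = burst_bytes         # start with a full bucket
        self.last = 0.0

    def allow(self, packet_bytes, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True                   # packet conforms, send it
        return False                      # shaped: delay or drop

bucket = TokenBucket(rate_bps=8_000_000, burst_bytes=1500)  # ~1 MB/s average
first = bucket.allow(1500, now=0.0)   # burst allowance covers this packet
second = bucket.allow(1500, now=0.0)  # bucket now empty, packet is shaped
```

Assigning different rates and burst sizes per port group member interface is what yields the per-interface bandwidth and priority control described above.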

  3.

    Distributed virtual switch

    The distributed virtual switch's function is similar to that of an ordinary physical switch, and each host is connected to the distributed virtual switch. One end of the distributed virtual switch is the virtual ports connected to the virtual machines, and the other end is the uplinks connected to the physical Ethernet adapters on the hosts where the virtual machines are located. Through it, hosts and virtual machines can be connected to realize network intercommunication within the system. In addition, the distributed virtual switch acts as a single virtual switch across all associated hosts, which ensures that virtual machines keep a consistent network configuration when migrating across hosts.
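Why migration preserves network settings can be sketched with a hypothetical model (class and method names are invented for illustration): port-group configuration lives on the single cluster-wide switch object rather than on any one host, so a virtual machine re-attached on another host sees exactly the same settings.

```python
class DistributedVirtualSwitch:
    """Illustrative model: one switch object shared by all hosts."""

    def __init__(self):
        self.port_groups = {}   # name -> settings, shared cluster-wide
        self.attachments = {}   # vm -> (host, port group)

    def define_port_group(self, name, vlan):
        self.port_groups[name] = {"vlan": vlan}

    def attach(self, vm, host, port_group):
        # Re-attaching on a new host models a live migration.
        self.attachments[vm] = (host, port_group)

    def settings_for(self, vm):
        _, pg = self.attachments[vm]
        return self.port_groups[pg]

dvs = DistributedVirtualSwitch()
dvs.define_port_group("pg-app", vlan=100)
dvs.attach("vm01", "hostA", "pg-app")
before = dvs.settings_for("vm01")
dvs.attach("vm01", "hostB", "pg-app")  # migrate to another host
after = dvs.settings_for("vm01")
```

Because the settings are looked up on the shared switch object, `before` and `after` are identical, which is the consistency property the text describes.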

  4.

    Virtualized network supports IPv6

    The system supports configuring and communicating with IPv6 addresses for business plane virtual machines. A virtual machine can use an IPv6 single stack, an IPv4 single stack, or an IPv4/IPv6 dual stack. The dual stack is defined in the RFC 4213 standard and refers to installing both IPv4 and IPv6 protocol stacks on terminal devices and network nodes to achieve intercommunication with IPv4 and IPv6 nodes, respectively. Nodes with IPv4/IPv6 dual protocol stacks are referred to as “dual-stack nodes.” These nodes can send and receive both IPv4 and IPv6 packets: they use IPv4 to communicate with IPv4 nodes and IPv6 to communicate with IPv6 nodes.

    A device interface configured as dual-stack can be assigned an IPv4 address, an IPv6 address, or both. For IPv6 address allocation, virtual machines support using a third-party DHCPv6 server, stateless address autoconfiguration through a hardware gateway, or static IP address injection.
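The dual-stack behavior described by RFC 4213 can be illustrated with a tiny sketch using Python's standard `ipaddress` module (illustrative only; real stack selection also involves DNS resolution and routing policy): a dual-stack node inspects the peer's address family and picks the matching protocol.

```python
import ipaddress

def stack_for_peer(peer_addr):
    """Pick the protocol stack a dual-stack node would use for a peer."""
    addr = ipaddress.ip_address(peer_addr)
    return "IPv6" if addr.version == 6 else "IPv4"

v4 = stack_for_peer("192.0.2.10")     # IPv4 peer -> communicate over IPv4
v6 = stack_for_peer("2001:db8::10")   # IPv6 peer -> communicate over IPv6
```

The example addresses are from the documentation ranges (192.0.2.0/24 and 2001:db8::/32); a single-stack node, by contrast, could reach only peers of its own address family.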

3.7 Desktop Cloud

3.7.1 Introduction to Desktop Cloud

The desktop cloud uses cloud computing and server virtualization technologies to run a computer's desktop as a virtual machine on back-end servers, and provides the desktop virtual machine to end-users as a cloud computing service. Users can access virtualized desktops in the desktop cloud through remote desktop protocols on dedicated thin clients or ordinary personal computers.

The desktop cloud experience is basically the same as that of a personal computer. It performs the same functions as a personal computer, realizing daily office work, graphics/image processing, and other tasks. Desktop cloud brings the benefits of cost reduction, simple management, easy maintenance, safety and reliability to enterprises, and has been recognized and applied by some industries. Desktop cloud technology has large-scale applications in government offices, bank securities, call centers, school computer rooms, R&D and design units, etc.

3.7.2 Desktop Cloud Architecture and Key Technologies

The desktop cloud architecture is shown in Fig. 3.18.

Fig. 3.18 Desktop cloud architecture

The desktop cloud architecture is mainly composed of thin clients, network access, consoles, identity authentication, applications, servers, etc.

  1.

    Thin clients

    A thin client is the device used to access the desktop cloud. It is generally a device with an independent embedded operating system that can connect to desktops on the server through various protocols. To make full use of existing resources and maximize the application of IT resources, the architecture also supports transforming traditional desktops by installing plug-ins that give them the ability to connect to desktops running on the server.

  2.

    Network access

    The desktop cloud provides various access methods for users to connect. Users can connect through wired or wireless networks. These networks can be local area networks or wide area networks. When connecting, they can use ordinary connection methods or secure connection methods.

  3.

    Consoles

    The console can configure the servers running the virtual desktops, such as configuring network connections and storage devices. The console can also monitor some basic performance indicators of the running servers, such as memory usage and CPU usage. If more resources need to be monitored, products such as IBM's Tivoli family can be used.

  4.

    Identity authentication

    An enterprise-level application solution must have a security control solution, and the most important parts of a security solution are user authentication and authorization. In the desktop cloud, users are generally authenticated and authorized through products such as Active Directory or LDAP. With these products, administrators can easily add and delete users, configure passwords, set user roles, assign different permissions to different roles, modify user permissions, and perform other operations.

  5.

    Applications

    Some application scenarios are quite specific. For example, call center operators generally use the same standard desktop and the same standard applications, which basically do not need to be modified. In this scenario, the desktop cloud architecture provides desktops and applications through shared services, so that more services can be provided on a given server.

  6.

    Servers

    In desktop cloud solutions, a common practice is to distribute various applications to virtual desktops, so that users only need to connect to a desktop to use all applications, as if these applications were installed on the desktop. The experience provided to users under this architecture is the same as that of a traditional desktop.

Of course, the architecture shown in Fig. 3.18 is just a rough description of our reference implementation. In a specific application, we should make various decisions in the architecture according to the user’s specific situation. These considerations mainly include the type of user, the scale of the user, the user’s workload, the user’s usage habits, and the user’s requirements for service quality. This is a relatively complicated process.

3.7.3 Typical Application Cases of Desktop Cloud

Compared with traditional personal computers for office work, desktop clouds use thin terminals with low hardware configurations as the main office equipment, offering low cost, information security, easy management, support for mobile office, energy saving, and environmental protection.

As a concrete embodiment of cloud computing technology, traditional server virtualization providers such as VMware, Citrix, and Microsoft have launched desktop cloud solutions. Companies such as IBM and HP have also invested heavily in desktop clouds; examples include IBM's cloud computing smart business desktop solution, Sun's SunRay solution, and Huawei's FusionAccess desktop cloud. The desktop cloud has become a hot spot in the technical field, and major manufacturers will continue to introduce mature overall technical solutions.

Let's take Huawei's desktop cloud application as an example.

Starting in 2009, Huawei began to deploy desktop clouds at its Shanghai Research Institute (Huawei Shanghai Research Institute for short). About 8000 of the 10,000 employees at the Shanghai Research Institute are R&D personnel, mainly engaged in developing technologies and products such as wireless and core networks. These employees no longer need a computer host; they work through thin terminals, LCD monitors, and keyboards. Employees only need to enter an account and password to connect to the virtual servers in the data center and handle daily work anytime, anywhere.

By adopting the desktop cloud system, Huawei Shanghai Research Institute has achieved many resource savings.

In addition, there are many other successful application cases of Huawei's desktop cloud. For example, the Dutch National Television (NPO) adopted Huawei's desktop cloud solution FusionAccess to meet the challenges of the digital new media era. Huawei helped NPO build a new cloud platform to improve work efficiency and reduce operation and maintenance costs, and Huawei's VDI platform allows NPO to perform audio and video processing anytime, anywhere.

3.7.4 Introduction to FusionAccess

FusionAccess, Huawei's desktop cloud product, is a virtual desktop application that deploys a cloud platform on hardware, allowing end-users to access cross-platform applications and entire virtual desktops through thin terminals or other network-connected devices. As of March 2020, Huawei Desktop Cloud had more than 1.1 million users worldwide while maintaining the No. 1 market share in China.

The Huawei FusionAccess solution covers cloud terminals, cloud hardware, cloud software, network and security, consulting and integrated design services, and provides customers with end-to-end cloud office solutions. An example of its solution architecture is shown in Fig. 3.19.

Fig. 3.19 Huawei FusionAccess solution architecture example

The technologies or components involved in FusionAccess are as follows:

  1.

    HDP

    HDP (Huawei Desktop Protocol) is a new-generation virtual desktop transmission protocol developed by Huawei. Through HDP, a thin client can remotely access virtual desktops. It achieves clearer and more detailed text and image display, clearer and smoother video playback, more realistic and fuller sound quality, better compatibility, and lower bandwidth consumption.

  2.

    Application virtualization

    Application virtualization is a set of solutions for delivering applications on demand. It is used to centrally manage applications in the data center and instantly deliver applications to users anywhere and using any device. Application virtualization is based on HDP and is mainly used in four scenarios: simple office, secure Internet access, branch offices, and mobile office.

  3.

    Linux desktop

    Huawei FusionAccess allows enterprise users to use Linux or Windows virtual desktops for office work. Users can log in to and access virtual desktops from thin terminals, portable computers, smartphones, and other devices.

  4.

    Terminal

    The terminal is the main device on the end-user side in the desktop cloud solution. End-users use it to connect to the various virtual desktops on the data center side and attach peripherals for office use. Currently supported terminals include thin clients, portable computers, tablets, smartphones, etc.

    Although FusionAccess has not been on the market for long and its software and products are still being developed and improved, as cloud computing applications continue to deepen, Huawei's FusionAccess desktop cloud solution will offer customers more ideal answers to many challenges of the traditional personal computer office model, such as security, investment, and office efficiency, and bring a new user experience.

3.8 Exercise

  1. Multiple choice.

    1. Which of the following belongs to compute virtualization? ()

      A. CPU virtualization.

      B. Network virtualization.

      C. Memory virtualization.

      D. I/O virtualization.

      E. Disk virtualization.

    2. In Huawei's FusionCompute architecture, the host role is ().

      A. CNA.

      B. UVP.

      C. KVM.

      D. VRM.

    3. Which of the following descriptions show the benefits of virtualization? ()

      A. With virtualization, multiple virtual machines can run simultaneously on a physical host.

      B. With virtualization, the CPU utilization of a physical host can be stabilized at about 65%.

      C. With virtualization, virtual machines can be migrated between multiple hosts.

      D. With virtualization, multiple applications can run simultaneously on the operating system of a physical host.

  2. Answer the following questions.

    1. What is virtualization? What are the characteristics of virtualization?

    2. Briefly describe the relationship and differences between full virtualization and paravirtualization.

    3. Based on what you have learned, briefly introduce the supporting technologies for server virtualization and describe their application scenarios.

    4. How is memory virtualization different from storage virtualization?

    5. What is NUMA? How does NUMA work in virtualization?