
1 Introduction

Cloud computing is a technology built on distributed computing, parallel computing and network computing, and it is also an emerging business model. In just a few years it has had a profound impact on the development of society, and it has now spread across virtually every field of the IT industry.

JCOS (Jie Cloud Operating System) is an enterprise-level OpenStack management platform. It is a SaaS cloud computing management platform that allows enterprise-level users to manage multiple cloud resources in a unified way. Through the combined application of technologies such as hyper-convergence, software-defined networking, containers, and automated operation and maintenance, an enterprise can quickly realize the “cloudification” of its IT infrastructure at minimal initial cost. At the same time, as the enterprise scales and its business grows, the product supports “building block stacking”, that is, flexible expansion and on-demand upgrades.

1.1 Structure System of JCOS Cloud Platform

JCOS is a mature cloud computing product: a professional cloud computing management platform developed on the OpenStack open-source architecture. By deploying the JCOS platform, users obtain convenient, safe, and reliable cloud computing services. It integrates management, computing, network, storage and other services, and through the UDS all-in-one appliance it delivers out-of-the-box cloud services. The JCOS architecture is shown in Fig. 1 below.

Fig. 1. JCOS architecture diagram

There are four core units in the JCOS architecture: the computing unit, the network unit, the storage unit, and the management unit. Together, these four units provide a powerful cloud computing service experience.

2 Enterprise-level JCOS Cloud Platform Design

In order to improve deployment efficiency and reduce errors caused by manual configuration, this solution uses Fuel, the open-source OpenStack deployment tool, as a customized JCOS deployment end. Automated deployment with Fuel speeds up the process and avoids the mistakes that manual configuration can introduce. The Fuel master controller therefore needs to be prepared before deployment; it can be installed on either a physical machine or a virtual machine, and is generally deployed on a virtual machine.
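Before proceeding, it is worth confirming that the Fuel master is reachable from the deployment network. The short check below is only a sketch: the address 10.20.0.2, port 8000 and the /api/version endpoint are stock Fuel defaults and may differ in a customized JCOS deployment end.

```python
import requests

# Assumed Fuel master address on the admin/PXE network (a common Fuel default).
FUEL_MASTER = "http://10.20.0.2:8000"

def fuel_master_reachable(base_url: str = FUEL_MASTER, timeout: float = 5.0) -> bool:
    """Return True if the Fuel REST API answers on its version endpoint.

    /api/version is the stock Fuel path; a customized JCOS deployment end
    may expose a different one.
    """
    try:
        resp = requests.get(f"{base_url}/api/version", timeout=timeout)
        return resp.status_code == 200
    except requests.RequestException:
        return False

if __name__ == "__main__":
    print("Fuel master reachable:", fuel_master_reachable())
```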

The basic deployment process of the JCOS platform is shown in Fig. 2.

Fig. 2. Deployment process

2.1 Server Configuration

Virtualization Settings

Since the UDS nodes will be used as compute nodes, hardware virtualization support must be enabled on each compute node, as shown in Fig. 3.

Fig. 3. VT enable
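Once a node has booted an operating system, a quick check can confirm that the BIOS toggle of Fig. 3 actually took effect. The sketch below is Linux-specific: /dev/kvm only appears when hardware virtualization is both supported by the CPU and enabled in the firmware.

```python
import os
import subprocess

def vt_effective() -> bool:
    """Check that the BIOS virtualization setting took effect.

    Loading kvm_intel (or kvm_amd) only yields /dev/kvm when VT-x/AMD-V is
    supported by the CPU and enabled in the firmware.
    """
    subprocess.run(["modprobe", "kvm_intel"], check=False)  # fails harmlessly on AMD hosts
    subprocess.run(["modprobe", "kvm_amd"], check=False)    # fails harmlessly on Intel hosts
    return os.path.exists("/dev/kvm")

if __name__ == "__main__":
    print("Hardware virtualization effective:", vt_effective())
```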

Server Startup Sequence

After virtualization has been configured, set the server's boot order so that the hard disk is the first boot device and the network is the second. If the firmware offers both UEFI and Legacy boot modes, select Legacy.

Configure Node IPMI Address

The node IPMI address can be set in the BIOS, or the server's management interface can be opened in a browser at the default IPMI address and modified there. To set it in the BIOS, go to the BMC network configuration under Server Mgmt.
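For nodes whose operating system is already running, the same setting can be scripted with the standard ipmitool "lan set" commands instead of entering the BIOS. The addresses below are placeholders, the LAN channel is usually 1 but varies by vendor, and root privileges plus the IPMI kernel modules are required.

```python
import subprocess

# Hypothetical addressing for one node's BMC; adjust per the site plan.
IPMI_CHANNEL = "1"             # LAN channel is usually 1, but varies by vendor
IPMI_ADDR = "192.168.10.21"
IPMI_NETMASK = "255.255.255.0"
IPMI_GATEWAY = "192.168.10.1"

def run(args):
    """Run one ipmitool command and fail loudly if it returns non-zero."""
    subprocess.run(args, check=True)

def configure_bmc():
    """Set a static IPMI (BMC) address from the node's own OS."""
    run(["ipmitool", "lan", "set", IPMI_CHANNEL, "ipsrc", "static"])
    run(["ipmitool", "lan", "set", IPMI_CHANNEL, "ipaddr", IPMI_ADDR])
    run(["ipmitool", "lan", "set", IPMI_CHANNEL, "netmask", IPMI_NETMASK])
    run(["ipmitool", "lan", "set", IPMI_CHANNEL, "defgw", "ipaddr", IPMI_GATEWAY])
    run(["ipmitool", "lan", "print", IPMI_CHANNEL])  # show the result for verification

if __name__ == "__main__":
    configure_bmc()
```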

Hard Disk RAID Settings

Following the prompt shown when the server starts, press Ctrl + R to enter the RAID card configuration interface and create the appropriate RAID configuration. RAID 5 can be configured to improve the reliability of data storage.
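The capacity cost of this choice is easy to quantify: RAID 5 spends one disk's worth of space on parity and tolerates exactly one disk failure. A small illustrative calculation (with made-up disk sizes):

```python
def raid5_summary(disk_count: int, disk_size_tb: float) -> dict:
    """Usable capacity and fault tolerance of a RAID 5 array.

    RAID 5 stripes data with one parity block per stripe, so one disk's
    worth of capacity goes to parity and the array survives exactly one
    disk failure.
    """
    if disk_count < 3:
        raise ValueError("RAID 5 needs at least 3 disks")
    return {
        "raw_tb": disk_count * disk_size_tb,
        "usable_tb": (disk_count - 1) * disk_size_tb,
        "disk_failures_tolerated": 1,
    }

# Example: a 4 x 4 TB array keeps 12 TB usable and tolerates one failed disk.
print(raid5_summary(4, 4.0))
```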

2.2 Cloud Computing Network Planning

The servers participating in the JCOS deployment are called nodes. They are interconnected through an external switch, and VLANs or ports on that switch are isolated to form the networks. The JCOS platform uses six deployment networks, which are listed in Table 1.

Because the servers have a large number of network interfaces, these networks are isolated directly by port. In this case it is only necessary to determine the mapping between the different networks and the different network interfaces. The connection topology of the UDS server is shown in Fig. 4.

Fig. 4. Connection topology

Isolating the networks by port eliminates the need to plan VLANs. Generally, only the IP addresses of the external network and the floating network need to be planned; the other networks keep their default addresses. Table 2 gives the specific plan.

Table 1. Deployment networks
Table 2. Network planning scheme
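As a small aid to this planning step, the sketch below uses Python's ipaddress module to check that a chosen floating IP range falls inside the external network. The subnets shown are hypothetical placeholders standing in for the concrete values of Table 2.

```python
import ipaddress

# Hypothetical subnets; only the external and floating ranges need
# site-specific planning, the other deployment networks keep defaults.
EXTERNAL_NET = ipaddress.ip_network("192.168.100.0/24")
FLOATING_RANGE = ("192.168.100.130", "192.168.100.254")

def check_floating_range(net, first, last):
    """Confirm the floating IP range sits inside the external network."""
    lo, hi = ipaddress.ip_address(first), ipaddress.ip_address(last)
    return lo in net and hi in net and lo < hi

print("Floating range valid:", check_floating_range(EXTERNAL_NET, *FLOATING_RANGE))
```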

3 Enterprise-level JCOS Cloud Platform Deployment Plan

The following conditions must be prepared for the deployment of the JCOS platform on the fuel deployment side.

  • The node server and the deployment server are connected to the same Layer 2 network.

  • The node server is set to boot from PXE first (a sketch of configuring this remotely over IPMI follows this list).

  • The node server must enable hardware virtualization in the BIOS settings.
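Where the servers' BMCs are reachable, the PXE-first boot order and power-on can be applied remotely with standard ipmitool commands, as sketched below; the BMC addresses and credentials are placeholders.

```python
import subprocess

# Hypothetical BMC addresses and credentials for the node servers.
NODES = ["192.168.10.21", "192.168.10.22", "192.168.10.23"]
IPMI_USER, IPMI_PASS = "admin", "admin"

def ipmi(host, *args):
    """Issue one out-of-band ipmitool command against a node's BMC."""
    subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host,
         "-U", IPMI_USER, "-P", IPMI_PASS, *args],
        check=True,
    )

for host in NODES:
    # Make PXE the persistent first boot device, then power the node on
    # so it can be discovered by the Fuel/JCOS deployment end.
    ipmi(host, "chassis", "bootdev", "pxe", "options=persistent")
    ipmi(host, "chassis", "power", "on")
```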

3.1 OpenStack Environment

Choose the “HA multi-node” deployment mode. In this mode an odd number of controller nodes must be deployed, and the basic services of the cluster are guaranteed high availability.
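As a minimal illustration of this constraint, the helper below simply validates a proposed controller count; treating three controllers as the minimum is an assumption based on common quorum practice rather than a JCOS-specific rule.

```python
def valid_ha_controller_count(n: int) -> bool:
    """HA multi-node mode expects an odd number of controllers; three is
    the usual minimum so that a quorum survives one controller failure."""
    return n >= 3 and n % 2 == 1

assert valid_ha_controller_count(3)
assert not valid_ha_controller_count(2)   # an even count cannot hold a clean quorum
assert not valid_ha_controller_count(1)   # a single controller offers no HA
```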

If OpenStack is deployed on physical machines, select “KVM”; if OpenStack is being tested inside virtual machines, select “QEMU”. This deployment runs on hardware, so “KVM” is selected.
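A tiny helper can make the same decision programmatically. It is only a sketch: it assumes a Linux host and relies on the fact that a plain virtual machine (without nested virtualization) does not expose the vmx/svm CPU flags to its guest.

```python
def suggested_hypervisor(cpuinfo_path: str = "/proc/cpuinfo") -> str:
    """Return "KVM" when the host CPU exposes VT-x ("vmx") or AMD-V ("svm"),
    which is the case on physical servers (and nested-virt-enabled VMs);
    otherwise fall back to "QEMU" full emulation."""
    with open(cpuinfo_path) as f:
        flags = f.read()
    return "KVM" if ("vmx" in flags or "svm" in flags) else "QEMU"

print("Suggested hypervisor type:", suggested_hypervisor())
```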

3.2 Node Allocation

After entering the main interface of the cloud platform, power on the node servers; they automatically obtain an IP address and load the bootstrap operating system. Once a node has been discovered, it appears in the pool of unallocated nodes.
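The discovered nodes can also be inspected programmatically. The sketch below is written against the stock Fuel REST API (GET /api/nodes on the Fuel master, here assumed at 10.20.0.2:8000); a customized JCOS deployment end may use different endpoints or require an X-Auth-Token header.

```python
import requests

FUEL_API = "http://10.20.0.2:8000/api"   # assumed Fuel master admin address

def unallocated_nodes():
    """List discovered nodes that are not yet assigned to any environment.

    A discovered-but-unassigned node carries no cluster id in the stock
    Fuel API; depending on the version, authentication may also be needed.
    """
    nodes = requests.get(f"{FUEL_API}/nodes", timeout=10).json()
    return [n for n in nodes if not n.get("cluster")]

for node in unallocated_nodes():
    print(node.get("id"), node.get("mac"), node.get("status"))
```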

3.3 Assigning Roles

Select the nodes to be allocated from the unallocated node pool and assign the corresponding roles according to demand. If there is only a single device, all roles must be assigned to that node.
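A simple role plan can be written down before clicking through the interface. The sketch below only illustrates the mapping described above, using the usual Fuel role names (controller, compute, cinder); the actual assignment is still performed in the JCOS interface.

```python
# Illustrative role plan only; role names follow the usual Fuel conventions.
SINGLE_NODE_ROLES = ["controller", "compute", "cinder"]

def plan_roles(node_ids, single_device=False):
    """Map node ids to role lists: one all-in-one node, or a simple split
    of one controller plus compute/storage nodes."""
    if single_device:
        return {node_ids[0]: SINGLE_NODE_ROLES}
    plan = {node_ids[0]: ["controller"]}
    for nid in node_ids[1:]:
        plan[nid] = ["compute", "cinder"]
    return plan

print(plan_roles([1], single_device=True))   # single device: all roles on one node
print(plan_roles([1, 2, 3]))                 # small cluster: controller + compute/storage
```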

4 Result Analysis

After the JCOS cloud platform has been deployed, a Windows cloud host and a CentOS cloud host are created. With the same host configuration (2 CPU cores and 2 GB of memory), creating a cloud host on the JCOS cloud platform takes less than half the time required by traditional VM virtualization; the results are shown in Fig. 5.

Fig. 5. Comparison of results
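For reference, a comparison like the one in Fig. 5 can be reproduced by timing cloud host creation through the OpenStack SDK. The sketch below assumes a clouds.yaml entry named jcos and placeholder image, flavor (2 vCPU / 2 GB) and network ids.

```python
import time
import openstack  # openstacksdk; reads credentials from clouds.yaml

def time_server_create(conn, name, image_id, flavor_id, network_id):
    """Measure how long one cloud host takes to reach ACTIVE state."""
    start = time.monotonic()
    server = conn.compute.create_server(
        name=name,
        image_id=image_id,
        flavor_id=flavor_id,
        networks=[{"uuid": network_id}],
    )
    conn.compute.wait_for_server(server)
    return time.monotonic() - start

# 'jcos' is a hypothetical cloud entry; the image, flavor and network ids
# are placeholders for the test environment described above.
conn = openstack.connect(cloud="jcos")
elapsed = time_server_create(conn, "centos-test", "IMAGE_ID", "FLAVOR_ID", "NET_ID")
print(f"Cloud host active after {elapsed:.1f} s")
```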

5 Conclusion

As the first truly enterprise-level OpenStack cloud management platform in China, JCOS has been widely used in education, healthcare, government, IDC, operators and other industries. It offers high performance, stable operation, broad compatibility, and rapid deployment. Platform monitoring and management and log maintenance are important parts of platform operation and maintenance. After an enterprise deploys a private cloud, maintaining and updating its systems becomes faster, and the network administrators' workload becomes relatively light.