1 Introduction

Computer networks are currently used in most companies and represent an important means of interoperability and data communication. As the World Wide Web and its user base expand at a very rapid rate, the performance demands on World Wide Web systems grow accordingly [1]. Since its inception, network deployment has relied on a variety of devices from different manufacturers and with different architectures, often operating at varying speeds. The links in a local area network can operate at different speeds and over different media, such as 1 Gbps or 100 Mbps, copper or fiber [2]. Copper-based communication encodes data as electrical impulses, unlike optical fiber, which uses light signals for this purpose. These two physical means of data communication, in varying degrees of use, must be rendered compatible. However, communication between heterogeneous systems is not always an easy task, and it is not always possible to obtain optimal and predictable results.

A computer network consists of several connected hosts, such as desktops, laptops, and smartphones, among others. In such a heterogeneous client environment, efficient content adaptation and delivery services are becoming a major requirement for the new Internet service infrastructure [3]. However, many of these devices may have different architectures and run different operating systems and applications.

All of the previous elements are part of computer networks, but they are also sources of conflict, which makes it even more difficult to measure the performance of a network. Typical evaluation methods, such as performance benchmarks, are limited in applicability: often they are not representative of the traffic characteristics of any customer facility [4]. The issue of uncertainty, therefore, must be considered. A possible solution could be analysis by experts in the field of computer networks. This approach may not be suitable for all cases, since the professional does not always have deep knowledge of the network to be analyzed. Moreover, although differences exist, some elements are common to all network communications. To establish network communication, there must always be a request from the “client” side. This is the typical request-response protocol, which controls the data transfer between server and client (such as a web browser) [5].

This request, when answered by the “server” side (typically a proxy), produces a corresponding response. Proxy servers are designed with three goals: to decrease network traffic, to reduce user (client) perceived lag, and to reduce the load on the origin servers [6].

Every request from the client passes through the proxy server, which in turn may or may not modify the client request based on its implementation mechanism [7]. The response is accompanied by several attributes that can be used to analyze network performance, and the most representative of these attributes may serve to determine the network’s operating parameters. This work aims to analyze and detect problems in the computer network of a public university with about two hundred hosts, divided between two departments (academic and administrative), with the aid of Paraconsistent Logic. The academic department has six computer labs with twenty hosts each, plus two coordination rooms with ten hosts each. The administrative department has five operating rooms, with approximately fifty hosts, as well as servers, routers, and switches, all connected by copper or fiber optic links and operating fifteen hours a day, five days a week. Each department has different needs and uses different services and applications. The high degree of heterogeneity and uncertainty of the analyzed scenario is therefore clear, which makes it appropriate to use a non-classical logic, the subject of this paper.

2 Methodology

Responsive service plays a critical role in determining end-user satisfaction; in fact, a customer who experiences a large delay after placing a request at a business’s web server often switches to a competitor who provides faster service [8]. The network infrastructure needs constant improvement to satisfy users’ QoS (Quality of Service) demands, in terms of both technology (e.g., faster links, proxies, and servers) and the related software [9].

To parameterize the operation of the network, one day of operation shall be monitored for 15 h, divided into 30-minute intervals. Some of the most significant attributes shall be used, such as:

  • Total network traffic (bytes).

  • Total response time (ms).

  • Average speed (bytes/ms).

  • Number of requests.

  • Number of zero bytes responses.

From the network logs, it is possible to extract the values of the attributes, shown in Table 1:

Table 1. Attribute values obtained from one day of operation of a computer network

The total response time attribute is used to analyze the response time (in milliseconds) of the conducted requests. The total network traffic attribute is the volume of data (in bytes) requested in a given interval. At first, one might think that the higher this value, the more efficient the network operation; however, this attribute is loaded with uncertainty, since it can also denote network congestion. The average speed attribute (in bytes per millisecond) is calculated from the previous two and estimates the use of network bandwidth. The number of requests attribute counts the requests that occurred in a given interval. By itself it is not enough to determine the quality level of the network: a network with many requests may indicate either good performance or a high rate of retransmissions, which is undesirable. The number of zero bytes responses attribute is especially important when considered in conjunction with the number of requests, as it allows differentiating situations in which there is a large number of retransmissions. The obtained attribute values are then tabulated and normalized to the range from 0 to 1. For a contextualized view, Fig. 1 gives a good idea of the network operation from two significant parameters: average speed and number of zero bytes responses:
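As a minimal sketch of how these values can be prepared for analysis (the function names are illustrative, not taken from the paper), the average speed can be derived from the two totals, and each attribute series can be min-max normalized to [0, 1] against its observed operating range:

```python
def average_speed(total_bytes: float, total_response_ms: float) -> float:
    """Average speed (bytes/ms) derived from the traffic and response-time totals."""
    return total_bytes / total_response_ms

def normalize_series(values):
    """Min-max normalization of one attribute's interval values into [0, 1]."""
    vmin, vmax = min(values), max(values)
    return [(v - vmin) / (vmax - vmin) for v in values]
```

For example, `normalize_series([200, 500, 800])` maps the smallest interval value to 0.0 and the largest to 1.0.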

Fig. 1.
figure 1

Comparison between average speed and number of zero bytes responses

With the values obtained, it is possible to analyze specific scenarios in the operation of a network by assigning evidence degrees (favorable or unfavorable) using the Paraconsistent Annotated Evidential Logic Eτ.

The concepts of Paraconsistent Logic Eτ will be used from this point on. According to Abe [10]: “The atomic formulas of the logic Eτ are of the type p(μ, λ), where (μ, λ) ∈ [0, 1]² and [0, 1] is the real unitary interval (p denotes a propositional variable)”. Therefore, p(μ, λ) can be intuitively read: “It is assumed that p’s favorable evidence is μ and unfavorable evidence is λ.” This leads to the following readings:

  • p (1.0, 0.0) can be read as a true proposition,

  • p (0.0, 1.0) as false,

  • p (1.0, 1.0) as inconsistent,

  • p (0.0, 0.0) as paracomplete, and

  • p (0.5, 0.5) as an indefinite proposition.

To determine the uncertainty and certainty degrees, the formulas are [11]:

  • Uncertainty degree: Gun(μ, λ) = μ + λ − 1 (0 ≤ μ, λ ≤ 1);

  • Certainty degree: Gce(μ, λ) = μ − λ (0 ≤ μ, λ ≤ 1);

An order relation is defined on [0, 1]2: (μ1, λ1) ≤ (μ2, λ2) ⇔ μ1 ≤ μ2 and λ2 ≤ λ1, constituting a lattice that will be symbolized by τ.
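The two degree formulas above translate directly into code; the following is a straightforward sketch:

```python
def g_un(mu: float, lam: float) -> float:
    """Uncertainty degree: Gun(mu, lam) = mu + lam - 1, with mu, lam in [0, 1]."""
    return mu + lam - 1

def g_ce(mu: float, lam: float) -> float:
    """Certainty degree: Gce(mu, lam) = mu - lam, with mu, lam in [0, 1]."""
    return mu - lam
```

For example, the inconsistent annotation (1.0, 1.0) yields Gun = 1 and Gce = 0, while the true annotation (1.0, 0.0) yields Gun = 0 and Gce = 1.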

With the uncertainty and certainty degrees, it is possible to distinguish the following 12 output states, shown in Table 2.

Table 2. Extreme and non-extreme states

All states are represented in Fig. 2:

Fig. 2.
figure 2

Decision-making states of lattice τ
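A sketch of how the four extreme states can be decided from the two degrees; the 0.5 threshold separating extreme from non-extreme (quasi) states is an assumption of this example, not a value fixed by the paper:

```python
def decision_state(mu: float, lam: float, threshold: float = 0.5) -> str:
    """Map an annotation (mu, lam) to a decision state of the lattice.

    threshold: assumed boundary between extreme and non-extreme states.
    """
    gce = mu - lam        # certainty degree
    gun = mu + lam - 1    # uncertainty degree
    if gce >= threshold:
        return "true"
    if gce <= -threshold:
        return "false"
    if gun >= threshold:
        return "inconsistent"
    if gun <= -threshold:
        return "paracomplete"
    return "non-extreme"  # one of the eight quasi-states of Table 2
```

The four corners of the lattice map to the four extreme states, and the indefinite annotation (0.5, 0.5) falls in the non-extreme region.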

Based on the attribute values obtained from one day of operation of the computer network, two different scenarios, taken from two time intervals on another day of operation, will be analyzed in order to verify the operation of the network.

In the selected intervals, the following values were obtained, as shown in Table 3:

Table 3. Network attributes from two assessed scenarios

A computer network operating at high speed within its parameters is taken as favorable evidence; therefore, the average speed attribute can be considered a directly proportional quantity. The same argument applies to the number of requests attribute, since it indicates that the network has been operating at full working capacity to meet user demands. With the zero bytes responses attribute, the opposite occurs: a high number of zero-byte responses indicates that the searched resources could not be found, so it can be considered an inversely proportional quantity.

In both evaluated scenarios, the attribute values shall be normalized based on the operating values of the computer network. These values shall be used as the degrees of favorable evidence for the average speed and number of requests attributes, as directly proportional quantities. The opposite applies to the number of zero bytes responses attribute: in this case, the favorable evidence shall be defined as its denial (the complement of the normalized value). The favorable (μ) and unfavorable (λ) evidence degrees are taken from the normalized attribute values and are presented in Table 4:
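A sketch of this evidence assignment; note that taking the unfavorable evidence as the complement of the favorable evidence (λ = 1 − μ) is a simplifying assumption of this example, since the paper tabulates λ in Table 4 without stating its derivation:

```python
def evidence(normalized_value: float, inversely_proportional: bool = False):
    """Favorable/unfavorable evidence (mu, lam) for one normalized attribute.

    For inversely proportional attributes (zero bytes responses), the
    favorable evidence is the denial (complement) of the normalized value.
    lam = 1 - mu is a simplifying assumption of this sketch.
    """
    mu = 1 - normalized_value if inversely_proportional else normalized_value
    return mu, 1 - mu
```

For instance, a fully saturated zero-byte-responses reading of 1.0 yields the annotation (0.0, 1.0), i.e., strong unfavorable evidence for normal operation.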

Table 4. Normalized values and favorable (μ) and unfavorable (λ) evidences of the attributes

After the parameterization of the network attributes, the proposition “The computer network is functioning within its normal operating values” shall be analyzed. For this purpose, the Para-analyzer will be applied to scenarios 1 and 2, shown in Figs. 3 and 4, respectively:

Fig. 3.
figure 3

Analysis of scenario 1 result by the Para-analyzer algorithm

Fig. 4.
figure 4

Analysis of scenario 2 result by the Para-analyzer algorithm

The global analysis is calculated by multiplying the favorable evidence degrees (μ) by their respective weights (all equal, in both scenarios) and summing the results. The same is done for the unfavorable evidence degrees (λ) [11].
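The weighted combination described above can be sketched as follows (equal weights by default, as in both scenarios; the function name is illustrative):

```python
def global_analysis(annotations, weights=None):
    """Combine per-attribute annotations (mu, lam) into a single pair.

    annotations: list of (mu, lam) tuples, one per attribute.
    weights: optional weights; defaults to equal weights summing to 1.
    """
    if weights is None:
        weights = [1 / len(annotations)] * len(annotations)
    mu = sum(w * m for w, (m, _) in zip(weights, annotations))
    lam = sum(w * l for w, (_, l) in zip(weights, annotations))
    return mu, lam
```

The resulting pair (μ, λ) is then located on the lattice of Fig. 2 to read off the global decision state.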

3 Analysis of the Results

In scenario 1, the global analysis presents a quasi-false result, tending to paracomplete and inconsistent, for the proposition of normal network performance. Although the zero bytes responses attribute has high favorable evidence, this was not enough to indicate standard operation, since the other two attributes did not support that result. Diagnosis: the network analyzed in scenario 1 is not congested, given the low number of requests, and is able to locate the searched resources; nevertheless, it operates at low speed, which leads to the conclusion that the network is underutilized, or that the network infrastructure project was oversized.

In scenario 2, the global analysis presents a quasi-true result, tending to paracomplete and inconsistent, for the proposition of normal network performance. The high average speed and number of requests indicate full use of the network capacity. However, the network begins to show clear signs of degradation due to the high number of zero bytes responses. Diagnosis: the network analyzed in scenario 2 operates at a high degree of utilization, with early congestion signs and performance degradation.

4 Conclusion

As seen in both presented scenarios, determining the parameters of a computer network is a complex task. Given the network’s uncertain and contradictory characteristics and its dynamic operation, the Paraconsistent Annotated Evidential Logic Eτ emerges as an important tool for the analysis of this type of environment.

Some possible solutions for scenario 1:

  • Downsizing: sale or exchange of network devices (adapters, switches, routers) whose nominal capacity exceeds the needs of the network.

  • When possible, sharing or assignment of the installed infrastructure to another company or institution.

  • Outsourcing services for companies that do not wish to have their own infrastructure.

Some possible solutions for scenario 2:

  • Determine whether the congestion problem is systemic or occurs at only a few hosts. This can be done by applying the Para-analyzer to different hosts of the network and comparing the results with those initially obtained from the operating parameters.

  • If the problem occurs at only a few hosts, the solution is the physical or logical correction of the affected host(s). This task is usually simple, and its resolution can be performed by a computer technician.

  • If the problem is systemic, the analysis shall consider the possibility of upgrading (where possible) or even exchanging switches or routers for others with higher capacity.