1 Introduction

Cloud computing is a computing model enabling access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort [1] through three business model approaches: software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS) [1, 2]. These cloud service models have become key tools for industry, academia and society [3]. This has produced a migration of complex tasks such as management, processing and storage to the cloud [4], where these services are consumed online by organizations [5]. The result has been an explosive demand for cloud resources, opening new research areas, specifically in virtualization topics [2].

Microservice architecture is a dynamic cloud data processing and storage paradigm that has emerged as a software development method for building large software applications. The basic idea is to split large services into several independently deployable software pieces called microservices. A microservice is an abstraction that encapsulates a system component (application) with a set of management software (input and output interfaces, load balancers, etc.) into a single “small” service. This technique allows microservices to interconnect with other microservices to create larger services [6]. Microservice architecture can provide cloud services with the following features: modularity, isolation, elasticity, reusability [7, 8], and scalability, as well as straightforward implementation and maintenance [9, 10]. As a result, microservice architecture arises as a suitable architectural style for the adoption of cloud technologies [11, 12].

Design patterns are used to establish controls over the behavior of a system of microservices [13] by assigning roles and responsibilities to the microservices. These controls consider the execution order of microservices as well as the management of workload and I/O operations. Each design pattern focuses on creating a generic scheme of task execution and data management, producing a given behavior that is suitable for solving a given problem. With a design pattern it is possible to determine how microservices are executed: sequentially, in parallel, concurrently or in a distributed manner. For example, the pipe&filters, manager–worker and divide&conquer processing patterns were designed to produce pipelines, distribution and parallelism, respectively [13,14,15] (see examples in Fig. 1).

Fig. 1

Examples of microservice systems by using design patterns: A pipe&filters, B manager–worker, C divide&conquer

As can be seen, the stages of these patterns are assigned to microservices to build distributed and parallel systems of microservices for solving problems that arise when processing large volumes of data, reusing the original code [11, 16, 17]. For example, the pipes&filters pattern makes it possible to couple microservices into a system following a linear sequence, the manager–worker pattern regulates the distribution of workload to the microservice replicas, and, in a similar way, divide&conquer distributes workload through concurrent and parallel processing using microservice replicas. It is expected that a system based on processing patterns can improve the system quality, speed and accessibility [18] without modifying the code of the applications encapsulated into the microservice abstractions.

To manage microservices in a PaaS model, cloud providers employ virtual container managers that deploy microservices on PaaS as containerized processing patterns by using a service mesh approach, which enables the discovery, routing, traffic segmentation, traceability, monitoring, security and communication among microservices [16, 19].

Manager–worker and pipes&filters are the patterns most commonly chosen when integrating microservice systems into cloud services using a service mesh approach [20]. This is because existing service meshes only support the pipes&filters pattern [15], while the manager–worker pattern must be deployed through the underlying container management system, i.e., Kubernetes (K8s) [14].

In practice, the microservice system designer basically designs systems using pipes&filters patterns and delegates the use of other patterns to lower layers [21].

Although designers commonly create microservice systems using a sequential pattern (e.g., pipes&filters), a single pattern may not provide microservices with the features required in real scenarios. Examples include handling large data volumes through multiple chained applications (i.e., a composite pipes&filters of pipes&filters pattern), or processing large digital products (e.g., medical images, satellite images, data lakes, backups) through parallel processing, in which case a composite divide&conquer pattern is better suited to this type of work.

To improve system efficiency, users could combine, for example, the original pipe&filter pattern with other patterns (e.g., the divide&conquer pattern), enhancing workload management and processing. The combination of patterns would improve the efficiency of the original pattern. However, current service meshes only support the management of the pipes&filters processing pattern; other patterns (e.g., the manager–worker pattern) can be implemented by the underlying container manager (e.g., Kubernetes), which is usually capable of supporting distribution patterns [14, 21]. Working directly with the container management system, however, becomes more complicated for users.

Some current alternatives for creating processing patterns are specialized tools such as OpenShift [22] or Jenkins [23]. However, these alternatives require designers to take on workload management and the corresponding interconnection with service mesh managers [20] and virtual container managers [21].

This paper presents an approach to facilitate the creation of processing patterns and the conversion of these patterns into microservice systems, following a service mesh strategy. The main contributions of the paper are the following:

  • A microservice system construction strategy based on decentralized structures called Eblocks (Eblock Peers). This design-driven strategy enables developers to create processing patterns to be integrated into a microservice system. The Eblock design principle focuses not only on decoupling the processing of applications encapsulated into microservices from data and task management, but also on reducing the dependency on a central plane, which is mandatory in traditional service mesh approaches [15, 24].

  • An orchestrator scheme based on the Infrastructure as Code (IaC) approach. Our orchestrator scheme materializes the designs created by users in the form of containerized application systems, producing an automatic materialization of processing patterns and limiting the participation of developers to the design phase. It is assumed that a virtual container platform is already installed and ready to instantiate virtual containers, which is expected according to technological trends [25]. An IaC strategy suggests automating the configuration of system dependencies and the provisioning of local and remote instances. The IaC goal is to streamline and automate the process of managing and deploying infrastructure, making it more efficient, scalable, and less error-prone [26].

We carried out a real-life case study, where a group of existing applications [27,28,29] collaborating in a traditional workflow had to be converted into a microservice system that integrates flexible processing patterns (parallel and distributed) following a service mesh approach to improve the overall performance of the system. For comparison purposes, the implementation of the microservices system was carried out using two service mesh approaches: one utilizing our proposal, which is based on a special structure called Eblock and an application model for creating integrated microservices systems with processing patterns, and another one employing the Istio platform [15], a popular service mesh found in the literature. Encouraging results motivate the adoption of the Eblock approach.

The paper is structured as follows: Sect. 2 describes the design principles of Eblock and provides details of its internals, representing the main scaffolding for building different processing patterns following an implicit service mesh approach. Section 3 describes an application model to integrate and deploy processing patterns in a microservice system using Eblock structures. The evaluation scenario to validate our proposal is presented in Sect. 4. The experiments conducted and the results obtained are shown in Sect. 5. Section 6 analyzes the most relevant related work, and Sect. 7 presents conclusions and future work.

2 A new approach to create processing patterns in microservice applications

A design pattern is a general, reusable solution to a commonly occurring problem in software design, which may or may not include a structure. A processing pattern is a type of design pattern that addresses the organization and execution of computational tasks, particularly those related to data processing, promoting modularity, flexibility and scalability. Our approach to facilitate the creation of processing patterns and the conversion of these patterns into microservice applications, following a service mesh strategy, considers three main phases: (1) design and construction, (2) generation and (3) deployment.

In the design and construction phase, users provide all necessary information to build processing patterns using a basic computational structure called Eblock. This structure encapsulates traditional components considered in a microservices architecture (e.g., an application or function and its software dependencies and environment variables) and the required service mesh components for authentication, I/O, workload, discovery, and monitoring. The encapsulation process produces an abstract component that considers implicit service mesh management, enabling designers to create different combinations of processing patterns by assigning a role to each Eblock according to the chosen processing pattern. Since every Eblock has a role when a processing pattern is defined, along with the necessary information to communicate among them, the involved Eblocks can organize themselves to create the defined processing pattern, collectively forming a microservices application. The integration of the configured Eblocks results in a service mesh with a repository of Eblocks that can be discovered, downloaded, shared, and executed.

In the generation phase, the information obtained from the first phase will be interpreted to generate a configuration file (JSON file) that describes the specifications of each Eblock intended for use in a processing pattern. This pattern will then be deployed as a microservice application using a container management system, such as Kubernetes.

During the deployment phase, an orchestrator creates images of the Eblocks considered in the designed pattern and invokes the container management system to generate instances and the computational context of the microservice application. This process ensures the realization of the behavior defined by the processing pattern.

2.1 Eblock: a decentralized structure to create processing patterns

As mentioned earlier, our approach to constructing microservice systems is based on a fundamental structural unit called Eblock, utilized in the creation of processing patterns. This structure is illustrated in Fig. 2.

Fig. 2

Eblock’s components

The Eblock structure is an abstract element that is represented by the following five components:

  • Processing microservice (PM): It represents the main purpose of the Eblock. A PM executes a transformation/processing on incoming data. The transformation process involves selecting an application for execution by a designated microservice, such as a database, machine learning application, or a specialized function like encoding, replication, encryption, or compression. A PM can deliver its results through an output gateway. The application within the PM is defined by the code provided by developers, and this code is not modified by an Eblock.

  • Workload manager (WM): It is responsible for distributing the workload to Eblocks participating in complex processing patterns such as Manager–Worker and Divide&Conquer. For example, if an Eblock is chosen to participate in a Divide&Conquer processing pattern and receives the Divide role, then the WM component will be responsible for splitting the input data and distributing the resulting data segments to the Eblocks that received the role of workers. Worker Eblocks (microservices) will receive and process the incoming data to deliver the processed data through their output interfaces. During execution time, WM can gather information on the computing resources of the Eblock Workers involved in this processing pattern. This information allows WM to make informed decisions on how to distribute the data (workload), preventing the saturation of Eblocks with low resources. WM is not necessary in processing patterns where workload distribution is not required.

  • Discovery (Dis): This component is responsible for locating Eblocks in the service mesh through a distributed hash table (DHT). The table is constructed using a peer-to-peer (P2P) network, established among the various Eblocks during execution. The discovery component of an Eblock initiates the Eblock P2P network. The DHT was implemented using a version of Chord [30], adapted to enhance the key-value searching process in the DHT discovery scheme.

  • Authentication (Aut): This component is responsible for executing the authentication process between the Eblock and a container manager. An Eblock becomes authenticated by the mesh manager when it presents a valid authentication token. The first time a valid user registers an Eblock with the mesh manager, the manager assigns a unique code (authentication token) to the Eblock. This token enables the Eblock to be recognized as a potential element in a processing pattern and to be available in the service mesh.

  • Monitoring (Mon): This component logs the results of tasks performed by other Eblock components, including the PM, discovery, and authentication, in files accessible to users via REST requests.
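The divide-and-distribute behavior described for the WM component can be sketched in a few lines. This is a minimal illustration, assuming a capacity-weighted distribution policy; the function name, the capacity map, and the policy itself are hypothetical, not the paper's actual implementation:

```python
# Sketch: a WM with the Divide role splitting input items among worker
# Eblocks in proportion to their reported capacity, so low-resource
# Eblocks are not saturated. The weighting policy is an assumption.

def split_workload(items, worker_capacities):
    """Assign items to workers proportionally to capacity."""
    total = sum(worker_capacities.values())
    workers = list(worker_capacities)
    assignments = {w: [] for w in workers}
    credits = {w: 0.0 for w in workers}
    for item in items:
        # Each worker accrues credit at a rate given by its capacity share;
        # the item goes to the worker whose share is most behind.
        for w in workers:
            credits[w] += worker_capacities[w] / total
        target = max(workers, key=lambda w: credits[w])
        credits[target] -= 1.0
        assignments[target].append(item)
    return assignments

segments = split_workload(list(range(10)), {"worker1": 2, "worker2": 1})
```

With capacities 2:1, worker1 ends up with roughly twice as many data segments as worker2.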

As can be observed, the principles of an Eblock focus not only on decoupling the processing of applications encapsulated into the microservices from data and task management, but also on reducing the dependency on a central plane, which is mandatory in traditional service mesh approaches [15].

Considering that a computational model is an abstract representation of a system that captures its essential features to understand its behavior, the Eblock structure is presented as a set of components (or independent microservices) that respond to the main requirements for creating a processing element, including properties for being coupled in a processing pattern following a service mesh approach.

This set of components running together represents one basic generic structure (Eblock) that could be part of a processing pattern. The purpose of every component in an Eblock is summarized as follows: PM represents the computing assignment that will be given to the Eblock, WM determines the role that the Eblock will have in a processing pattern (Filter, Manager, Worker, among others), Discovery, Authentication and Monitoring respond to the core tasks of a service mesh approach, enabling the location of the Eblock, its secure interaction with other Eblocks and its own monitoring.

2.2 Eblock construction model

In this section the Eblock structure and its construction model are described by using a special notation to facilitate understanding. Every component of the Eblock is represented as follows:

  • PM: Processing Microservice

  • WM: Workload Manager

  • Dis: Discovery

  • Aut: Authentication

  • Mon: Monitoring

The construction of an Eblock is defined as follows:

$$\begin{aligned} \text {EB} = \{\text {PM, WM, Dis, Aut, Mon}\} \end{aligned}$$

As explained in Sect. 2.1, PM refers to the computing assignment that will be given to the Eblock (EB). An application or set of applications (depending on the work to be done by the Eblock) will form the PM component. Every component includes a communication interface that we call Port, used for exchanging messages between components. PM is defined as follows:

$$\begin{aligned} \text {PM} = \{\text {App1, App2}, \ldots , \text {Appn, Port}\} \end{aligned}$$

The application (App) or the set of applications (App1, App2,...,Appn) that form the PM component will have the following resources:

$$\begin{aligned} \text {App} = \{\text {Bin, Path, Input/Output}\} \end{aligned}$$

where Bin refers to the specific application that will be executed, Path refers to the App working directory, Input and Output indicate the directories where the App will read or write data respectively, if required. In our proposal, these directories can be managed by a distributed file system or a content delivery service (CDS), making storage independent of the physical server that hosts the Eblock.
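The PM and App notation can be made concrete as plain data structures. A minimal sketch where the field names mirror the notation above and all values (application name, directories, port number) are hypothetical examples:

```python
# Sketch: the App and PM definitions expressed as Python structures.
# Field names follow the paper's notation; the values are invented.

def make_app(bin_name, path, input_dir=None, output_dir=None):
    """App = {Bin, Path, Input/Output}."""
    return {"Bin": bin_name, "Path": path,
            "Input": input_dir, "Output": output_dir}

def make_pm(apps, port):
    """PM = {App1, ..., Appn, Port}."""
    return {"Apps": apps, "Port": port}

# A hypothetical cipher application reading from and writing to a CDS.
cipher_app = make_app("cipher", "/opt/cipher",
                      input_dir="/cds/plain", output_dir="/cds/enc")
pm = make_pm([cipher_app], port=5000)
```

Note that Input and Output point to CDS-managed directories, keeping storage independent of the physical server hosting the Eblock, as stated above.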

The Eblock’s role in a processing pattern must align with the task a user intends to execute. Some examples of Eblock roles include the following:

$$\begin{aligned} \text {Roles} = \{\text{Filter, Manager, Worker, Divide, Conquer, Combine}\} \end{aligned}$$

These roles enable the construction of various processing patterns, including Pipe and Filter, Manager–Worker, and Divide and Conquer. For instance, if an Eblock is designated as a Manager in a manager–worker pattern, it is expected that other Eblocks will be defined as Workers to establish this processing pattern. These processing patterns adhere to a service mesh approach, requiring the use of three essential components: Discovery, Authentication, and Monitoring. These components facilitate a secure coupling of different Eblocks, forming the processing pattern that incorporates monitoring services.

Considering this notation, a processing pattern can be defined from a set of predefined Eblocks as follows:

$$\begin{aligned} \text {Pattern} = \{\text {EB1, EB2}, \ldots , \text {EBx}\} \end{aligned}$$

The specific processing pattern is defined from the roles that were assigned to every Eblock. A set of patterns can be also combined using a service mesh approach. The definition of a new complex processing pattern is as follows:

$$\begin{aligned} \text {ServiceMesh} = \{\text {Pattern1, Pattern2}, \ldots , \text {Patterny}\} \end{aligned}$$
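The set notation above can also be sketched directly in code. In this illustration the role names come from the paper, while the dictionary layout and the Eblock/application names are assumptions:

```python
# Sketch: composing processing patterns and a service mesh from Eblocks,
# mirroring the Pattern and ServiceMesh definitions above.

def make_eblock(name, role, app):
    # Role determines the WM behavior; app is the PM's application.
    return {"name": name, "role": role, "app": app}

# A manager-worker pattern: one Manager distributing to two Workers.
pattern_mw = [
    make_eblock("EB1", "Manager", "splitter"),
    make_eblock("EB2", "Worker", "decipher"),
    make_eblock("EB3", "Worker", "decipher"),
]

# A pipes&filters pattern: two chained Filters.
pattern_pf = [
    make_eblock("EB4", "Filter", "semantic-analyzer"),
    make_eblock("EB5", "Filter", "risk-analyzer"),
]

# A service mesh is simply the combination of defined patterns.
service_mesh = [pattern_mw, pattern_pf]
```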

Figure 3 shows an Eblock with its internal components running as independent processes, all of which could be in virtual containers (e.g., Docker containers). The communication messages shown in this figure illustrate some of the tasks that can be executed by a running Eblock. For example, the PM component can: (a) request the Aut component to authenticate with the service mesh manager, (b) request the Dis component to register this Eblock in the service mesh manager, allowing it to be found by other Eblocks, (c) read or write data from or to a CDS, (d) couple with other Eblocks considering the role defined in its WM, (e) execute the App assigned to this Eblock, (f) send data to other Eblocks in a processing pattern, and (g) get monitoring data from the Mon component to verify the state of other Eblock components. Since every component can run as an independent process within an Eblock, each component must use its communication interface to exchange messages with the other components. As mentioned in the definition of the PM component, every component includes a communication interface, for example sockets, as shown in Fig. 3. Every component exposes a port number to establish socket connections. To facilitate the explanation, the port numbers in Fig. 3 were randomly assigned; these port numbers must be defined during the Eblock design and construction phase.
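The socket-based message exchange between components can be sketched as follows. The JSON message format and operation names are hypothetical; the source only states that components expose ports and exchange messages through communication interfaces such as sockets:

```python
# Sketch: newline-delimited JSON messages between two Eblock components
# (here PM asking Aut to authenticate), emulated over a local socket pair.
import json
import socket

def send_msg(sock, payload):
    sock.sendall(json.dumps(payload).encode() + b"\n")

def recv_msg(sock):
    buf = b""
    while not buf.endswith(b"\n"):
        chunk = sock.recv(1024)
        if not chunk:
            break
        buf += chunk
    return json.loads(buf)

# A connected pair stands in for the PM and Aut sockets on their ports.
pm_side, aut_side = socket.socketpair()
send_msg(pm_side, {"from": "PM", "op": "authenticate"})   # task (a)
request = recv_msg(aut_side)
send_msg(aut_side, {"from": "Aut", "status": "token-ok"})
reply = recv_msg(pm_side)
pm_side.close()
aut_side.close()
```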

Fig. 3

Eblock’s internal communication

3 An application model to create microservice systems integrating processing patterns

This section describes an application model to create processing patterns that will be integrated in a microservice system. The application model refers to an abstract representation or description of how a software application is structured, organized, and executed. In this sense, our application model addresses the implementations of Eblocks and processing patterns, including supporting software components that are required to execute and deploy the generated Eblocks and patterns. An application model was defined as part of our proposal as the Eblock structure was designed to interact with other Eblocks when defining processing patterns.

To facilitate the construction and deployment of a processing pattern using Eblocks, our application model considers the following supporting components (SC):

  • Interpreter: An application with a graphical user interface (GUI) that allows users to easily provide the input data used for the creation of Eblocks. This input data is converted into a JSON configuration file, which includes the data of all Eblocks involved in a potential processing pattern defined by the user, and determines the internal components to be added to each Eblock definition according to the role defined by the user.

  • Generator: It converts the JSON file obtained from the Interpreter into YAML files, where the required internal components of each Eblock are defined. Each YAML file describes a participating Eblock in a processing pattern, associating it with its partners based on its assigned role. Generator does not make decisions on how these Eblocks will be organized, as this information is determined by the processing pattern defined by the user through the Interpreter component.

  • Orchestrator: It establishes the execution priority of YAML files based on the roles assigned to the Eblocks. When deploying Eblocks involved in a processing pattern, their assigned roles determine a functional order. Eblocks that functionally depend on others should be deployed after the independent Eblocks they rely on.

These SC can be defined using the following notation:

$$\begin{aligned} & \text {SC} = \{\text{Interpreter, Generator, Orchestrator}\} \\ &\text{Interpreter}= \{\text{METADATApattern, APPint, JSON}\}\end{aligned}$$

where METADATApattern refers to the information given by users, using a GUI, to define an Eblock or set of Eblocks that will form a processing pattern. APPint indicates the interpreter that will be required to execute this pattern, and JSON contains, in a JSON file format, the definition of all Eblocks that will be involved in a processing pattern.

$$\text {Generator} = \{\text{JSON, APPgen, YAMLfiles}\}$$

where JSON refers to the JSON file obtained from Interpreter, APPgen indicates the application that will generate YAML files from the JSON file. YAMLfiles refer to a set of YAML files generated from the JSON file, where every YAML file represents an Eblock that will include the required components to represent a specific role in a processing pattern.
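A minimal sketch of this Generator step, turning a JSON description into one YAML-like manifest per Eblock. A real Generator would emit full Kubernetes manifests (likely through a YAML library); here the output is rendered by hand to keep the example self-contained, and the field and component names are illustrative assumptions:

```python
# Sketch: APPgen converting the Interpreter's JSON file into per-Eblock
# YAML-style manifests (file name -> manifest text).
import json

def generate_manifests(json_text):
    spec = json.loads(json_text)
    manifests = {}
    for eb in spec["eblocks"]:
        lines = [
            f"name: {eb['name']}",
            f"role: {eb['role']}",
            "containers:",
            f"  - pm: {eb['app']}",   # the PM's application
            "  - discover",           # service mesh core components
            "  - auth",
            "  - monitor",
        ]
        manifests[f"{eb['name']}.yaml"] = "\n".join(lines)
    return manifests

example = json.dumps({"eblocks": [
    {"name": "EB1", "role": "Manager", "app": "splitter"},
    {"name": "EB2", "role": "Worker", "app": "decipher"},
]})
files = generate_manifests(example)
```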

$$\text {Orchestrator} = \{\text{YAMLfiles, APPorq, Request}\}$$

where YAMLfiles refers to the YAML files generated by Generator, APPorq denotes the orchestrator application responsible for deploying the Eblocks in the correct order according to the defined processing pattern, and Request represents the instruction sent to the container manager for executing the configured processing pattern.
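The Orchestrator's ordering step, where dependent Eblocks are deployed after the Eblocks they rely on, can be sketched as a topological sort over the dependency information. This uses Kahn's algorithm; the dependency data is a hypothetical example:

```python
# Sketch: deriving a deployment order so that each Eblock is instantiated
# only after the Eblocks it functionally depends on.
from collections import deque

def deployment_order(depends):
    """depends maps each Eblock to the Eblocks it waits for."""
    indegree = {eb: len(deps) for eb, deps in depends.items()}
    dependents = {eb: [] for eb in depends}
    for eb, deps in depends.items():
        for d in deps:
            dependents[d].append(eb)
    ready = deque(sorted(eb for eb, n in indegree.items() if n == 0))
    order = []
    while ready:
        eb = ready.popleft()
        order.append(eb)
        for nxt in dependents[eb]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(depends):
        raise ValueError("cyclic dependencies between Eblocks")
    return order

# Workers depend on the manager; a trailing filter depends on a worker.
order = deployment_order({"EB1": [], "EB2": ["EB1"],
                          "EB3": ["EB1"], "EB4": ["EB2"]})
```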

Figure 4 depicts a schematic view of the phases for creating and deploying a processing pattern using Eblock structures and SC. In the first step, a user interacts with a GUI to declare necessary information for creating each Eblock involved in the processing pattern. These Eblocks include applications designated as PMs, the desired processing pattern type (e.g., Pipe and Filter, Manager–Worker, Divide-Conquer), and the role assigned to each Eblock. In the second step, a SC (Interpreter) produces a JSON file that describes all the user-defined Eblocks. The third step involves the Generator, which creates a set of YAML configuration files from the JSON file produced by the Interpreter. Each YAML file details an Eblock, specifying the necessary elements for playing its corresponding role in the defined processing pattern. Finally, the Orchestrator is responsible for providing instructions to the container manager (e.g., Kubernetes) to instantiate each Eblock in the correct order based on the defined processing pattern. The Orchestrator follows an Infrastructure as Code (IaC) approach, automatically configuring system dependencies and generating the necessary instructions to provision local and remote instances using the underlying virtual container manager.

Similar to Fig. 3, in Fig. 4 the port numbers used in the Eblock components are illustrative and are defined during the Eblock creation process.

Fig. 4

A schematic view of the application model to create microservice systems integrating processing patterns based on the Eblock structure

Algorithm 1 outlines the steps to encapsulate an App as part of a PM using a virtual container, specifically a Docker container in this example. Algorithm 2 details the procedure to deploy an Eblock as a processing pattern, potentially comprising one or more applications. In this example, Kubernetes serves as the container manager.

Algorithm 1

Virtual containerization of an application in a processing microservice component

Algorithm 2

Processing pattern deployment
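Since the algorithm listings are given as figures, their essence can be summarized as the shell commands they drive: building and publishing a Docker image for a PM's App (Algorithm 1), then handing the generated manifest to Kubernetes (Algorithm 2). The following sketch composes those commands as strings without executing them; the image name, registry, and file names are hypothetical:

```python
# Sketch: the docker/kubectl commands behind Algorithms 1 and 2,
# composed as strings for illustration rather than executed.

def containerize_app(app_name, context_dir, registry="registry.local"):
    """Algorithm 1: containerize an application for a PM component."""
    image = f"{registry}/{app_name}:latest"
    return [
        f"docker build -t {image} {context_dir}",  # build the PM image
        f"docker push {image}",                    # publish it for the cluster
    ]

def deploy_eblock(yaml_file):
    """Algorithm 2: hand the generated manifest to the container manager."""
    return [f"kubectl apply -f {yaml_file}"]

cmds = containerize_app("cipher", "./cipher") + deploy_eblock("EB1.yaml")
```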

Figure 5 illustrates an example of a JSON file structure generated by one of the supporting software components, the Interpreter. This file describes a processing pattern created by a user, where each defined Eblock has an assigned role. In this instance, the user requests the deployment of a customized Manager–Worker pattern, incorporating an additional element, a Filter (EB4), after a Worker (EB2). This configuration is visible in Fig. 5a under the definingPattern and roles sections. Each Eblock is associated with an execution role; for instance, \(Eblock_{1}\) (EB1) is assigned the manager role, while EB2 and EB3 function as workers, and EB4 serves as a filter. In this manner, a user can construct and define a processing pattern through a GUI, using a declarative file generated by the Interpreter. Other components of the application model, such as the Generator and Orchestrator, utilize this file to deploy the pattern correctly on a container manager. This action is facilitated by the information in the depends section of the JSON file, as illustrated in Fig. 5a. This section specifies the order of deployment for Eblocks based on their dependencies, providing all the necessary information for building a processing pattern within a service mesh approach. For instance, Fig. 5b showcases, in the spec section, all the components that should be generated in an Eblock with the role of Manager in a Manager–Worker pattern.

Fig. 5

Extract of a JSON file describing a processing pattern

If the Eblock is assigned a Manager or Divide role, the spec section in Fig. 5b includes a WM represented by lb in the container section. In contrast, if the Eblock is assigned a Filter, Worker, Conquer, or Combine role, the declarative file specification would omit a WM as it is unnecessary. The entries Discover, Auth, and Monitor in the container section represent core tasks of a service mesh approach, facilitating the Eblock’s location, secure interaction with other Eblocks, and self-monitoring.
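The role-conditional selection of an Eblock's internal components just described can be sketched as follows. The component names (lb, Discover, Auth, Monitor) follow the paper's declarative file; the function and application names are illustrative assumptions:

```python
# Sketch: building the container list of an Eblock's spec section from
# its assigned role, including the WM (lb) only for distributing roles.

CORE = ["Discover", "Auth", "Monitor"]  # service mesh core tasks

def containers_for(role, app):
    containers = [app] + list(CORE)
    if role in ("Manager", "Divide"):
        containers.append("lb")  # WM: only roles that distribute workload
    return containers

manager_spec = containers_for("Manager", "splitter")
worker_spec = containers_for("Worker", "decipher")
```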

4 Evaluation scenario

This section outlines the infrastructure, set of conditions, and criteria used to assess our proposal for creating microservice systems that integrate processing patterns using the Eblock structure within a service mesh framework. The primary objective of the evaluation was determined through a real-life case study, where a set of existing applications [27,28,29], collaborating in a traditional workflow, needed to be transformed into a microservices system capable of incorporating flexible processing patterns (parallel and distributed). This transformation, following a service mesh approach, aimed to enhance the overall performance of the system. The performance metrics required for this case study are defined in Sect. 4.2.

The system architecture of the preexisting applications, without employing our proposed approach, is illustrated in Fig. 6. Further details about these applications are provided in Sect. 4.3.

Fig. 6

Initial system architecture

Table 1 describes the computing infrastructure. The operating system is CentOS Linux 7 and the computer architecture is x86-64. In this infrastructure, a Kubernetes cluster was set up following the configuration described in Fig. 7.

Table 1 Computing infrastructure
Fig. 7

Kubernetes cluster configuration

4.1 Repositories

The set of documents to be analyzed and processed by the Semantic Analyzer, Risk Analyzer, Cipher and Decipher microservices consists of 2745 documents in txt format. These documents contain abstracts of articles on clinical research and clinical illnesses, with a total size of about 12 MB and an average file size of 4.0 KB.

4.2 Metrics

The performance metrics were derived from the requirements outlined in the case study introduced in Sect. 4. These metrics include the following:

  • \(RT = \sum \limits _{i=1}^n ST+FET_{t}\)

  • \(PRT = CT_{m} + RT\)

  • \(SThr = FS / PRT,\)

where RT represents the microservice response time, ST denotes the microservice service time, FET stands for the total file exchange time, n represents the number of microservice replicas, PRT signifies the overall pattern response time, \(CT_{m}\) is the microservices coupling time, FS denotes the file size, and SThr stands for the system throughput.
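A worked example of these metrics with hypothetical numbers, reading the sum in RT as covering both ST and \(FET_t\) for each of the n replicas (the grouping is ambiguous in the displayed formula):

```python
# Worked example of the metrics with invented values:
# two decipher replicas (n = 2), times in seconds, file size in MB.

n = 2
ST = 1.5      # microservice service time per replica (s)
FET_t = 0.5   # total file exchange time per replica (s)
CT_m = 0.8    # microservices coupling time (s)
FS = 12.0     # file size (MB), cf. the repository in Sect. 4.1

RT = sum(ST + FET_t for _ in range(n))  # response time: 4.0 s
PRT = CT_m + RT                          # pattern response time: 4.8 s
SThr = FS / PRT                          # system throughput: 2.5 MB/s
```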

4.3 Description of the applications used in the case study

  • Semantic analyzer: Service that analyzes each document and determines the topic that a document statistically represents. The analysis process involves identifying themes within a document set for the generation of a lexicon.

  • Risk analyzer: It calculates risk scores for defining risk mitigation actions by receiving content and context criteria identified by the previous service. It then compares this information with criteria established by the organization to determine if a document contains sensitive information. Finally, it computes the risk level (RL) based on each calculated risk score.

  • Cipher: In this service a symmetric encryption process is carried out by using a secure session key whose size depends on the calculated RL.

  • Decipher: Service that enables document deciphering, requiring specific parameters for execution, including the assigned risk level (RL), key size, and the name of the document to be deciphered.

  • Database: Service responsible for recording the data resulting from semantic analyzer, risk analyzer and cipher.

  • Metadata: Service that ensures the accurate addressing of requested resources. It is responsible for defining the HTTP request routes between the various services involved.

  • User interface: Service that provides users with access to the different services mentioned above. Users can define parameters to carry out processes such as topic identification, graph visualization, document ciphering, and document deciphering.

5 Experiments and results

Several experiments were carried out to evaluate the performance of Eblock structures for creating processing patterns following a service mesh approach. The experiments are based on the aforementioned case study, in which the resulting system with a set of microservices is deployed using different processing patterns. We compared the system generated using our Eblock approach versus the system generated using the Istio [15] platform.

5.1 Configurations for the case study

The case study involves deploying a system that integrates a set of microservices to provide added value to input data, represented by a collection of documents. Additionally, a content delivery service named SkyCDS [31] was deployed as an independent storage service to facilitate data exchange between microservices. Figure 8 illustrates the deployed system using an implicit service mesh based on the Eblock approach, while Fig. 9 depicts the system deployment using the Istio service mesh approach, highlighting one of the required components, the envoy proxy.

Fig. 8 System architecture deployed using Eblocks to communicate microservices

Fig. 9 System architecture deployed using Istio to communicate microservices

By default, a non-distributed pipes&filters pattern was defined. However, the Kubernetes cluster, consisting of two worker nodes as shown in Fig. 7, allows the system to be deployed in a distributed manner. Figure 10 illustrates the system configurations in which microservices are deployed on specific worker nodes, following different processing patterns in distributed and non-distributed environments. Although SkyCDS is omitted from this figure, in all cases it was deployed on the master node (i.e., the Disys0 node).

Fig. 10 Microservices system generated from the case study deployed using different processing patterns

Based on the manager–worker pattern shown in Fig. 10, in both the Eblock and Istio environments we deployed 2, 3 and 4 replicas of the decipher microservice to compare the efficiency of the manager–worker and pipes&filters patterns.

A relevant point to clarify concerns Fig. 10D, where a manager–worker pattern is illustrated: Istio is responsible for handling the communication with each replica deployed by Kubernetes. In this scenario, coordinated work is required in which users either execute a Kubernetes command (e.g., kubectl scale) or declare the number of replicas at system creation time in a declarative YAML file, so that Kubernetes creates the replicas on the same execution node. On this node, Istio enables communication with the replicas using a load balancer based on a round-robin algorithm. In contrast, the Eblock approach enables the creation of Eblock replicas on different worker nodes, as illustrated with the decipher microservice in Fig. 10F. The Eblock approach uses a load balancer based on a pseudo-random algorithm.
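The two load-balancing strategies contrasted above can be sketched as follows; the worker names and the use of Python's random module are illustrative assumptions, since Istio and the Eblock implementation each use their own internal mechanisms:

```python
import itertools
import random

workers = ["decipher-1", "decipher-2", "decipher-3"]

# Round-robin selection (as in Istio's default balancer): requests
# cycle deterministically through the available replicas.
rr = itertools.cycle(workers)
round_robin_picks = [next(rr) for _ in range(6)]

# Pseudo-random selection (as in the Eblock approach): each request
# is routed to a replica drawn at random; over many requests this
# also spreads the workload across the replicas.
rng = random.Random(42)  # seeded only to keep the sketch reproducible
random_picks = [rng.choice(workers) for _ in range(6)]

print(round_robin_picks)  # ['decipher-1', 'decipher-2', 'decipher-3', 'decipher-1', ...]
```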

5.2 Comparison of using Eblock and Istio approaches in the case study

Figure 11 illustrates the file transfer time measured between microservices and SkyCDS, showing similar values for the upload process and the SkyCDS publish event in each microservice. Differences are evident in the download process, particularly in the cipher microservice, which exhibits a wide standard deviation in this operation (Fig. 12). This variation could be attributed to the deployment of the SkyCDS service on a server, leading to queued requests. Consequently, the subsequent service (the decipher microservice) takes more time to download files (Fig. 11).

Fig. 11 File exchange time between microservices and SkyCDS

Fig. 12 File exchange time standard deviation

After deploying the system with Eblock and Istio approaches based on the architectures shown in Figs. 10A and B, specifically the pipes&filters pattern, we observed the microservices’ service times as depicted in Fig. 13 for a non-distributed environment and in Fig. 14 for a distributed environment.

Fig. 13 Microservice service time in a non-distributed environment (pipes&filters pattern)

Fig. 14 Microservice service time in a distributed environment (pipes&filters pattern)

According to the pipes&filters pattern, the time required to execute services in a distributed environment is higher than in a non-distributed environment due to message passing between microservices. This observation is evident in both Figs. 13 and 14. Furthermore, in the same figures, we note that the Eblock and Istio approaches do not exhibit a significant difference in their microservice service times using the pipes&filters pattern. This is confirmed in Figs. 15 and 16, which show the standard deviation of microservice service times; comparable values are observed in the cipher and decipher service times in both non-distributed (Fig. 15) and distributed (Fig. 16) environments.

Fig. 15 Standard deviation of microservice service times in a non-distributed environment (pipes&filters pattern)

Fig. 16 Standard deviation of microservice service times in a distributed environment (pipes&filters pattern)

In this case study, the manager–worker pattern presented in Fig. 10 was deployed using both the Eblock replica creation approach and the Kubernetes replica creation command (kubectl scale). In both cases, Eblock or Kubernetes replicas were scaled from 1 to 4 units, i.e., activating up to four decipher microservice workers. The pattern shown in Fig. 10C illustrates the creation of 1 to 4 decipher workers (only two are shown in the figure) on the same node (Disys9). In this configuration, all of the system’s microservices were deployed on the same node (Disys9), representing the non-distributed version. In the pattern shown in Fig. 10D, the decipher replicas were located on the same node (Disys5), while the other microservices were located on Disys9, representing the distributed version. These two system deployment versions correspond to the Istio approach.

Figure 10E depicts a non-distributed manager–worker pattern where Eblocks are replicated on the same node, while Fig. 10F illustrates a distributed pattern where decipher Eblocks are executed on both nodes (Disys9 and Disys5) to distribute the workload.

Figure 17 visualizes the differences between pipes&filters pattern deployments based on a single decipher microservice (one Eblock or replica) and manager–worker pattern deployments based on 2, 3 or 4 decipher microservice workers, in a non-distributed or distributed environment deployed with the Istio or Eblock approach (labeled as Istio scenario and Eblocks, respectively).

Fig. 17 Decipher microservice’s service time (execution of internal pattern)

We observe a minimal difference in service time between the Eblock and Istio approaches. Nevertheless, this figure highlights the difference between the non-distributed and distributed patterns, as well as the increased efficiency when microservice workers are replicated, which reduces the service time required to process the same repository.

Following the same workload distribution concept between Eblocks, an internal pattern is set up inside every decipher Eblock (from 1 to 4 Eblocks). This means that each decipher microservice executes n internal processes, where n is the number of decipher Eblocks in execution. In other words, if two decipher Eblocks are created, each Eblock runs two deciphering processes (see Fig. 18) in order to reduce service time. This is visualized in Fig. 17 with the InterIntra legends, in both a non-distributed (NonDist) and a distributed (Dist) configuration.

Fig. 18 Internal pattern execution inside an Eblock
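The internal manager–worker pattern described above can be illustrated with a minimal sketch; here decipher is a placeholder for the real deciphering application executed inside the Eblock, and the thread pool stands in for the Eblock's internal worker processes:

```python
from concurrent.futures import ThreadPoolExecutor

def decipher(document: str) -> str:
    # Placeholder for the actual deciphering application inside the Eblock.
    return document.upper()

def run_internal_pattern(documents, n_workers: int):
    """Run the internal manager-worker pattern: the documents assigned to
    an Eblock are shared among n internal worker processes (cf. Fig. 18)."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(decipher, documents))

results = run_internal_pattern(["doc-a", "doc-b", "doc-c", "doc-d"], n_workers=2)
print(results)  # ['DOC-A', 'DOC-B', 'DOC-C', 'DOC-D']
```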

From Fig. 17, we observe that the Eblock approach in a non-distributed environment, executing an internal manager–worker pattern (labeled as NonDist Eblocks InterIntra), exhibits the least service time compared to the other execution environments, followed by a distributed internal pattern (labeled as Dist Eblocks InterIntra) (Fig. 19).

Fig. 19 Microservices coupling times

Despite Eblock and Istio approaches having similar service times (see Figs. 13 and 14), the ability to execute a pattern inside an Eblock increases system efficiency and establishes a significant difference between both approaches, as illustrated in Fig. 17.

Figure 19 depicts coupling time required to join microservices. The Istio approach is observed to couple microservices faster, both in a non-distributed environment and in a distributed environment.

Despite the Eblock approach having a slower coupling time, this does not significantly impact the overall pattern response time (PRT), as the difference in this measurement is relatively small (see Fig. 20).

Fig. 20 Coupling times standard deviation

In Fig. 21 we analyze the decipher microservice’s service time, focusing on the Eblock and Istio approaches (excluding the 'Eblocks InterIntra' environments from Fig. 17).

Fig. 21 Decipher microservice’s service time (M–W: manager–worker)

In the pipes&filters pattern, we observe that the service time does not show a significant difference in both approaches, with overlapping service times.

In a non-distributed manager–worker pattern employing 2, 3, and 4 workers, the Eblock approach demonstrates shorter service execution times. However, in distributed manager–worker patterns, there is no consistent behavior, except for the observed trend that increasing the number of workers reduces the differences in service times between the two approaches. This phenomenon is particularly noticeable in the manager–worker pattern formed by 4 workers. One possible interpretation of this behavior is that Istio initially appears faster due to its round-robin load balancing algorithm; however, as the number of Eblocks increases, the workload distribution becomes more equitable due to the Eblock approach's pseudo-random algorithm.

Finally, Fig. 22 illustrates the calculated throughput of each deployed system, based on a processing pattern and using either the Eblock or the Istio approach. As described earlier, the Eblock approach executing an internal pattern (labeled as Eblocks ii), whether distributed or not, demonstrates the lowest service time (see Fig. 17, Eblocks InterIntra legends), thereby reducing the pattern response time (PRT) and, consequently, achieving a higher system throughput (STh) than the other configurations.

Fig. 22 System throughputs calculated for pipes&filters and manager–worker patterns

Figure 22 illustrates the superior performance of the Eblock approach compared to the Istio approach. This difference in performance is a result of the service time variances shown in Figs. 13 and 14, as system throughput (STh) depends on the response times of the microservices involved in the system.
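The relation between pattern response time (PRT) and system throughput (STh) used above can be expressed as a one-line computation; the numbers below are illustrative, not measured values, and we assume STh is the processed workload divided by the PRT, as the text suggests:

```python
def system_throughput(n_documents: int, pattern_response_time: float) -> float:
    """STh = processed workload / PRT: for the same repository,
    a lower pattern response time yields a higher throughput."""
    return n_documents / pattern_response_time

# Illustrative comparison: the same 100-document repository processed by
# two configurations whose pattern response times differ by a factor of 2.
sth_slow = system_throughput(100, 50.0)  # PRT = 50 s -> 2.0 docs/s
sth_fast = system_throughput(100, 25.0)  # PRT = 25 s -> 4.0 docs/s
assert sth_fast > sth_slow
```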

Table 2 Difference between Eblock and Istio approaches

In Table 2, we provide a summary of the differences between the Eblock and Istio approaches, considering the execution of patterns. A tick in the table indicates that the corresponding feature is supported by the approach. Notably, the Eblock approach provides features such as the ability to implement a pattern based on a combination of other patterns (PC), the capability to create microservice replicas in a distributed manner (DP), and the ability to establish an internal pattern within an Eblock (Int). The Istio approach lacks these features.

6 Related work

In the literature, two main strategies can be distinguished for deploying processing patterns in distributed systems that integrate containerized applications or microservices: a) those deployed by a container management system (e.g., Kubernetes) and b) those deployed by a service mesh (e.g., Istio). These strategies differ in scope and focus. For example, Kubernetes (K8s) is a container management platform that focuses on automating the deployment, scaling, and management of containerized applications. It provides a higher-level abstraction for managing infrastructure resources and deploying applications in a distributed environment. A service mesh (e.g., Istio), in contrast, focuses on the communication and interactions between services within a microservices architecture, addressing concerns such as traffic routing, security, observability, and policy enforcement without being primarily concerned with the deployment and management of containers.

6.1 Deployment of patterns using a container management system

Different design patterns for container-based distributed systems are described in [18]. The authors classified these patterns into those where containers are deployed on a single node and those where containers are deployed on multiple nodes. The patterns for a single node exhibit analogies with the pipes&filters and manager–worker patterns, while the patterns for multiple nodes share similarities with the divide&conquer and manager–worker patterns. It is important to note that this work did not consider the use of a microservice manager in the analysis.

An exhaustive analysis of the Kubernetes container orchestration platform was carried out in [21], describing the elements of its architecture and showing examples of deployments on such a platform. Two patterns were implemented: the pipes&filters pattern and the manager–worker pattern, the latter executed by using replicas of a microservice. The non-functional requirement to be addressed is the availability of the microservice in case of failure. While no service mesh was used in this work, it is noteworthy to highlight the usefulness of Kubernetes in deploying these two patterns.

An implementation of a pipes&filters pattern was made in [32], considering two processes running in different containers located in the same pod (where a pod is the smallest building block of an application in a Kubernetes cluster [17]). The pattern was deployed using a YAML configuration file, providing a pod with the ability to encrypt communications with other pods. An implementation of the pipes&filters and manager–worker patterns in Kubernetes at deployment time was presented in [14]. Although this work does not employ a service mesh, it serves to demonstrate pattern deployment.

Table 3 summarizes the characteristics of the aforementioned works, establishing relationships between the different patterns deployed and the container manager used. As the authors in [18] mention the use of containers but do not specify which manager was used, this work was not included in the table.

Table 3 Deployment of processing patterns using Kubernetes (K8s)

6.2 Deployment of patterns using a service mesh

A framework called SmartVM was described in [33]; it is a service capable of performing monitoring, discovery, and networking actions, as well as auto-scaling of workflows in a Docker Swarm environment. This framework enables the execution of tasks related to a service mesh, including service discovery and monitoring. The design pattern deployed when utilizing this framework is the manager–worker pattern. The prototype aimed to satisfy non-functional requirements such as concurrency and system performance.

One analyzed work that deploys the service mesh approach in three different layers is presented in [34]. In this setup, three meshes are deployed: one at the application level, another at the network level, and the last one at the hardware level. At the hardware level, interconnection protocols are utilized to establish the mesh, while at the network level, a software-defined network (SDN) is employed. Notably, a network orchestrator participates at this level, assuming the control plane role and leaving the data plane role to devices such as routers and switches. Finally, at the application level, the presented mesh closely resembles the architecture of a generic service mesh, relying on proxies for inter-service communication. The only pattern presented in this work, deployed at the application level, corresponds to the pipes&filters pattern.

A protection scheme to provide security to a mesh of services was proposed in [35]. Encapsulation methods were applied to the entities where network traffic circulates (proxies), and thus mitigate security risks in sending information between proxies. This approach allows visualizing the possibility of including systems or services within the pods themselves (Kubernetes), and these serve for specific tasks within the system (encryption and decryption). The pattern deployed in this solution was a pipes&filters pattern based on three layers (interface, business logic and database).

The Istio service mesh was used to test the deployment of applications based on the manager–worker pattern in [36]. Authors evaluated the maintainability of the microservices system within the Kubernetes environment and the use of Istio as a service mesh. Similarly, the work of [20] used the same service mesh (Istio) and Kubernetes as a container manager to deploy a manager–worker pattern. This approach was focused on evaluating the availability of microservices deployed in this container manager. In both studies, the deployment of the pattern was managed by Kubernetes, so that the service mesh was only involved in managing the communication between the services.

An open source service mesh called Network Service Mesh in a Kubernetes environment was employed in [37]. The implementation of this mesh equips Kubernetes with the capability to manage both local and remote connections to and from the microservices deployed in this environment. The architectural solution described involves an entity that communicates with the service mesh at the network level. This entity is positioned as a dependency within the Kubernetes pods deployed on each node of the cluster. In essence, this approach integrates an entity into each pod that communicates with the network services mesh, facilitating communication with services on the same node as well as services on other nodes.

The deployment of pipes&filters patterns on Kubernetes was proposed in [38], where special attention was paid to the latency between the filters. The authors proposed a filter placement method based on the analysis of telemetric data obtained from the service mesh (Istio). They created a graph to evaluate connections between filters, determining the closest node for relocating the filter and, consequently, reducing system latency.

In [39], a monitoring model based on a service mesh deployed in Kubernetes was implemented. The monitoring service operates at the pod level, where proxy entities are deployed for both inbound and outbound traffic flow. Additionally, monitoring entities, named handlers, are positioned at both ends of the proxy (inbound and outbound), along with a third entity called exporter. This approach allows for observing the dynamism of the pod, enabling the inclusion of services within it that can later be consumed externally by other services.

Table 4 Deployment of processing patterns using a service mesh approach

Table 4 provides a summary of works that showcase patterns deployed in a service mesh. It is noteworthy that the majority of these works focus on using either the pipes&filters pattern or the manager–worker pattern. Additionally, it is important to highlight that there are relatively few works related to the deployment of design patterns in a service mesh, as it is a nascent area of study that has recently gained prominence in research.

The contributions highlighting the ability to create specific and adaptable pods in Kubernetes, allowing a pod to include services encapsulated as virtual containers and form a more complex system with diverse tasks, have played a significant role in shaping the Eblock approach. Systems adopting this approach can incorporate independent services, and the results of these services are visible from outside the pod. In essence, a pod has the capability to include additional services beyond its main function, providing non-functional requirements that can be utilized by external systems or users.

7 Conclusions and future work

This paper presents a novel approach to constructing microservice systems that seamlessly integrate processing patterns through an implicit service mesh strategy. At the heart of this approach is a fundamental structural unit known as the Eblock. The Eblock serves as the cornerstone, abstracting the components involved in a processing pattern. This abstraction includes key elements such as PM and WM for creating processing patterns as microservice applications. Additionally, it encompasses integration components like Discovery (Dis), Authentication (Aut), and Monitoring (Mon), enabling the Eblock to perform essential tasks within a service mesh. These integration components facilitate implicit interactions among a group of Eblocks, adhering to a service mesh strategy. The proposed approach features an application model for crafting and deploying processing patterns, leveraging Eblock structures and SC, Interpreter, Generator, and Orchestrator. Notably, this approach empowers the creation of new processing patterns by combining different ones, a capability not currently available in existing service meshes.

The Eblock structure is designed to meet non-functional requirements within the system, eliminating the need for users to install a centralized service mesh. This not only reduces computing requirements but also shortens the learning curve associated with centralized service mesh tasks.

Our experimental evaluation was guided by a real-life case study, involving the conversion of an existing group of applications, which previously collaborated in a traditional workflow, into a microservice application that integrates processing patterns following a service mesh strategy. Two approaches were implemented for evaluation: one utilizing Istio, an open-source service mesh platform, and the other leveraging our Eblock-based approach.

The experimental results demonstrated that the Eblock-based approach enables the deployment of processing patterns using an implicit distributed service mesh. In this configuration, control components such as discovery, authentication, and monitoring are integrated into the Eblock structure, eliminating the need for an additional service mesh platform. These components are executed within each deployed Eblock, allowing the creation of microservice distributed replicas and consequently reducing the service time of a given service.

The experimentation highlighted a notable dependency of Istio on Kubernetes, particularly in the creation of more intricate processing patterns like manager–worker. Istio relies on Kubernetes to create microservices replicas, limiting this operation to the same node where the processing pattern is running. In contrast, Eblocks demonstrated versatility by enabling the creation of systems based on both pipes&filters and manager–worker patterns. The real-life case study underscored Eblock’s capability to transparently construct processing patterns without outsourcing this task to the container manager system.

Furthermore, the Eblock approach allowed the definition of internal patterns within each Eblock, resulting in reduced microservice service times and increased throughput for all systems.

To define a pattern using the Eblock approach, the application encapsulated in its PM component is launched using a REST API, simplifying the specification of execution parameters. While the REST API is a current requirement of this approach, a future implementation aims to introduce a generic API for consuming applications encapsulated in Eblocks. In the current implementation, users can define processing patterns (specifying the number of required replicas) at deployment time. A future enhancement will include a replication service to create replicas at execution time based on performance metrics.

Our proposal offers two primary contributions. First, it introduces a microservice system construction approach centered on decentralized structures known as Eblocks. These Eblocks empower users to seamlessly create processing patterns for integration into a microservice system. Second, our orchestrator scheme, following the IaC approach, translates user-designed structures into containerized applications (microservices). This results in the automatic realization of processing patterns, directing developers’ involvement primarily towards the system design phase.