1 Introduction

Services computing, or service-oriented computing, is a paradigm that emerged in the 2000s [1]. There are different definitions, but the common essence is to make use of “services,” that is, components that can be accessed over the network through published APIs. This allows for the rapid development of new applications by combining existing services, thus focusing more on application requirements than on implementation details. Such a principle of bridging the gap between business and IT had long been pursued in software engineering. The emergence of web-based services enlarged the potential with easier access methods and a notable number of publicly available services. Nowadays, the use of services has become common even in closed contexts, e.g., business applications built with the microservice architecture and smart home applications built with services provided by Internet-of-Things (IoT) devices. Cloud computing is the most successful application of services computing. The principles of services have thus been serving as an essential foundation of current and emerging computing paradigms.

From the research viewpoint, more challenging visions were investigated to realize automated service selection and composition. There has been enormous effort on automated techniques that focus on quality aspects such as service price and reliability, as well as the compatibility of exchanged data. We still see active research in the services computing community in key conferences such as ICSOC (Int’l Conf. on Service-Oriented Computing) and ICWS (Int’l Conf. on Web Services).

This monograph describes and reviews our work in two directions, specifically, the dependability of service composition in both the cyber space and the physical space. In the former case, i.e., web and cloud services, the primary challenge was the proper selection of service providers to achieve the best quality, assuming a non-trivial number of candidate providers or service plans. In the latter case, i.e., IoT services, the challenge is consistency, as the effects of different services can interfere with each other or affect the same user.

The experience in these directions provided excellent opportunities to explore both the functional and quality aspects of services. Since the research work was conducted around 2010–2015, research trends have changed rapidly, both in the world and for myself. I am now focusing more on the software engineering aspects, working with industry on automated driving systems and AI systems. However, the experience with services computing formed a solid foundation for my research.

In the remainder of this monograph, the research work on web and cloud services will be described in Sect. 2. The work on physical services will then be described in Sect. 3. Finally, a retrospective discussion will be given in Sect. 4, followed by concluding remarks in Sect. 5.

2 Service Composition in Cyber Space

2.1 Background Around 2010

The initiative for web services has been actively investigated since its emergence around 2000. It was driven by intensive effort on standard specifications for the remote integration of program components via Internet protocols and XML-based formats. Beyond detailed specifications such as SOAP and WSDL, the essential vision was to enable easy, rapid, and flexible realization of application goals by combining services published in the network, especially on the web, into a composite service; this paradigm was called service-oriented computing or services computing [1].

Although there were some efforts to compose a service fully automatically, given the input and the desired output and effect, this problem turned out to be too difficult, especially in terms of feasibility. The reason is that such full service composition requires candidate component services to have formal descriptions of their functions to enable the planning task, i.e., descriptions of input, precondition, output, and effect in a logical language with a shared ontology.

Therefore, the most common problem setting was service selection within a given workflow or business process of service composition. The assumption is that the workflow is given by human users, e.g., get an article from a news retrieval function and send it to a translation function. As there are many candidate services for the involved functions, the problem is how to select among them. This problem is computationally hard when we assume an enormous number of possible combinations of candidate services. There have been very active research studies since the first representative work on QoS-aware service selection [2] in 2003.

The standard problem of quality-aware service selection is illustrated in Fig. 1. In the figure, a workflow with sequential and parallel execution is shown. For each service type, or task in the workflow, there are multiple candidate services. We distinguish these services by quality of service (QoS) attributes such as price, availability, and response time. We can use the term SLA, service-level agreement, to refer to the QoS values that should be ensured by the providers. We consider the aggregated QoS of the workflow. For example, the aggregated price of the composite service can be calculated as the sum of the price values of the involved services, assuming all of them are executed in each invocation. Similarly, the aggregated reliability of the composite service can be calculated as the product of the reliability values of the involved services.

Fig. 1 Quality-aware service selection

In its simplest form, the baseline problem of quality-aware service composition can be described as follows. Note that we use a simplified formalization for illustration purposes in this chapter, and the definitions may differ from those in the original papers.

Definition 1. Quality-Aware Service Composition

Given a set of service candidates for each task or service type required in the workflow, we choose one of the candidates to maximize the overall quality of the workflow.

$$\begin{aligned} \texttt{max}\quad \texttt{OverallQuality}(services) \end{aligned}$$

where \(services=[s_1, s_2, \cdots , s_N]\) with \(s_i \in SC(i)\), N is the number of service types or tasks necessary in the workflow, and SC(i) is the given set of service candidates for service type i.

The OverallQuality of the workflow is obtained by integrating the quality aspects \(q \in Q\) in a weighted sum, where the weight of each q is denoted by w(q) and \(\sum_{q \in Q} w(q) = 1\):

$$\begin{aligned} \texttt{OverallQuality}(services)=\sum _{q \in Q} w(q) \texttt{Aggregate}(services, q) \end{aligned}$$

The Aggregate function depends on the quality aspect. For the price, it is the sum of the price values of the selected services, negated (as we “maximize” the quality):

$$\begin{aligned} \texttt{Aggregate}(services, \texttt{price}) = -\sum _{t \in ST} \texttt{price}(services(t)) \end{aligned}$$

where ST is the set of service types in the workflow and services(t) is the selected service for a service type t.

As another example, the overall availability of the workflow is the product of the availability values of the selected services, assuming all of them are used, i.e., no alternative services are involved:

$$\begin{aligned} \texttt{Aggregate}(services, \texttt{availability}) = \prod _{t \in ST} \texttt{availability}(services(t)) \end{aligned}$$
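To make the baseline concrete, the following is a minimal sketch (not the original implementation) of the weighted-sum objective with the price and availability aggregations above; the service names, QoS values, and weights are illustrative, and in practice the raw QoS values would typically be normalized before weighting.

```python
# Minimal sketch of the baseline quality-aware selection objective (Definition 1).
# Service names, QoS values, and weights are illustrative assumptions.
from itertools import product

# Candidate services per service type, each with price and availability.
candidates = {
    "news":      [{"name": "n1", "price": 3.0, "availability": 0.99},
                  {"name": "n2", "price": 1.0, "availability": 0.95}],
    "translate": [{"name": "t1", "price": 2.0, "availability": 0.98},
                  {"name": "t2", "price": 4.0, "availability": 0.999}],
}
weights = {"price": 0.4, "availability": 0.6}  # weights sum to 1

def aggregate(selection, q):
    """Aggregate one quality aspect over the selected services."""
    if q == "price":          # sum of prices, negated because we maximize
        return -sum(s["price"] for s in selection)
    if q == "availability":   # product of availabilities
        result = 1.0
        for s in selection:
            result *= s["availability"]
        return result
    raise ValueError(q)

def overall_quality(selection):
    return sum(weights[q] * aggregate(selection, q) for q in weights)

# Exhaustive search over all combinations (feasible only for tiny instances;
# metaheuristics are needed for realistic numbers of candidates).
best = max(product(*candidates.values()), key=overall_quality)
print([s["name"] for s in best], overall_quality(best))
```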

2.2 Different Quality Aspects in Service Selection

We conducted intensive research on quality-aware service selection on the web. The direction was to incorporate practical aspects into the standard problem of quality-aware service selection and to investigate technical solutions for the extended problems, which are more computationally demanding. Below, we give an overview of how the baseline problem was extended.

2.2.1 Probabilistic Selection

The work in [3] considered conditional contracts and usage patterns during service selection. For example, the SLA may declare that the ensured response time differs during working hours, e.g., 9 am–5 pm on weekdays. On the other hand, the client side, which makes the service selection, also has usage patterns, e.g., often using the services during night time for batch processing.

The baseline problem in Sect. 2.1 is extended so that each atomic quality value of a service, such as price(services(t)), is no longer a static constant but an expected value, which may be obtained by simulation, for example.
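As a rough illustration of this extension, the following sketch estimates an expected response time by sampling a hypothetical usage pattern against a conditional SLA; the hours, probabilities, and function names are assumptions for illustration, not taken from [3].

```python
# Illustrative sketch: estimating an expected quality value by sampling the
# client's usage pattern against a conditional SLA (hypothetical numbers).
import random

def sla_response_time(hour):
    """Hypothetical conditional SLA: a looser guarantee during business hours."""
    return 2.0 if 9 <= hour < 17 else 0.5  # seconds

def sample_invocation_hour():
    """Hypothetical usage pattern: 80% of invocations are night-time batch jobs."""
    if random.random() < 0.8:
        return random.choice(range(22, 24))
    return random.choice(range(9, 17))

def expected_response_time(n_samples=10_000):
    total = sum(sla_response_time(sample_invocation_hour()) for _ in range(n_samples))
    return total / n_samples

# The expected value replaces the static constant in the baseline objective.
print(expected_response_time())
```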

2.2.2 Combined Use of Functionally Equivalent Services

The work in [4] considered using multiple services for one service type. For example, we may keep two service candidates for one service type and invoke the second one when the first one does not respond. Or, we may invoke multiple services and adopt the fastest response. By considering such combined usage, we can create additional virtual service candidates for each service type.

The baseline problem in Sect. 2.1 is extended by changing the way of making the sets of candidate services. Given the original service candidates SC(i) for the service type i, we can extend the candidates with combined services:

$$\begin{aligned} SC'(i) = \bigcup _{ss \subseteq SC(i)} \texttt{combine}(ss) \end{aligned}$$

where combine generates different ways of aggregating functionally equivalent services. The quality functions such as price are extended as well to handle the combined services, e.g., the sum of the prices and the minimum of the response times in the case of parallel invocation.
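The following sketch illustrates, under simplified aggregation rules, how virtual candidates could be built from parallel combinations of functionally equivalent services; the attribute names and formulas are illustrative rather than the exact model of [4].

```python
# Sketch of building virtual candidates from combinations of functionally
# equivalent services; aggregation rules are simplified for illustration.
from itertools import combinations

def prod(xs):
    result = 1.0
    for x in xs:
        result *= x
    return result

def parallel(ss):
    """Invoke all services at once and adopt the fastest response."""
    return {
        "name": "par(" + "+".join(s["name"] for s in ss) + ")",
        "price": sum(s["price"] for s in ss),                # pay for every invocation
        "response": min(s["response"] for s in ss),          # fastest answer wins
        "availability": 1 - prod(1 - s["availability"] for s in ss),  # any one suffices
    }

def extended_candidates(sc):
    """SC'(i): original candidates plus parallel combinations of their subsets."""
    virtual = [parallel(list(ss)) for r in range(2, len(sc) + 1)
               for ss in combinations(sc, r)]
    return sc + virtual

services = [
    {"name": "a", "price": 1.0, "response": 3.0, "availability": 0.95},
    {"name": "b", "price": 2.0, "response": 1.5, "availability": 0.98},
]
for c in extended_candidates(services):
    print(c)
```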

2.2.3 Different Granularity

The work in [5] considered different granularity of service functions. For example, suppose there are two successive service types, “English newspaper download” and “translation to Japanese.” A single service may cover both service types if it provides “Japanese version download of an English newspaper.” The mathematical representation changes from the baseline one in Sect. 2.1 so that we select a service sequence for the whole workflow, not a service for each service type.

2.2.4 Network Quality and Location Awareness

The studies in [6,7,8] considered network quality and location awareness. One aspect is the latency between services, which may matter in data-intensive workflows. The other aspect is location diversity for higher availability, considering backup scenarios in which some of the best services are unavailable.

For the network latency aspect, we can extend the baseline problem in Sect. 2.1 by including the network quality in the optimization target:

$$\begin{aligned} \texttt{max}\quad \texttt{OverallQuality}(services) + w_{NET}\texttt{OverallNetworkQuality}(services) \end{aligned}$$

where \(w_{NET}\) refers to a weight to decide the balance of the service quality and network quality.

Here, for \(services=[s_1, s_2, \cdots , s_N]\),

$$\begin{aligned} \texttt{OverallNetworkQuality}(services)=\sum _{i=1}^{N-1} \texttt{Latency}(s_i, s_{i+1}) \end{aligned}$$

The second aspect, location diversity, will be discussed in Sect. 2.4.

2.3 Self-Adaptive Network-Aware Service Selection

As a concrete example, the work in [7] is briefly described. This work considered network awareness, or location awareness, by integrating the network latency and transfer rate into service composition. Although the standard QoS of each service includes the execution time, the actual response time is affected by the network latency, especially for data-intensive applications. It is therefore essential to consider this aspect in service selection, i.e., it can sometimes make sense to choose services nearby.

We employed a network model from networking research and also developed a custom genetic algorithm for service selection. Specifically,

  • A mutation operator is used to make a random change in the current solution candidate in the evolutionary process of a genetic algorithm. We made a custom mutation operator that replaces a service candidate selected in the current solution with another candidate located nearby (a minimal sketch is given after this list).

  • A crossover operator is used to make a new solution from two parent solutions in the evolutionary process of a genetic algorithm. We made a custom crossover operator that tries to “smoothen” the network flow. Figure 2 shows this process. The service locations are mapped onto two-dimensional coordinates, and we start with the two parents (the leftmost) chosen to create a new solution, called the offspring. To select the service for the i-th task, we look at the midpoint between the services selected for the \(i+1\)-th task in the two parents.

  • These custom operators and standard ones are used in an adaptive way by updating the probability of each operator during the evolutionary process.

  • Specific data structures, such as a K-D tree, were used to efficiently perform the above location queries.
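As referenced in the first bullet above, the following is a minimal sketch of a location-biased mutation operator; service locations are assumed to be two-dimensional coordinates, and a plain nearest-neighbor scan stands in for the K-D tree used in the actual implementation.

```python
# Minimal sketch of a location-biased mutation operator (in the spirit of [7]);
# a linear nearest-neighbor scan replaces the K-D tree for brevity.
import math
import random

def distance(a, b):
    return math.hypot(a["x"] - b["x"], a["y"] - b["y"])

def mutate_nearby(solution, candidates, k=3):
    """Replace the service of one randomly chosen task by one of the k candidates
    closest to a neighboring task's service (assumes at least two tasks)."""
    i = random.randrange(len(solution))
    ref = solution[i - 1] if i > 0 else solution[i + 1]   # location reference
    nearby = sorted(candidates[i], key=lambda s: distance(s, ref))[:k]
    new_solution = list(solution)
    new_solution[i] = random.choice(nearby)
    return new_solution

# candidates[i]: candidate services (with "x", "y", and QoS values) for task i;
# solution: one selected service per task.
```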

Fig. 2 Custom crossover operator in network-aware service selection (cited from [7])

Figure 3 shows an example of the evaluation results of the technique for network-aware service selection. If we put an extremely high weight on the network latency (the right side of the graph), the optimal path selection by Dijkstra's algorithm, which does not consider QoS, is slightly better. Otherwise, our technique, SanGA, outperforms the other approaches, including a genetic algorithm with straightforward network awareness (GA*).

Fig. 3 Example of evaluation on network-aware service selection (cited from [7])

2.4 Consistency in Service Selection

2.4.1 Problem Setting

Although there was a large amount of work on quality-aware service selection, a limitation was the assumption that candidate services have exactly identical functions, i.e., all candidates are compatible as long as the target task is the same. It is necessary to consider the consistency, or compatibility, of output-input connections between slightly different services.

In addition, the typical setting of service selection did not consider failures. It is of course possible to employ an adaptive mechanism at runtime to search for an alternative service after detecting a service failure. However, this may not be optimal, for example, when a service with no good alternative has been selected. This leads to an extension similar to the combined use of functionally equivalent services in Sect. 2.2.2, but now we also consider functionally compatible services.

These aspects were handled in the work in [8, 9]. We select a list of service candidates for each service type so that, at runtime, we can switch among them when the primary one is unavailable; the quality of such backup plans is explored in a probabilistic way during the selection procedure.

The baseline problem in Sect. 2.1 is now extended to select a list of candidate services for each service type:

$$\begin{aligned} \texttt{max}\quad \texttt{ExpectedOverallQuality}(services_{backup}) \end{aligned}$$

where \(services_{backup} = [S_1, S_2, \cdots , S_N]\) and \(S_i\) refers to a list of service candidates for the i-th service type.

We consider the compatibility constraint, i.e., the possibility that available services for the same service type may have slightly different interfaces. The selected service candidates \([S_1, S_2, \cdots , S_N]\) must satisfy \(\forall s_i \in S_i, s_{i+1} \in S_{i+1}.\ \texttt{Compatible}(s_i, s_{i+1})\). The compatibility may be defined with semantic web techniques that use formal ontologies, or at the minimum with the common semantics of programming languages, e.g., an integer output can be passed to a float input.

The quality is now considered as an expected value by taking the availability as the success probability of each service. For example, given a service candidate list \([s_{i1}, s_{i2}]\), the expected price for this service type is \(p_{i1}\,\texttt{price}(s_{i1}) + (1-p_{i1})\,p_{i2}\,\texttt{price}(s_{i2})\), where \(p_{i1}, p_{i2}\) are the success probabilities of the candidate services. Note that this is a simplified version, as we also employed a location-aware availability model, e.g., considering the fact that services in the same datacenter are likely to become unavailable at the same time.
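The expected-value computation generalizes to longer backup lists as in the following sketch, which omits the location-aware availability model; the prices and probabilities are illustrative.

```python
# Sketch of the expected price for an ordered backup list of candidates
# (simplified: no location-aware availability model).
def expected_price(backup_list):
    """backup_list: [(price, success_probability), ...], tried in order."""
    expected, p_reach = 0.0, 1.0   # p_reach: probability this candidate is tried
    for price, p_success in backup_list:
        expected += p_reach * p_success * price
        p_reach *= (1 - p_success)  # move on only if this candidate fails
    return expected

print(expected_price([(5.0, 0.9), (8.0, 0.95)]))  # 0.9*5 + 0.1*0.95*8
```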

2.4.2 Proposed Methods

To effectively deal with the compatibility aspect, we employed a clustering approach to efficiently traverse compatible services [10]. Figure 4 shows how the selection problem is modified to deal with the functional consistency problem. For each service type, candidate services are organized into clusters with compatibility relations between services. For example, S6 and S7 can be used as alternatives to the currently chosen one, S5, which intuitively means they require the same or less input and produce the same or more output.

Fig. 4 Service selection with functional consistency (cited from [10])

We also developed a custom genetic algorithm with the following features:

  • The QoS values are calculated in a probabilistic way, i.e., as the expected value, by considering the reliability of each service candidate.

  • In order to assess the reliability, locations of service candidates are considered, i.e., service candidates in the same region can fail at the same time.

  • Custom mutation and crossover operators are used to prioritize service candidates with more location diversity.

  • A custom step is added to the evolutionary process in which incompatible combinations of services are sometimes replaced with compatible ones. This computation is efficiently done with the cluster structure shown in Fig. 4 (a minimal sketch of this repair step is given after this list).
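The following sketch, referenced in the last bullet, illustrates such a repair step; the compatibility relation and the cluster lookup are abstracted into functions with illustrative names.

```python
# Sketch of the repair step: incompatible adjacent services are replaced with
# compatible alternatives retrieved from the cluster structure (abstracted here
# as the `alternatives` function; names are illustrative).
def repair(solution, compatible, alternatives):
    """solution: one service per task; compatible(a, b) -> bool;
    alternatives(task_index, service) -> services in the same cluster
    (requiring the same or less input, producing the same or more output)."""
    for i in range(len(solution) - 1):
        if not compatible(solution[i], solution[i + 1]):
            for alt in alternatives(i + 1, solution[i + 1]):
                if compatible(solution[i], alt):
                    solution[i + 1] = alt   # swap in a compatible alternative
                    break
    return solution
```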

Figure 5 shows the tool interface for this extended service selection. QoS values are shown together with backup plans, and location diversity is explored. The proposed algorithm uses multi-objective optimization to produce Pareto-front solutions, i.e., solutions with different prioritization over multiple evaluation criteria. Users can choose among solutions such as “the best quality in the normal plan but poor quality in the backup plans” or “so-so quality in both the normal and backup plans.”

Fig. 5 Tool interface for QoS-aware service selection with backup plans and location diversity (cited from [8])

Figure 6 shows an example of the evaluation results of the custom algorithms (SHUURI and SHUURI\(_2\)). The problem becomes more difficult when the service compatibility is more limited (the horizontal axis), and the proposed algorithm, SHUURI\(_2\), outperforms the others in optimization performance measured by hypervolume, a common criterion for evaluating Pareto-front solutions.

Fig. 6 Example of evaluation on robust and consistent service selection (cited from [8])

2.5 Service Selection in Cloud Computing

Cloud computing emerged as the new paradigm following the trend of services computing. The problem of selecting infrastructure services for computational resources also became a central problem, as practical cloud services offer many plans with different qualities, such as CPU speed and memory size, even within a single service provider. We also investigated algorithms for selecting cloud services. The work in [11] considered cloud service selection for workflow applications with deadline constraints by extending ant colony optimization algorithms. We also worked on the consolidation of virtual machines [12].

3 Service Composition in Physical Space

3.1 Background Around 2015

Besides the intensive work on web and cloud services, the Internet of Things (IoT) and smart cities, including smart homes, smart offices, etc., attracted wide attention in the 2010s. Given the increasing capability of sensors and actuators, more and more applications were investigated as combinations of functions provided by such devices, which can be seen as service composition in the physical world.

Similar to web service composition, the workflow that combines multiple services is described in a high-level language, e.g., Node-RED. In the case of physical services, the length of the workflow is rather limited, and the key characteristic is event-driven behavior that responds to environmental events, e.g., user movement. Event-based behavior descriptions are also used instead of workflow-based ones, e.g., in sensiNact [13]. With sensiNact, service composition can be specified as ECA (event-condition-action) rules in the form of “ON event IF condition DO action.” Such rules are also called trigger-action programming [14].
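To give a flavor of trigger-action programming, the following is a minimal sketch of an ECA rule representation and dispatch loop; it is not the sensiNact or Node-RED API, and the event and device names are made up for illustration.

```python
# Minimal sketch of ECA ("ON event IF condition DO action") rules; names are
# illustrative and unrelated to the sensiNact API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ECARule:
    event: str                         # e.g., "user_entered_room"
    condition: Callable[[dict], bool]  # evaluated against the current context
    action: Callable[[dict], None]

rules = [
    ECARule(
        event="user_entered_room",
        condition=lambda ctx: ctx["user"]["authorized"],
        action=lambda ctx: print(f"show shared calendar on display in {ctx['room']}"),
    ),
]

def on_event(event, ctx):
    for rule in rules:
        if rule.event == event and rule.condition(ctx):
            rule.action(ctx)

on_event("user_entered_room", {"user": {"authorized": True}, "room": "Room1"})
```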

3.2 EU-Japan Smart City Projects

We worked in the context of two EU-Japan projects, ClouT and BigClouT. The projects aimed at providing a reference architecture and its implementation for making use of web, cloud, and physical services in smart cities. The architecture and its implementation were holistic, covering the infrastructure, platform, and software levels, as in the common layers of cloud computing, i.e., we had smart-city versions of IaaS, PaaS, and SaaS integrating not only cloud resources but also sensor and actuator devices, as well as humans acting as sensors and actuators.

Service composition was one of the key aspects of the City-PaaS in the projects. In addition to the web and cloud service composition mechanisms presented in Sect. 2, we investigated supporting tools for physical service composition at development time and runtime.

3.3 Consistency in Physical Service Composition

The essential difference between physical services and web or cloud services is the interaction among multiple users and multiple composite applications. In other words, the effects of services can be shared among different users in the same physical place, thus potentially leading to inconsistency or undesirable situations. It is thus necessary to deal with a different type of consistency from that for web and cloud services.

As a simple scenario, consider a smart office system that supports presentation of slides and electronic posters, demonstration of tools, and discussion in a room (Fig. 7). This system is expected to support both presenters and audiences, often without explicit commands from them while preventing undesirable situations. In this section, a very small part is discussed to quickly illustrate the difficulties with ECA rules.

Fig. 7 Example scenario of smart office system

An example of the specifications of this system is shown in Fig. 8, regarding the simple usage of shared displays. It includes requirements on the system, R1 and R2, as well as behavior specifications (ECA rules) to meet the requirements, B1 and B2.

Fig. 8 Example specifications with potential conflict

The example specifications are not satisfactory in the sense that the set of behavior specifications B1 and B2 does not meet requirement R2. In fact, behavior B1 can start to show the information on the display even when there is already an unauthorized user nearby. This situation means that there is a conflict between R1 and R2, i.e., they cannot both be met as they are (without any restrictions). If a decision is made to put higher priority on R2, B1 and R1 are then modified by adding a constraint: “only if there is no unauthorized user nearby.”

This conflict can only be detected by considering specific test scenarios, either executed in the physical environment, in a simulation model, or even in the engineer’s mind. It may thus be overlooked by engineers, and it is essential to have automated, systematic support to detect such scenarios or potential conflicts.

3.4 Verification Framework

Our work investigated the modeling of physical effects and verification to detect potential conflicts [15,16,17]. Figure 9 describes the framework. The left side shows the three elements of the input, and the right side shows the one element of the output. The dashed rectangle denotes the boundary of the tool: users of the framework do not need to look inside it. This architecture is defined to bridge the gap between practical domain-specific representations for smart space applications and the formal inputs required by model checkers.

Fig. 9 Overview of proposed framework for consistency verification of physical service composition

3.4.1 Underlying Formal Modeling

We developed a formal modeling framework to capture the essence of smart space services by abstracting away the implementation details. The core idea is to model the physical effects of services on users, such as “see” and “hear.” Such effects may or may not be active for a user depending on whether the user is inside the “scope” of the service, i.e., close enough to the device.

Figure 10 illustrates a few examples of the abstract formal models as described below.

  • The left figure denotes the situation of the example scenario, where a user comes near and is able to see the display that has been activated for another user. This is explained by inclusion of the two users in the scope for visual interaction with the display device.

  • The middle figure denotes a situation of sound conflicts, where a user hears different sounds from different audio devices and becomes uncomfortable, e.g., when a movie player is automatically activated while a recipe reader is running in a smart home application. This is explained by the inclusion of the user in the two overlapping scopes for audio interaction with the two devices.

  • The right figure denotes a situation in which a user sees different direction instructions in a smart museum application. This is explained similarly by inclusion of the user in the two overlapping scopes for two visual devices.

Fig. 10 Scope-based modeling of physical services

These examples include conflicts that can occur depending on the relationships between users and scopes, i.e., user inclusion, or between scopes, i.e., scope overlap. By modeling and examining such relationships explicitly, implicit assumptions on device layout or potential conflicts can be clarified.
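As a rough illustration, the following sketch flags a potential conflict when a user is included in two overlapping scopes whose active effects share the same modality; the circular scopes and the device names are simplifying assumptions.

```python
# Sketch of scope-based conflict detection: a user covered by two active scopes
# with the same effect modality (e.g., AUDIO) indicates a potential conflict.
import math

def in_scope(user, scope):
    return math.hypot(user["x"] - scope["x"], user["y"] - scope["y"]) <= scope["radius"]

def conflicts(users, active_scopes):
    """active_scopes: dicts with device, effect modality, position, and radius."""
    found = []
    for u in users:
        covering = [s for s in active_scopes if in_scope(u, s)]
        for i in range(len(covering)):
            for j in range(i + 1, len(covering)):
                if covering[i]["effect"] == covering[j]["effect"]:
                    found.append((u["name"], covering[i]["device"], covering[j]["device"]))
    return found

users = [{"name": "alice", "x": 1.0, "y": 1.0}]
scopes = [{"device": "speaker1", "effect": "AUDIO", "x": 0, "y": 0, "radius": 3},
          {"device": "speaker2", "effect": "AUDIO", "x": 2, "y": 2, "radius": 3}]
print(conflicts(users, scopes))  # alice is exposed to two audio sources at once
```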

3.4.2 Verification via Model Checking

Once we have the formal model of the smart space and its services, we can explore the possible state transitions. The state transitions are represented in an abstract form, for example: a user enters the scope of one service; then, the physical effect of the service becomes active; after that, another user enters the same scope; finally, the physical effect of the original service is overridden by a newly activated one.

Model checking is an approach for exhaustively exploring the possible state transitions for verification [18]. SPIN is one of the popular tools for model checking [19]. The primary input to SPIN is the state transition system to explore, specified in a dedicated language called Promela. The other key input is what we want to verify. This can be given by a command, e.g., to detect deadlocks, or by properties specified in temporal logic. Typical properties include safety, showing that some undesirable state is never reached, and liveness, showing that some desirable state will eventually be reached.

It has been a common approach to prepare a translation mechanism from a language that engineers are familiar with, such as UML or domain-specific languages, into a language used by a model checker, such as Promela. This approach is effective in our context as well. Engineers prefer to describe ECA rules in domain-specific languages, and we can support model checking by providing a translation mechanism. We can also provide support for typical properties to verify, such as conflicts of sounds in the same space.
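The following sketch hints at what such a translation could look like, generating a small Promela fragment from a rule description; the mapping and variable names are illustrative and far simpler than the actual translator, and the referenced booleans are assumed to be declared elsewhere in the Promela model.

```python
# Illustrative sketch of generating a Promela fragment from an ECA rule; the
# boolean variables (user_entered, user_authorized, display_active) are assumed
# to be declared globally in the surrounding Promela model.
def eca_to_promela(name, event, condition, action):
    return (
        f"active proctype {name}() {{\n"
        f"  do\n"
        f"  :: ({event} && {condition}) -> {action} = true\n"
        f"  od\n"
        f"}}\n"
    )

print(eca_to_promela(
    name="ShowCalendar",
    event="user_entered",
    condition="user_authorized",
    action="display_active",
))
```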

3.4.3 Integration with sensiNact

We implemented the architecture in Fig. 9, including the transformation function from ECA rules of the sensiNact platform to Promela for the SPIN model checker. In the sensiNact platform, ECA rules are specified with REST APIs. For example, the action part of an ECA rule may refer to an invocation of the speaker service as speakerService1.play.act(). We need a mapping from this implementation-level description to the formal model. Specifically, we need metadata including the effect of each API, e.g., AUDIO, as well as the scope of the effect, e.g., Room1.

3.5 Runtime Adaptation

The verification framework allows for detecting potential inconsistencies in applications of physical service composition specified with ECA rules. This task is expected to be conducted at development time by software engineers. As a more advanced use case, we also worked on runtime mechanisms for automated self-adaptation that detect and resolve potential inconsistencies when a new application of physical service composition is deployed by end users.

This runtime adaptation is implemented with the models@run.time approach [20]. In the models@run.time approach, the system makes use of the models used at development time for monitoring and adaptation at runtime. This approach is significant as more and more systems face increasing uncertainty, i.e., we cannot precisely predict everything that occurs in operation, be it in the physical environment, user behavior, or black-box AI behavior.

In our case, we already had a framework for formal modeling and verification aimed at supporting engineers at development time. This mechanism can be exploited at runtime, for example, as follows:

  1. The user installs a new application, which is written in the implementation language, e.g., for sensiNact, and is accompanied by the metadata.

  2. The formal model of the installed applications and the environment is updated with the new application.

  3. Model checking is conducted, and a conflict scenario, if any, is detected.

  4. The user is asked to resolve it by providing priorities on the conflicting applications. We may iterate by going back to Step 3 until all the conflicts are resolved.

The critical difficulty here is the burden imposed on the end user. One implementation we chose was the use of priorities between applications or ECA rules. We can prepare a mechanism to rewrite the ECA rules according to the priority configuration. For example, we can produce a modified rule, “close the window if it is raining, only if the CO2 density of the room is not too high,” if the safety app, which monitors the CO2 density, has a higher priority than the comfort app, which monitors the weather.
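The following sketch shows one way such priority-based rewriting could be realized, strengthening the condition of the lower-priority rule with the negation of the higher-priority guard; the rule contents mirror the example above, but the representation itself is an assumption for illustration.

```python
# Sketch of priority-based rule rewriting: the lower-priority rule yields to the
# higher-priority one by conjoining the negation of its guard (illustrative).
def yield_to(low_priority_rule, high_priority_guard):
    """Strengthen the condition of the lower-priority ECA rule."""
    rewritten = dict(low_priority_rule)
    rewritten["condition"] = (
        f"({low_priority_rule['condition']}) && !({high_priority_guard})"
    )
    return rewritten

comfort_rule = {"event": "raining", "condition": "true", "action": "close_window"}
safety_guard = "co2_density_high"   # guard of the higher-priority safety app

print(yield_to(comfort_rule, safety_guard))
# close the window when raining, only if the CO2 density is not too high
```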

There can be variety in how such an adaptation mechanism is implemented. For example, we may deploy a simple conflict detection that only checks the device state, e.g., open versus closed, without looking at the state transitions. This is much more lightweight but may lead to overly strict checks, such as reporting “open the window” in the morning and “close the window” at night as a conflict.

To support such variability, we implemented the adaptation mechanism in a generic way via an API. Specifically, the adaptation mechanism is separated as a component, and it works with APIs provided by a platform, such as getCurrentModel, addNewRule, and checkConsistency.
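The following sketch outlines how the adaptation component could drive these APIs; the API names come from the text above, while the signatures and the user-interaction step are assumptions for illustration.

```python
# Sketch of the adaptation component built on the platform API named above
# (getCurrentModel, addNewRule, checkConsistency); signatures are assumed.
def install_with_adaptation(platform, new_rule, resolve_with_user):
    platform.addNewRule(new_rule)
    conflict_scenarios = platform.checkConsistency()   # Steps 2-3 above
    while conflict_scenarios:
        # Step 4: the user assigns priorities; resolve_with_user is assumed to
        # rewrite or re-register the affected rules on the platform accordingly.
        resolve_with_user(platform, conflict_scenarios)
        conflict_scenarios = platform.checkConsistency()
    return platform.getCurrentModel()
```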

4 Retrospective Discussion

4.1 Services Computing

In this monograph, we reviewed our research in the services computing area. One direction was service composition on the web, which focused on (constrained) optimization problems assuming a large number of services with different QoS values. The other direction was service composition in smart spaces, which focused on the consistency problem.

Even though both directions focused on the same concept of services, the underlying technical assumptions and thus the applied techniques were different. The primary assumption for web services is that services executed by different users do not affect each other. On the other hand, the essential common characteristic is the focus on application-level goals by abstracting away implementation details. QoS aspects are essential in both types of services, though we did not work on QoS optimization problems in IoT or fog computing [21].

The initial vision of services computing, flexibly combining services provided by various providers in the open network, turned out to be more or less impractical. This is because people did not choose to provide rich annotations, or even machine-readable descriptions of APIs, for fully automated service selection and composition. However, the vision was successfully employed in cloud computing, where the services are simple and standardized or virtualized. In addition, the technical approaches to quality modeling and problem formulation have been leveraged even when we do not consider millions of candidate services. In this sense, the contributions from the 20 years of services computing remain essential.

4.2 Impact on the Author

The experiences with these two different directions have established a solid research foundation for the author, that is, the investigation of application-level dependability goals with different types of automated techniques. The insights obtained from these experiences have helped the author tackle challenges in different domains such as automated driving systems [22,23,24,25], automated delivery robots [26,27,28], and games-as-a-service [29]. We have been making use of optimization techniques as well as formal verification techniques to deal with various quality aspects, though the systems are monolithic, and we focus more on software engineering aspects such as optimization-based test generation. For example, for automated delivery robots, we are exploring different types of risk, cost, and value metrics with optimization techniques.

5 Concluding Remarks

In this monograph, we have reviewed our research in the services computing area around the 2010s. The author believes the past work has contributed to establishing the foundation of various current studies such as fog computing and microservices, even if the proposed techniques may not fit perfectly with current practical environments.

The services computing communities are still very active in Japan and worldwide, building on the accumulated insights into the engineering of service composition as well as quality modeling and analysis. On the other hand, there have been different approaches to quickly realizing application goals, such as (monolithic) AI systems including deep learning approaches and large language model (LLM) approaches such as ChatGPT [30]. It is very attractive to discuss the roles and directions of services computing alongside these emerging approaches.