1 Introduction: Toward Open Clouds

Sovereignty in the digital world has emerged as one of the most discussed topics in Europe in recent years, and these discussions have recently materialized in the Franco-German-initiated Gaia-X project, which started with 22 founding members and incorporated over 200 international day-one members at its official launch. This shows that there is strong interest from industry.

Data spaces are one central building block that will allow organizations to share data within a well-defined, policy-controlled trust boundary and therefore provide the cornerstone for a flourishing ecosystem of novel use cases and business opportunities.

Google Cloud is committed to Europe, and our strategy is very much aligned with the objectives of Gaia-X: openness, security, trust, transparency, and federation. This is reflected in our partnerships with European companies, our focus on security and trust, and in how we address sovereignty demands in our products and solutions.

From our perspective, two pillars are of utmost importance to meet the requirements of European organizations: providing technological excellence, including access to emerging technologies, and at the same time having a deep understanding of their priorities when it comes to digitally transforming their businesses. These need to be balanced against the workload-specific sovereignty requirements of the particular organization.

When we divide the “X” in Gaia-X into two halves, two important developments become visible that will guide the individual digitization journeys of European organizations. First, there is the “Next Generation of trustworthy Data Infrastructure” envisioned by the lower half, and second, there is the “Data Economy” constituted by the upper half, which will allow the implementation of new data-driven use cases and business models. These are guard-railed by core services like the Federated Catalogue and the identity and trust layer, as well as by the policy rules, the architecture of standards, and interoperability standards, to ensure that related systems are in line with European values.

1.1 Openness

We strongly believe in open-source software (OSS) and open clouds [1], because these provide users additional degrees of freedom when it comes to “vendor lock-in” and reduce their dependence on a single provider. Experience has shown that even in the best of cases, enterprise IT can be rigid, complex, and expensive.

Conversations with European customers that have a strong on-premises IT footprint indicate that they would like to take advantage of the innovative capabilities that the public cloud offers but are worried about ending up in “just another lock-in.”

Current industry trends indicate that multi-cloud and hybrid-cloud solutions are the future—not just for big multinational corporations but also for small- and medium-sized businesses. Technology users should be able to build, port, and deploy their applications across platforms—cloud or on-premises. Open source, portable workloads, and open APIs are cornerstones of this approach. Instead of tethering customers to proprietary technology stacks, products, and solutions, providers should leverage open-source technologies where it makes sense. An example is Google Cloud Anthos, which allows customers to manage their applications across different clouds, including those of our competitors, on-premises data centers, and the edge.

Some OSS projects have already proved that changing how industries work does not necessarily require a commercial interest. Kubernetes, Istio, and TensorFlow are just a few very successful examples of OSS initiated and open sourced by Google that gained wide adoption globally. The additional benefit of openness in the cloud ecosystem is that it gives cloud users greater control and flexibility while enabling healthy competition and unlocking new partnerships.

Initiatives like Gaia-X should act as an enabler for wider cross-organizational collaboration while at the same time allowing organizations to make sovereign choices about the technology stacks that they want to adopt. To allow accelerated growth, organizations should in certain cases be able to pivot to global platforms if that is where they get access to the latest innovations. This should, however, happen under well-defined rules and in accordance with European values. The work on the policy rules and architecture of standards for Gaia-X has begun to leverage existing, widely supported European frameworks, e.g., for portability and interoperability, which we welcome.

1.2 Security and Trust

Cloud users expect their providers to offer the highest level of security, and Google Cloud has invested heavily in this domain since its inception. One external indication that our hard work is respected in the industry is “The Forrester Wave™: Infrastructure as a Service (IaaS) Platform Native Security, Q4 2020” report, which listed Google Cloud as a leader.

While security can be a strong differentiating factor, it is still something that users “just expect” to be present. But even with best-in-class security, no one will decide to use a specific technology or platform if the second ingredient is missing—and this ingredient is trust.

Users need to trust their providers and their behavior. They must be sure that providers respect their free choice and don’t add artificial barriers that would make it harder for them to move their data or switch software systems. We agree that customers should have the strongest level of control over their data that is stored and processed in cloud environments. To increase transparency even further, customers need evidence through third-party audits, certifications, and attestations, as well as through additional transparency reports.

International Data Spaces (IDS), as the enabler of the Data Spaces Economy within Gaia-X, has the potential to foster the uptake of cloud usage in general.

In order for this endeavor to be successful, the compliance rules for Gaia-X should be founded on widely accepted certifications and attestations that customers and service providers already work with successfully, such as ISO and BSI C5. As we contribute to international standardization bodies ourselves, we believe that it is necessary to also consider the standards work from other global regions. This would lead to wider acceptance and would help establish and strengthen the perception of this European project internationally.

Our commitment goes beyond certifications, attestations, and reports. We provide guidance, documentation, and legal commitments to help our customers align with laws, regulations, and other frameworks. To this end, we actively contribute to the implementation of a Cloud Code of Conduct based on the General Data Protection Regulation (GDPR) and advocate for a more modern approach to data security policies.

1.3 The Pillars of Digital Sovereignty

In order to practically address digital sovereignty [2] requirements in the cloud, we see three important pillars that need to be taken into consideration, which are illustrated in Fig. 25.1.

Fig. 25.1 The three pillars of sovereignty. © 2020, Google

1.3.1 Data Sovereignty

Customers need to be provided with mechanisms to prevent providers from accessing their data, except where the customer explicitly approves access for specific provider behavior that they deem necessary.

Examples of such customer controls include storing and managing encryption keys outside the cloud, giving customers the power to only grant access to these keys based on detailed access justifications, and protecting data-in-use. Such controls allow the customer to be the ultimate arbiter of access to their data.

1.3.2 Operational Sovereignty

Depending on the industry that an organization operates in, there might be a requirement for further controls over the provider’s operations. With these capabilities, the customer benefits from the scale of a multi-tenant environment while preserving a level of control similar to a traditional on-premises environment.

Examples of such controls include restricting the deployment of new resources to specific provider regions and limiting support personnel access based on predefined attributes such as citizenship or a particular geographic location.

1.3.3 Software Sovereignty

The demand to control the availability of workloads and to run them wherever an organization wants, without being dependent on or locked into a single cloud provider, has increased steadily. This includes discussions around the ability to survive events that require an organization to quickly change where its workloads are deployed and what level of outside connectivity is allowed.

This is only possible when two requirements are met, both of which simplify workload management and mitigate concentration risks: first, when users have access to platforms that embrace open APIs and services and, second, when they have access to technologies that support the deployment of applications across many platforms, in a full range of configurations including multi-cloud, hybrid, and on-premises, using orchestration tooling.

Examples of these controls are platforms that allow users to manage workloads across providers and orchestration tooling that allows them to create a single API that can be backed by applications running on different providers, including proprietary cloud-based and open-source alternatives.

1.4 Partnering with European Companies

Building the critical skill sets in Europe and increasing the amount of technology knowledge has to be an important element of any successful European sovereignty strategy. As outlined above, Google is committed to open source. By partnering with European companies, their employees gain access to open-source and state-of-the-art technologies, which will not only provide for more independence but also allow them to increase their knowledge and own market value while at the same time increasing the organizations’ footprint in the local markets.

We are also fully committed to contributing to the goals and values of Gaia-X and to the European Digital Data Ecosystem by providing best-in-class technology to our customers in Europe:

  • By investing further in capital, engineering, and go-to-market resources in Europe

  • By implementing the highest data privacy, data residency, and security standards

  • By working with policymakers and partners to meet their specific requirements

We will bring our expertise to the table, but we also want to listen and learn from our European customers and partners around the table.

We envision these kinds of partnerships also in the context of the wider data spaces ecosystem, which would allow the amplification of knowledge transfer to an even broader ecosystem and not just to the bilateral partners.

2 Technological Evolution from Cloud Native Applications to Data Spaces

Data spaces are the result of a longer technological evolution.

Today, organizations want to reduce their on-premises infrastructure investments and modernize their IT stacks to focus on their business differentiators, by leveraging innovative, highly scalable cloud services that provide strong security, competitive pricing, and access to continuous, fast-paced innovation. This trend was made possible by the development of new technologies, several of them open source, which allowed IT teams to rethink how they could best support their business needs while at the same time increasing the efficiency of their infrastructure deployments.

2.1 Containerization: A New Paradigm

Containers are the first important milestone that we will highlight. The term container is taken from the principle of shipping containers. As long as we do not talk about special substances, the harbor infrastructure doesn’t care about what is inside a container, be it a luxury car or ten refrigerators. The handling of the container itself, due to its uniform interface, stays exactly the same. Containers allow software developers to package their source code with all the required dependencies in a self-contained way, so that they can be deployed and tested consistently across different environments like a developer’s laptop, test, system integration, or production. This containerization addresses risks like version mismatches in shared system libraries that can lead to potentially large incidents.

A notable technology at the beginning of this development is Docker, which provides a set of services that use operating system (OS)-level virtualization to deliver software in containers. Over time, the tooling ecosystem was massively extended to simplify working with containers, and other solutions entered the market.
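As a small, hedged illustration of this uniform handling, the following sketch uses the Docker SDK for Python (the `docker` package) to build an image from an application directory and run the resulting container; the build path and image tag are hypothetical placeholders, and a local Docker daemon is assumed.

```python
# Minimal sketch using the Docker SDK for Python ("docker" package).
# Assumes a local Docker daemon and an application directory "./myapp"
# containing a Dockerfile; paths and tags are illustrative only.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Build an image that packages the source code with all its dependencies.
image, build_logs = client.images.build(path="./myapp", tag="myapp:1.0")

# Run the same artifact unchanged, whether on a laptop, a test system,
# or any other host with a Docker-compatible runtime.
output = client.containers.run("myapp:1.0", remove=True)
print(output.decode("utf-8"))
```

The key point is that the built image, not the surrounding environment, carries the dependencies, which is what makes deployments reproducible across stages.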

2.2 Container Orchestration: Kubernetes (k8s)

Once containers were used by a broader community, the next challenge was to run them efficiently on top of distributed infrastructure. One solution to this challenge, which quickly emerged as a broadly accepted standard, is Kubernetes, the open-source container orchestration system, which was initially announced by Google in mid-2014 and was heavily influenced by the design of Google’s Borg [3].

Kubernetes is the core of several managed cloud services that aim to provide low operations overhead and a high degree of automation for a seamless user experience. Container orchestration solutions nowadays allow for the implementation of highly efficient, scalable, and secure solutions that provide a strong foundation for all kinds of enterprise applications. The ability to use the underlying infrastructure in an efficient, customizable manner, for example, by leveraging bin packing, is just one of the benefits that this new model introduced.
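As a hedged illustration of this orchestration layer, the sketch below uses the official Kubernetes Python client to list running pods together with the nodes the scheduler placed them on, which is where effects such as bin packing become visible; it assumes access to a cluster via a local kubeconfig.

```python
# Minimal sketch using the official Kubernetes Python client ("kubernetes" package).
# Assumes a reachable cluster and a local kubeconfig; the output is illustrative.
from kubernetes import client, config

config.load_kube_config()   # use the current kubeconfig context
v1 = client.CoreV1Api()

# List all pods and show where the scheduler placed them. The node assignment
# reflects orchestration decisions such as bin packing across the cluster.
pods = v1.list_pod_for_all_namespaces(watch=False)
for pod in pods.items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name} -> {pod.spec.node_name}")
```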

2.3 Service Mesh: Istio

As large-scale containerized applications, consisting of hundreds or even thousands of containers, were built, the next challenge was to effectively connect these services with each other and to control their interaction behavior.

As containerized services are often described as microservices, the solution to this challenge that found quick adoption was termed Service Mesh, and one of the leading open-source implementations is Istio.

The Service Mesh uses so-called “sidecar” proxies that run alongside each service. This means that for each service, an individual sidecar proxy is instantiated, and the connections between the different sidecars form the mesh. Some important functionalities that a Service Mesh can provide are service discovery, load balancing, failure recovery, metrics and logging, monitoring, A/B-testing, canary rollouts, rate limiting, access control, and end-to-end authentication. These can help developers to focus on solving business problems and service specifics rather than the surrounding ecosystem features and requirements.
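The following sketch is only a conceptual illustration of a few of the cross-cutting concerns listed above (retries, simple metrics, and authentication headers), written as an in-process Python wrapper; in an actual service mesh such as Istio, these responsibilities live in the sidecar proxy and remain transparent to the application code. The service URL, header, and token are hypothetical.

```python
# Conceptual sketch of sidecar-style concerns (retries, metrics, auth headers),
# implemented in-process purely for illustration. In a real service mesh these
# responsibilities are handled by the sidecar proxy, not by application code.
import time
import requests  # any HTTP client would do; assumed to be installed

REQUEST_COUNT = 0  # stand-in for a metrics backend

def call_service(url: str, token: str, retries: int = 3, timeout: float = 2.0):
    """Call a downstream service with retries, simple metrics, and an auth header."""
    global REQUEST_COUNT
    for attempt in range(1, retries + 1):
        REQUEST_COUNT += 1
        try:
            response = requests.get(
                url,
                headers={"Authorization": f"Bearer {token}"},  # end-to-end auth
                timeout=timeout,
            )
            response.raise_for_status()
            return response.json()
        except requests.RequestException:
            time.sleep(0.1 * attempt)  # simple backoff before the next attempt
    raise RuntimeError(f"service at {url} unavailable after {retries} attempts")

# Hypothetical usage:
# orders = call_service("http://orders.internal/api/v1/orders", token="example-token")
```

Moving exactly this kind of logic out of every service and into a uniform proxy layer is what lets developers concentrate on the business problem itself.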

2.4 Data Mesh

The new Service Mesh principles influenced another important step in the progression toward data spaces. The term data mesh [4] was introduced in 2019 and describes an enterprise architecture that combines the agility of a service mesh with product management practices, platform thinking, and self-service around data stores.

The high-level idea is to manage data as if it was a product, including a team that is responsible for this “data product.” Besides the quality and availability aspects, the team also needs to ensure that data can be easily discovered and consumed by the surrounding ecosystem, which might be a single organization.

Data discovery, which is one of the key features, is made possible by mechanisms such as metadata or machine-readable self-descriptions that describe the underlying data in terms of quality, usability, domain specificity, and further aspects. This helps business users identify information that fits their needs.
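As a hedged sketch of what such a machine-readable self-description could contain, the following Python dataclass models a few plausible fields (owner, domain, quality indicator, schema reference); the field names and values are illustrative assumptions, not a normative metadata schema.

```python
# Illustrative sketch of a machine-readable self-description for a data product.
# Field names are assumptions for illustration, not a normative metadata schema.
from dataclasses import dataclass, asdict, field
import json

@dataclass
class DataProductDescription:
    name: str
    domain: str                      # business domain the product belongs to
    owner: str                       # team accountable for the data product
    schema_ref: str                  # reference to the technical schema
    quality_score: float             # e.g., a completeness/accuracy indicator
    usage_terms: str                 # licensing or usage-policy reference
    tags: list[str] = field(default_factory=list)

description = DataProductDescription(
    name="vehicle-telemetry-curated",
    domain="mobility",
    owner="telemetry-platform-team",
    schema_ref="schemas/vehicle_telemetry_v2.json",
    quality_score=0.97,
    usage_terms="internal-research-only",
    tags=["telemetry", "curated", "hourly"],
)

# Published as JSON so catalog and discovery tooling can index it.
print(json.dumps(asdict(description), indent=2))
```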

In order to ensure that data can be consumed frictionlessly, complementary technologies, for example, GraphQL, could be added to the mix, but they might introduce new challenges and therefore need to be carefully evaluated before being considered major building blocks of a technology strategy.

2.5 Data Space

While the concepts that were discussed in the previous sections enable broad and industry-agnostic data processing use cases for companies of all sizes, one important element is missing to enable cross-organizational collaboration on data, and this is a model of trust.

Once data leaves the control span of a given organization, the data provider can no longer execute governance on the data and has to rely on the consuming parties to follow agreements, licenses, and contracts. It would be beneficial to have a technology that can help keep control over shared data, by extending the control span to the destinations that request data, which would essentially allow the creation of a trust boundary on top of a virtual overlay network, with multiple participants and distributed locations—a data space.

This is where the Reference Architecture Model of the International Data Spaces Association (IDSA) comes into play, as it provides a solution that can fill this gap and create a vibrant and pluralistic data ecosystem that gets widely adopted.

Within a data space, data can be enriched with policies that are enforced by the platform components like the Connector and allow the data provider to control the life cycle of the data and decide how the data can be used. The governance model can also be amended with contractual agreements for cases where the technology alone cannot provide a strong enough foundation of trust.
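To make this idea of machine-enforceable policies more tangible, here is a simplified, hedged sketch of how a connector-like component might evaluate a usage policy before releasing data; it illustrates the principle only, is not the actual IDS Connector API, and all names are hypothetical.

```python
# Simplified sketch of policy-controlled data release, illustrating the principle
# only; this is not the actual IDS Connector API, and all names are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class UsagePolicy:
    allowed_purposes: set[str]   # e.g., {"quality-analytics"}
    allowed_consumers: set[str]  # identities admitted to the data space
    not_after: datetime          # expiry of the usage grant

@dataclass
class DataRequest:
    consumer_id: str
    purpose: str

def release_allowed(policy: UsagePolicy, request: DataRequest) -> bool:
    """Evaluate a request against the provider-defined usage policy."""
    return (
        request.consumer_id in policy.allowed_consumers
        and request.purpose in policy.allowed_purposes
        and datetime.now(timezone.utc) <= policy.not_after
    )

policy = UsagePolicy(
    allowed_purposes={"quality-analytics"},
    allowed_consumers={"partner-b-connector"},
    not_after=datetime(2026, 1, 1, tzinfo=timezone.utc),
)
request = DataRequest(consumer_id="partner-b-connector", purpose="quality-analytics")

if release_allowed(policy, request):
    print("release data to consumer")   # the connector would now serve the data
else:
    print("deny request and log the policy violation")
```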

3 Interoperability as a Key Enabler for Hybrid and Large-Scale Data Spaces

While cloud computing established the basis for efficient networks of functional services, “big data” and “data lakes” also had a strong influence on the evolution of data spaces. “Big data” found its first prominent implementation in 2005 with the Hadoop data management framework, which was developed at Yahoo and based on Google’s MapReduce [5] paper.

3.1 Big Data and Data Lakes: An Early Generation of Data Spaces

The key idea behind big data is to consolidate data in one location in order to extract value out of it. These consolidated data repositories, and their surrounding ecosystem, are often described as data lakes. This concept has several commonalities with the data spaces idea. One is to bring together information from existing data stores.

The issue with this goal is that over time, the amount and velocity of data have increased and that at the same time, data is often distributed among independent data silos that don’t share common governance mechanisms or data models. This makes the integration of all these disparate data sources an extremely complex endeavor. The growing demand for digitalization and cross-enterprise service offerings introduced additional challenges, and often organizations don’t even know what data they have, where it originates from, or what the quality of the data is. To get a handle on the data that is present in an organization, data warehouses tried to evolve into “Enterprise Information Stores” which introduced new challenges as the increased velocity and volume of data led to inefficiencies with regard to data ingestion and transformation.

Big data quickly raised strong expectations as the promise of increasing the scope of usable data within an organization was very appealing. The ability to concentrate data in a single location and to join datasets to enable new use cases like machine learning was understood as a strong opportunity to build new products and to generate additional information and organizational wisdom. Some of the concepts underlying data spaces are quite similar to what big data tried to solve. Both aim to generate more value from shared data, by introducing common or joint use cases through evolution and innovation; what has changed is the scope.

One thing that happens in organizations from time to time is that a prototype silently evolves into a production solution. The transition from Extract Transform Load (ETL) to Extract Load Transform (ELT) in the data lake context provides an ideal candidate for such a silent transition, as storing data from all the different sources “as-is” in a single location sounds like an easy exercise.

Such a transition usually doesn’t follow well-informed, purpose-designed architectural decisions covering systems and information architecture. The idea that having data in a central location would magically solve all the challenges around data ingestion and consumption, real-time use cases, streaming and batch scenarios, integration with other enterprise systems like analytics platforms, and the differing requirements of process-driven and transactional use cases, all at the same time, was bound to fail. It is therefore no wonder that different analyst reports from 2015 to 2017 came to the conclusion that 60 to 85% of big data projects actually failed.
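To make the ETL versus ELT distinction concrete, the following minimal sketch contrasts the two styles with an in-memory SQLite database: ETL cleans data before loading it, while ELT loads everything raw and transforms it inside the store. All data, table, and column names are made up for illustration.

```python
# Minimal, illustrative contrast between ETL and ELT using an in-memory SQLite
# database; all data and names are made up for illustration purposes.
import sqlite3

raw_rows = [("2021-03-01", "  42.0 "), ("2021-03-02", "n/a"), ("2021-03-03", "39.5")]

# ETL: transform (clean, filter) the data before loading it into the target store.
def etl(rows):
    cleaned = []
    for day, value in rows:
        value = value.strip()
        if value.replace(".", "", 1).isdigit():      # drop unparsable records
            cleaned.append((day, float(value)))
    return cleaned

# ELT: load everything "as-is" first, then transform inside the store with SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_measurements (day TEXT, value TEXT)")
conn.executemany("INSERT INTO raw_measurements VALUES (?, ?)", raw_rows)
conn.execute(
    """
    CREATE TABLE curated_measurements AS
    SELECT day, CAST(TRIM(value) AS REAL) AS value
    FROM raw_measurements
    WHERE TRIM(value) GLOB '[0-9]*'
    """
)

print("ETL result:", etl(raw_rows))
print("ELT result:", conn.execute("SELECT * FROM curated_measurements").fetchall())
```

The ELT variant looks deceptively simple, which is exactly why it invites the silent, under-architected transitions described above.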

In alignment with the concepts of data warehouses and data marts, so-called data ponds were introduced to support the data lake architecture. Data gets ingested into the data lake and is then processed to fit a special purpose, and the “cleansed data” is eventually persisted in a data pond. This evolution introduces its own challenges, but at a high level of abstraction, the “new” architecture is reminiscent of a classic enterprise data warehouse solution, which it originally aimed to replace.

While the “built for purpose” subsets of the data lake provided a good starting point, they still couldn’t address specific consumption use cases that required things like high Create-Read-Update-Delete (CRUD) performance or the modeling of specific data relationships in the context of a specific knowledge domain. To solve these challenges, a “polyglot” data environment was required that included different types of data stores like relational, document, or graph databases, columnar stores, and cubes. The target state was once again reminiscent of the classic data warehouse architecture.

Eventually, many data lake project owners came to the conclusion that the original data warehouse architecture did incorporate a lot of good concepts and that data lakes were better used as a complement than a replacement. While the scope of a data lake typically includes data sources and consumers within an organization, data spaces envision a much broader ecosystem of cross-organizational collaboration. And while this vision offers immense opportunities, it also introduces nontrivial challenges at the technical and organizational level.

The wider discussion around data meshes started an evolution that not only complemented classic data warehouse architectures but transformed them into distributed networks of Information Management (IM) factories, supporting specialized ingestion and consumption use cases and collaborating with each other to holistically solve larger-scoped IM scenarios.

Concepts like cloud and International Data Spaces (IDS) try to provide answers to some of the aforementioned challenges by integrating technology and architecture solutions to provide highly scalable infrastructure for data meshes while at the same time tackling the challenge of data governance throughout the whole life cycle of the data.

3.2 Gravitation and Expansion: The “Yin and Yang” of a Successful Data Spaces Strategy

An important learning with regard to data spaces, which particularly needs to be considered in cross-industrial initiatives like Gaia-X, is that an overly strong focus on standardization can introduce negative side effects as well. One learning from the data lake era is that polyglot solutions can add a high degree of value to businesses—some of these solutions might, however, be proprietary to some degree.

While the added value through the consolidation of data into a common data store is well understood, the potential of dedicated, use case specific implementations of data processing capabilities in the context of complex use cases should not be underestimated.

Examples of such implementations range from specific hardware requirements like edge components or sensor arrays, through use case-specific functional and nonfunctional requirements like scalability or specific security patterns, to highly optimized cloud-native data warehouse services, which apply sophisticated cloud architecture and operation patterns to provide massive performance. In some cases, these implementations even overcome limitations of classic IT systems like ingestion and consumption efforts in DWH environments or, with a somewhat relaxed set of constraints, the CAP theorem [6].

Figure 25.2 illustrates the concept of a data space ecosystem as it is envisioned within initiatives like Gaia-X. Participants share or consume data provided via so-called data services. These services can either provide access to the data itself, provide applications that encapsulate the access, or offer data-specific operator implementations to standardize or simplify the exchange. By embedding these services into more sophisticated data ecosystems that target domain-specific use cases like mobility, healthcare, or logistics, so-called smart services are introduced.

Fig. 25.2 Data space (network) vision, incl. gravitation core, (smart) data services, and specialized contributors. © 2021, Google

Traversing up the typical data space enterprise architecture stack (Fig. 25.3), these smart services can either provide more advanced data management and operation services or even represent complex use cases by providing domain-specific data models and business-focused interfaces. The ability to combine services at the business level presents one of the foundations to form domain-wide use cases that go beyond simple data exchange scenarios.

Fig. 25.3 Typical data space enterprise architecture stack based on the IDS Reference Architecture Model 3.0 [7]. © 2021, Google

While the approach of combining internal and external IDS connectors into data space network clusters eliminates the need for central data stores, having gravitational elements within a data space scenario does introduce many benefits. Besides potentially reducing network traffic, it provides the opportunity to consolidate data, similar to a classic data warehouse, allowing domain and data space data to be combined and analyzed more holistically to derive common information and wisdom from it. Typical data warehouse scenarios usually promote a clear separation between data sources and data consumers; data spaces should introduce bidirectional communication as a core design goal. Feeding back the results of analyzing shared data can help with the incremental evolution of data spaces and their driving domain use cases as a whole. The combination of gravitation and expansion within a domain’s data space ecosystem is the key to building sophisticated use cases of high maturity.

Smart services that provide custom-tailored data processing and analytics capabilities will play an integral part in the upcoming data spaces, and fundamental platform services like Kubernetes can provide the foundation for them. The provided layers of abstraction allow the realization of portable solutions that can be seamlessly moved between different platforms without the need for a complete rewrite. This portability increases the sovereignty of the organizations that deploy and operate such services.

An issue with agreeing on a least common denominator for technologies and platforms can, however, be that functionality suffers. As mentioned earlier, several highly specialized and sophisticated cloud-based hyper-scale solutions can hardly be mapped to this least common denominator. If “portability under every circumstance” is defined as a core architecture principle, this could lead to a situation where organizations are no longer able to leverage bleeding-edge technology that would provide them with a strong business advantage.

In “Don’t get locked up into avoiding lock-in” [8], Gregor Hohpe provides a well-balanced view on the different lock-in types and why “lock-in isn’t an all-or-nothing affair.” The article explains why common attributes like lock-in or coupling aren’t binary, debunks common myths, such as the belief that using open-source software magically eliminates lock-in, and points out that it is difficult not to be locked into anything, which is why some amount of lock-in might be acceptable in order to unlock the innovation potential of a target architecture to its fullest.

Much of what motivates customers to invest in data space initiatives and associated use cases is the high demand for wider adoption of digital solutions within their specific business domain. In order to achieve this goal, they should have access to the best solutions in the market, so that they can focus on their business goal.

3.3 Portability and Interoperability: A Perfect Complement

While portability of data and services is important, organizations need to find the right balance and understand trade-offs between efficient digitization and innovation speed on the one hand and strong sovereignty requirements on the other hand. Therefore, another aspect needs to be added to the spectrum, which is already well-known in the enterprise world, and this is interoperability.

Portability can still be a core architecture principle to aim for, but we advocate a slightly more flexible approach that implicitly allows the use of complementary solutions that might not be fully portable as long as they provide a high degree of interoperability. This line of thinking is related to the free choice of the best possible solution for a given task.

When we envision the full stack of large enterprise IT, including all the legacy that was created over decades, an approach that demands reimplementing everything to fit the defined framework would not be applicable. As most organizations already use certain technologies, along with the associated investment in technology, licenses, and skills, they want to be able to continue using solutions that have proven valuable to achieving their business mission. These companies can still see a benefit in participating in data spaces, be it Gaia-X-compliant European data spaces or international data spaces.

While there are many ways to achieve this “best of both worlds” goal, Fig. 25.4 illustrates one high-level conceptual idea of how this could be approached. The idea is to bring together portability and interoperability in the context of data spaces-driven use cases.

Fig. 25.4 Portability and interoperability perimeters in the context of “polyglot” data spaces. © 2021, Google

Figure 25.4 shows a data space with two associated “Gravitation Centers,” A and B, each consisting of a core, typically represented by a shared data store, and associated (Smart) data services. The latter are intended to offer core processing functionalities associated with a certain data space type and can bring their own complementary data store if needed. Users of the data space access the provided services via defined interfaces and connect them with their very specific use case implementations. While it is important for services in the gravitation center to provide a high level of standardization and portability, the peripheral, customer-specific parts of a use case implementation should be less restricted in this sense. Portability should be interpreted in a sense that covers not only the exchange of data but also data space-specific semantics, as well as compliance with defined API sets, security measures, and protocols.

As a data space grows, it becomes harder to apply centralized governance; therefore, it is important to introduce additional distributed governance mechanisms. IDS Connector implementations can help govern data space participant endpoints and help solve the problem of decreasing trust for participants that are not part of the core services ecosystem (Fig. 25.5). This ensures that only authorized participants can access a service and that defined policies are enforced at all times, regardless of the deployment location of the participating service, be it in a cloud, on-premises, or at the edge.

Fig. 25.5 Trust perimeters in the context of a data space gravitation center and IDS. © 2021, Google

3.4 Interoperability via Solutions-Specific Connector Implementations

IDS provides a reference architecture model that addresses the aforementioned requirements. The ability to manage policy-driven governance of data and data services (access and usage control) not only provides the needed level of functionality but does so in a transparent and standardized way.

In order to participate in a data space collaboration scenario with customer or cloud-specific components while still complying with the required data sovereignty demands, customer or solutions-specific IDS connector implementations have to be provided.

In this context, there might be scenarios where the use of a specific solution makes sense, e.g., a cloud-based data warehouse with extended machine learning capabilities that allows business users to leverage the power of machine learning through a well-known interface like the Structured Query Language (SQL).
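As one hedged illustration of this pattern, the snippet below uses the Google Cloud BigQuery Python client to train and query a model purely through SQL; the project, dataset, table, and column names are hypothetical placeholders, and other warehouses expose comparable SQL-level ML interfaces.

```python
# Illustrative sketch: training and using a model purely through SQL from a
# cloud data warehouse, here via the BigQuery Python client. Project, dataset,
# table, and column names are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # hypothetical project

# Business users can express model training as a SQL statement.
client.query(
    """
    CREATE OR REPLACE MODEL `example_dataset.churn_model`
    OPTIONS (model_type = 'logistic_reg', input_label_cols = ['churned']) AS
    SELECT tenure_months, monthly_spend, support_tickets, churned
    FROM `example_dataset.customer_history`
    """
).result()

# ...and consume predictions through the same SQL interface.
rows = client.query(
    """
    SELECT customer_id, predicted_churned
    FROM ML.PREDICT(MODEL `example_dataset.churn_model`,
                    (SELECT * FROM `example_dataset.current_customers`))
    """
).result()

for row in rows:
    print(row.customer_id, row.predicted_churned)
```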

Regardless of whether such an adapter would be implemented by a customer, a partner, or a provider, a proper evaluation of the suitability of and demand for the underlying use case needs to be performed.

Figure 25.6 illustrates what such a connector construct could look like. Based on a reference connector implementation that would need to be fully portable, the target system-specific adapters would “only” need to comply with interoperability requirements.
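Complementing the figure, a minimal, hedged sketch of this split could look as follows: a portable connector core delegates the actual data access to target system-specific adapters that only have to satisfy a common, interoperable interface. The class and method names are hypothetical and do not correspond to a specific IDS reference implementation.

```python
# Hedged sketch of a connector core with pluggable, target system-specific
# adapters. Class and method names are hypothetical and do not correspond to
# a specific IDS reference implementation.
from abc import ABC, abstractmethod

class DataSourceAdapter(ABC):
    """Interoperability contract every target-system adapter must fulfill."""

    @abstractmethod
    def fetch(self, query: str) -> list[dict]:
        ...

class FileSystemAdapter(DataSourceAdapter):
    """Fully portable example adapter backed by local files (illustrative)."""

    def fetch(self, query: str) -> list[dict]:
        return [{"source": "file", "query": query, "rows": []}]

class CloudWarehouseAdapter(DataSourceAdapter):
    """Adapter for a proprietary cloud warehouse; only the interface is portable."""

    def fetch(self, query: str) -> list[dict]:
        # A real implementation would call the warehouse's SQL API here.
        return [{"source": "cloud-dwh", "query": query, "rows": []}]

class ConnectorCore:
    """Portable core: policy checks and protocol handling stay adapter-agnostic."""

    def __init__(self, adapter: DataSourceAdapter):
        self.adapter = adapter

    def serve(self, query: str) -> list[dict]:
        # Usage-control and identity checks would happen here before delegation.
        return self.adapter.fetch(query)

core = ConnectorCore(CloudWarehouseAdapter())
print(core.serve("SELECT * FROM example_table"))
```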

Fig. 25.6 Portability and interoperability in the context of the IDS Connector Reference Architecture. © 2021, Google

How such peripheral services would be embedded in a complete IDS governance environment, including IDS Broker and Clearing House services, still needs to be discussed; in some cases, especially when non-open-source on-premises systems enter the picture, such services might not fully comply with all sovereignty aspects targeted for a certain ecosystem, e.g., in the context of Gaia-X. We are confident that the value these extended data space scenarios can bring to the implementation of efficient digital use cases and innovation cycles will motivate the necessary steps to be taken.

4 Future Outlook

Over the course of the next few years, we expect organizations to develop new and innovative use cases and products that embed cross-organizational data sharing as a core principle.

This will lead to a higher demand for network and data center capacity, which will, given the climate change challenge and sustainability objectives, make efficient data center operations and Green IT commitments even more important. Energy-efficient cloud infrastructure is, however, not the only relevant challenge. The idea of implementing digital use cases by building large application and service composites from reusable services and artifacts shares several commonalities with the vision behind service-oriented architectures (SOA). Therefore, it makes sense to review why SOA encountered substantial challenges in order to evaluate potential learnings.

One fundamental issue was finding the right balance between a service-based ecosystem with many independent participants and a governance structure to maintain, mature, and refine it. Any service ecosystem will sooner or later face issues if it doesn’t implement a proper governance process that helps control the evolution of the system as a whole. While a central governance instance might not be a contemporary approach anymore, providing tools and best practices to allow participants within such an ecosystem to collaborate and align their service life cycle and evolution management to common goals and principles becomes a mandatory capability for long-term success.

Given the expected dynamics of use cases and distributed innovation power across the different industries and domains, it is very likely that several such ecosystems will arise. In order to drive and accelerate innovation and digital progress across all areas of society, it will be important to have a connecting element between these ecosystems. How such mechanisms and governing elements can be established is under active development and still work in progress. It will be interesting to see how the different use cases will impact these developments. Initiatives like the International Data Spaces Association (IDSA) and Gaia-X could further evolve into such supporting entities, growing with the ecosystem of data and applications, based on a network of secure and reliable interconnected services.

It will be important that the different stakeholders, including international cloud providers, collaborate in strong alignment to ensure that the best possible outcomes are reached.