1 Introduction

Several research infrastructures (RIs) across Europe currently perform experiments on smart grid topics. Each has a particular strength: hardware, software, models, procedures, or the specific experience of its researchers. To exploit the potential of each one and make it available to the community, the ERIGrid project developed advanced system validation methods and tools, together with common models and harmonized validation and deployment procedures.

One of the approaches developed in ERIGrid is laboratory coupling, which makes it possible to exploit synergies among the research infrastructures. By sharing the hardware and software devices of different research infrastructures, new complex test cases can be performed on an extended system configuration. This favours technology development and facilitates the deployment phase, reducing the necessary engineering effort and shortening the time to market of innovations and solutions.

This section first explains the state of the art for smart grid testing and discusses the procedures and system configurations normally adopted for component or system testing. Then, after introducing some aspects related to interoperability, it discusses the communication infrastructure developed in the framework of ERIGrid, which enables online data exchange among the research infrastructures. Finally, Sect. 3 shows the new testing approaches enabled by coupling different RIs, whereas Sect. 4 presents four implementations of laboratory coupling:

  • integration of a remote OLTC controller via IEC 61850;

  • state estimator web service;

  • hardware/software integration between different research infrastructures;

  • integration of remote hardware among different research infrastructures.

1.1 State-of-the-Art for Smart Grid Testing

The state of the art for smart grid testing is still incomplete and in some cases unclear. This is due to the high number of possible system configurations, functionalities and technologies to be tested in a smart grid environment. Some standards have already been developed by standardization bodies, together with specific testing procedures for particular aspects of smart grids. For instance, IEEE has published a standard for the interconnection and interoperability between electric power systems and distributed energy resources (DERs) [1]. This standard includes requirements, response to abnormal conditions, power quality, islanding, and test specifications and requirements for design, production, installation evaluation, commissioning, and periodic tests. Another IEEE standard, currently inactive, relates to the testing of microgrid controllers [2]. This standard aims to provide procedures for testing the energy management system of microgrids, ensuring “plug and play” functionality and establishing comparative performance indices.

Many other standards have been published and reviewed by national and international standardization bodies, but many testing procedures are still not covered by them. This is why research centres develop and implement custom testing procedures. Focusing on the system configuration of a smart grid, four main testing approaches can be applied:

  • Simulation: this testing approach simulates the behaviour of the system configuration based on its mathematical model. Typically, this is the first experiment performed, since it demonstrates the functionality of the technology under development. However, since the system configuration is only a mathematical model, many characteristics of the real system under test might be neglected. This could affect the results of the testing; hence further experiments, with an increased level of realism, have to be performed.

  • Hardware-in-the-Loop: as a second round of testing, a HIL experiment might be performed. This technique, described in detail in Sect. 4, allows hardware or software components to be tested under realistic conditions by coupling a real hardware setup for a domain (or part of a domain) with a real-time simulator. In this case the system under test includes real components, so the validation comes very close to field testing; only the part of the system under test running in the real-time simulator is a model.

  • Pure Hardware: similar to field testing, an experiment can also be performed in a pure hardware system configuration. In this case, the whole system configuration is composed of real devices; the behaviour of the system is exactly the real one in the case of field testing, and very similar to it in the case of laboratory testing.

  • Combination of different testing approaches: the system configuration of some tests requires, on the one hand, a high level of realism and, on the other hand, an extended system under test. These needs cannot be satisfied by any single one of the previous testing approaches; however, combining two or more of them can enable some of these tests. This approach is beyond the state of the art, even when the different testing approaches are combined within a single research infrastructure, since the integration of different systems introduces several challenges. The problem is even harder in the case of integration of multiple research infrastructures.

1.2 Multi-infrastructure Integration

Due to the increasing complexity of smart grids, an integrated approach for analysis and evaluation requires a large-scale validation scenario and may be unfeasible in one single research infrastructure (RI). Reflecting the interdisciplinary and dynamic nature of smart grid research, many smart grid laboratories are designed to support a broad range of testing activities: from component testing to system testing, from hardware to software tests, from research to certification and education. This demand for flexibility is hard to meet without also increasing the complexity of the laboratory infrastructure. Firstly, combined expertise across different domains is required, which is not always available in today's specialized laboratories; secondly, a single complete RI for large-scale cyber-physical energy systems (CPES) is theoretically possible but not a realistic solution in terms of investment (equipment and expertise), operation (staff) and organization. Establishing an RI coupling framework allows the creation of a common pool of resources and expertise, so that existing equipment can be used efficiently and combined with complementary equipment from other RIs to validate research in a holistic, near real-world environment.

On the other hand, developing such a holistic validation framework for CPES would also benefit researchers by facilitating the replication of experiments and the verification of the validity of their results.

The technical obstacles to laboratory collaboration are more narrowly related to the interoperability among these infrastructures. In [3], a generic five-layer interoperability model for a consortium is proposed (Fig. 1). The top layer involves the harmonization of, and agreement on, information sharing policies (i.e. legal and administrative support). The conceptual and semantic design of the holistic test, derived from the desired scenario and the individual RIs' capabilities, requires interoperability at the functional layer. The technical integration of RIs (i.e. the actual interconnection) is deployed through the three lower layers, possibly involving SCADA architectures. Harmonization of information models (e.g. CIM) and communication protocols, as well as synchronization, causality handling and latency compensation, are therefore required for seamless communication among infrastructures and for the correct transmission, reception and interpretation of the exchanged information. This task is, however, not trivial, due to the lack of flexible information models covering both the power and ICT domains and the lack of efforts to harmonize the excessive number of communication protocols found in the literature.

Fig. 1
figure 1

Interoperability architecture in cross-infrastructure holistic experiment [3]

Two important aspects of interoperability in a laboratory context are the exchange of information between technical devices, and the deployment and/or execution of test-specific third-party software on the infrastructure. In the case of external access to a laboratory, software adaptations made necessary by deployment constraints or by the lack of a suitable Application Programming Interface (API) may constitute a major part of the integration effort. In extreme cases, software may have to be rewritten to fit the target environment. Another potential obstacle is the difference in security and confidentiality policies between research infrastructures. Figure 2 describes different interfacing possibilities for integrating an external (third-party) element (equipment, controller, etc.) into the local infrastructure:

E1 :

Direct communication between a laboratory internal supervisory controller and a third-party controller, for example to allow the third party controller to influence the control behaviour of the supervisory controller.

E2 :

Direct communication between a laboratory internal supervisory controller and a third party SCADA system, for example to allow test sequencing software to control a third-party test device which is bringing its own SCADA system to the test.

E3 :

Direct communication between a third-party controller and the laboratory internal SCADA system, for example to allow the third party controller to control a laboratory internal DER unit.

E4 :

Direct communication between the laboratory internal SCADA system and a third party SCADA system, for example to integrate equipment controlled by the third party SCADA system into the laboratory SCADA system in order to control all equipment through a single interface.

E5 :

Direct communication between the laboratory internal SCADA system and a third party IED, for example to allow test software to control both lab components and external test components through a single SCADA interface.

E6 :

Direct communication between a third party SCADA system and a laboratory internal IED, for example to integrate a laboratory device into a test setup consisting of third party devices which are controlled by a third party SCADA system.

E7 :

Direct communication between a third party IED and a laboratory internal IED, for example in order to allow an external IED to influence the behaviour of the internal IED.

E8 :

Direct communication between a laboratory internal IED and an external DER controller, for example to control a third-party device from the laboratory SCADA system through a laboratory RTU.

E9 :

Direct communication between a third party IED and a laboratory internal DER controller, for example to test a third party IED (e.g. a site controller) against a laboratory DER unit.

E10 :

Direct communication between a third party DER controller and a laboratory internal DER controller, for example in order to enable load sharing between multiple generator sets.

E11 :

Direct communication between a laboratory internal DER controller and a third party DER unit, for example to control a compatible third-party device from the laboratory SCADA system through the lab DER controller.

E12 :

Direct communication between a third party DER controller and a laboratory internal DER unit, for example in order to test a third party DER controller against a laboratory DER unit.

Fig. 2
figure 2

Different possibilities for multi-infrastructure integration

In general, each RI coupling interface must satisfy at least the three lower interoperability layers described in Fig. 1. In terms of supported interface protocols, a wide variety of solutions is found, including TCP/IP, UDP/IP (ASN.1 encoding according to IEC 61499), Modbus/SunSpec, IEC 61850, Java RMI, Matlab API, XML-RPC, OPC, proprietary interfaces and many more.

2 JaNDER Communication Platform for Lab-Coupling

This section provides an overview of the communication platform developed in the ERIGrid project, which enables online data exchange. It can be used for coupling different RIs: testing software or a controller in a remote laboratory, acquiring data from several RIs, or even creating a virtual research infrastructure. This platform is called the “Joint Test Facility for Smart Energy Networks with Distributed Energy Resources” (JaNDER). JaNDER is a cloud platform for the exchange of information (measurements, control signals, laboratory asset descriptions) between geographically distributed labs over a secure internet connection. This section describes the three JaNDER levels developed.

2.1 Features of the Cloud-Based Communication Platform

Based on the needs of the possible users of JaNDER (including research centers, academia and industries), the development of JaNDER focused on four key features:

  • Modularization of the implementation in order to ease integration: RIs with limited resources can still implement the basic functionality.

  • Development of a generic information model as the basis onto which support for more specific information models can be built: this ensures that at least some functionality can be mapped to JaNDER, regardless of the automation level at the individual RI, while contributing to modularization.

  • Support for exchange of system configuration information: this supplements the exchange of dynamic data with static data, such as grid topology.

  • New replication mechanism: this removes the requirements for opening firewall ports at each participating RI, and the associated administrative overhead.

The ERIGrid JaNDER platform is based on a three-level architecture: this provides both modularity and flexibility, while remaining open to future extension via additional levels.

2.2 Basic Data Sharing via JaNDER-L0

JaNDER-L0 implements the base functionalities used by all the other layers and is therefore a fundamental building block for the whole architecture. In particular, its main purpose is to provide a basic mechanism for exchanging live data (typically measurements and controls) between different RIs.

The starting point for each RI is a real-time repository based on Redis, an open-source, in-memory data structure store used as a database, cache and message broker. This real-time repository collects measurements and controls from the field (or, more frequently, as shown in Fig. 3, from an already existing SCADA system): the reason for adopting this repository is to decouple the JaNDER platform from any specific automation solution already installed in the infrastructure. The idea is to make the data points of each RI available in the same basic format by using a simple key-value repository. The remote connection of infrastructures is then implemented by deploying a common real-time repository (which can be hosted in a cloud environment, for example) that is automatically synchronized with all the local real-time databases of the partners. In other words, the common repository acts as a central broker connecting the different local repositories of the partners and can be thought of as a “virtual bus” connecting all authorized facilities. There is no handling of standardized protocols or complex interaction patterns above the exchange of data points through the virtual bus.
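The "virtual bus" idea described above can be sketched as follows. This is a minimal illustration only, using plain Python dictionaries to stand in for the local and cloud Redis instances; the key naming scheme and function names are illustrative assumptions, not the actual JaNDER implementation, which runs the replication continuously over a secured connection.

```python
# Sketch of the JaNDER-L0 "virtual bus": each RI publishes data points
# into its local key-value store, and a replication step mirrors them
# into a common (cloud-hosted) store readable by all authorized RIs.
# Dicts stand in for Redis instances; key names are illustrative.

def publish(local_store, ri_name, point, value):
    """Store a data point under a namespaced key, e.g. 'RI1:trafo.tapPos'."""
    local_store[f"{ri_name}:{point}"] = value

def replicate(local_store, cloud_store):
    """Push all local keys into the common repository (one sync cycle)."""
    cloud_store.update(local_store)

def read_remote(cloud_store, ri_name, point):
    """Any authorized RI can read another RI's points from the common store."""
    return cloud_store.get(f"{ri_name}:{point}")

# Two laboratories sharing measurements through the common repository
lab_a, lab_b, cloud = {}, {}, {}
publish(lab_a, "RI1", "trafo.tapPos", 3)
publish(lab_b, "RI2", "feeder1.P_kW", 42.5)
replicate(lab_a, cloud)
replicate(lab_b, cloud)
print(read_remote(cloud, "RI1", "trafo.tapPos"))  # -> 3
```

Note that, as in the real platform, only the central repository needs to be reachable from outside: each RI initiates its own outbound synchronization, which is what removes the need to open inbound firewall ports at the laboratories.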

The fully open-source nature of JaNDER-L0 makes it easy to extend the virtual research infrastructure community to new participants. However, external users such as other research centres, academia or industry will typically also be interested in a standardized interfacing protocol: this is handled by the higher JaNDER levels.

Fig. 3
figure 3

JaNDER Level 0 architecture

2.3 IEC 61850-Based Communication Platform via JaNDER-L1

JaNDER-L1 is a software abstraction built on top of level 0 and its purpose is to provide an IEC 61850 interface on top of the very simple data structures defined in Redis.

The “Mapping” and “CID” files shown as inputs in Fig. 4 are the fundamental inputs the IEC 61850 server needs in order to work. In more detail, the CID (Configured IED Description) is the standard IEC 61850 file used for configuring a device (an IED); it contains a data model representing (a subset of) the contents of the Redis repository in terms of IEC 61850 Logical Nodes. In addition to this file, it is of course necessary to link the data attributes defined inside it with the live values stored in Redis: this is done by means of a mapping file, a text file in which each line contains an IEC 61850 data attribute name and the corresponding Redis data point name. The server uses this file to connect the IEC 61850 data model specified in the CID to Redis.
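The mapping file format lends itself to a very small parser. The sketch below assumes the one-pair-per-line layout described above; the concrete attribute paths and Redis key names are invented for illustration and do not come from the actual JaNDER distribution.

```python
# Sketch of parsing a JaNDER-L1 mapping file linking IEC 61850 data
# attributes to Redis data points. One whitespace-separated pair per
# line, as described in the text; names below are illustrative only.

MAPPING_TEXT = """\
ied1/MMXU1.TotW.mag.f              RI1:feeder1.P_kW
ied1/YLTC1.TapChg.valWTr.posVal    RI1:trafo.tapPos
"""

def parse_mapping(text):
    """Return a dict {iec61850_attribute: redis_key}, skipping blanks and comments."""
    mapping = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        attribute, redis_key = line.split()
        mapping[attribute] = redis_key
    return mapping

mapping = parse_mapping(MAPPING_TEXT)
print(mapping["ied1/YLTC1.TapChg.valWTr.posVal"])  # -> RI1:trafo.tapPos
```

With such a table in hand, the IEC 61850 server can resolve every read or write on a data attribute into a get or set on the corresponding Redis key.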

The implemented IEC 61850 connection is always local to the client (i.e., the host of the IEC 61850 client also runs the server interface, on behalf of the real information producer), so that cyber security concerns are eliminated at this level.

Fig. 4
figure 4

JaNDER Level 1 architecture

2.4 CIM-based Communication Platform via JaNDER-L2

JaNDER-L2 is a software abstraction built on top of Level 0 (not Level 1, even though this would in principle be technically possible). Its purpose is to enable the definition of a CIM-based service-oriented architecture on top of the basic live data exchange provided by the lower JaNDER levels.

Fig. 5
figure 5

JaNDER Level 2 architecture

The client application (RI3), as indicated in Fig. 5, is CIMDraw, an open-source graphical interface for handling CIM network representations in conformance with the CGMES profile, augmented with SCADA interfacing code developed specifically for ERIGrid. Apart from this SCADA interface, which is a different representation of the contents of the real-time repository, the main interest of this level lies in the possibility of integrating other CIM-based services, for example power flow calculation engines, state estimators or voltage control algorithms, which take CIM representations, along with associated measurements, as inputs and produce calculated results as output.

3 Integrated Research Infrastructure

This section describes the new power system testing approaches enabled by the communication platform explained in Sect. 2. The demonstration of these approaches is reported in Sect. 4.

3.1 Hardware/Software Integration Between Different Laboratories

Smart grids require advanced functionalities in order to optimize their operation. Moving from simulations to actual field implementations could lead to different operational behaviour or could even jeopardize the system's operation. This means that software or a controller must be tested in a relevant environment before field testing. However, software developers may sometimes require a remote test in order to avoid intellectual property issues. Running the software on their own premises and exchanging data online with another RI, where monitored and possibly controlled devices are located, allows software developers to protect their know-how: not even an executable file is provided to the RI hosting the hardware. In this case the RI hosting the system under test (with the exception of the software/controller under test) sees only the inputs and outputs of the object under test, as a black box.

3.2 Virtual Research Infrastructure

In order to exploit the synergies among the RIs, each with its own characteristics, laboratory coupling is needed. In particular, using the devices of different RIs at the same time enables the extension of the system under test without any further investment in new components. Extending the hardware resources of a specific RI with the resources of other RIs makes it possible to implement more test cases than a single RI could.

The Virtual Research Infrastructure (VRI) is a combination of RIs coupled by means of a communication platform, which combines them into an equivalent, bigger laboratory. In this way, remote hardware can be integrated as part of the system under test. The interconnection of the RIs avoids additional investments in new hardware and encourages sharing the components of the integrated RIs. The resulting integrated research infrastructure makes it possible to test components against realistic system behaviour even in RIs without a HIL-capable simulator.

The technical possibility of conducting such joint experiments allows control algorithms running in one research infrastructure to remotely control devices physically located in other facilities. The advantage of the VRI concept is the possibility for one RI to access resources located at a remote site: these resources can range from actual hardware devices to real-time simulators or Supervisory Control and Data Acquisition (SCADA) systems. A typical VRI can be seen in Fig. 6. In this particular case, one of the RIs (TUD) acts as the network simulator while the two other RIs (DTU and VTT) participate in a closed-loop experiment with their hardware resources.

Fig. 6
figure 6

Integrated research infrastructure involving virtual connection between three research infrastructures

The main objective of setting up such an integrated RI is to enable new smart grid tests in a cost-effective way.

4 Examples of Laboratory Couplings

In this section the demonstrations of the different approaches described in Sect. 3 and implemented in ERIGrid are discussed. In order to demonstrate the approaches, the following use case has been considered: validation of a centralized voltage control. The implementation of JaNDER enables several test cases for this same use case. The following test cases demonstrate the potential of a communication platform that allows online data exchange between RIs, thus coupling multiple laboratories.

4.1 Integration of a Remote OLTC Controller via IEC 61850

This test case aims at testing software at one RI (ICCS), located in Greece, against power system equipment at a remote RI (OCT), located in Spain. The use of IEC 61850 through JaNDER-L1 offers the advantage of implementing the test case with a widely accepted communication protocol that is used in actual field implementations. In OCT's laboratory, the OLTC controller communicates with the local Redis through the Modbus protocol. The local Redis updates the cloud Redis, which is also synchronized with the local Redis at the ICCS RI, where an IEC 61850 server maps the local Redis measurements and control signals to IEC 61850 logical nodes. Finally, an IEC 61850 client is used to update the control signals in the IEC 61850 server and to acquire the measurements from it. The test case setup is depicted in Fig. 7.

Fig. 7
figure 7

JaNDER-L1 setup overview

At the ICCS RI, a centralized voltage control (CVC) algorithm controls the real OLTC controller located in the UDEX Laboratory at OCT, in Spain. Through the JaNDER platform, the controller receives the tap position measurement, performs the optimization and sends commands to increase or decrease the tap position of the OLTC. At the ICCS RI, the DRTS is also used to simulate the LV benchmark network. The simulated network consists of 4 simulated PV systems, a Battery Energy Storage System (BESS) and a simulated transformer that changes its tap position according to the tap position signal provided by OCT's OLTC controller. As part of its operation, the controller also receives measurements from, and sends commands to, the simulated LV benchmark network. The test setup is presented in Fig. 8.

Fig. 8
figure 8

JaNDER-L1 test setup

In this test case, the IEC 61850 interface adds a further delay of 7–8 ms compared to the JaNDER-L0 implementation. This additional delay is insignificant for the testing of this kind of controller (CVC), which operates with a time step of seconds. Therefore, because the tap change is not a delay-critical operation, no negative effects of these time delays were observed during the experiments. Finally, it is safe to assume that JaNDER-L1 can be used in the same kinds of test cases as JaNDER-L0, since the difference in time delays is very small, while benefiting from the widely accepted IEC 61850 communication protocol.

4.2 State Estimator Web Service

One of the test cases performed in ERIGrid concerns the web-based state estimation of an RI that publishes its measurements with JaNDER. In this case the state estimation uses the Common Information Model (CIM) through JaNDER-L2. In particular, the web service state estimator has been demonstrated on Tecnalia's smart grid laboratory. On the RI side, JaNDER-L0 was implemented. The system architecture is shown in Fig. 9.

Fig. 9
figure 9

Implementation of JaNDER-L0 in Tecnalia’s smart grid laboratory

In order to connect the physical devices in the laboratory (Inverter_1, Inverter_2, Load Bank_1, etc.) to JaNDER-L0, a set of communication protocol gateways has been developed. These gateways are software applications in charge of translating between the device-specific protocol (Modbus in this case) and the JaNDER-L0 protocol (based on Redis). The gateway applications perform basic tasks such as periodically polling devices for measurement and status values and executing commands published in JaNDER; this is accomplished by mapping Redis variables to Modbus registers and vice versa. The local Redis instance is connected to other applications, such as Node-RED for data processing and Redis Commander for accessing the data through a web client. In addition, the local Redis instance is connected to the RedisRepl application, which is in charge of replicating the Redis keys into a remote Redis instance hosted in the cloud. This mechanism allows the integration of devices at Tecnalia's laboratory with other research infrastructures, since all the local Redis data can be accessed through JaNDER.
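The polling side of such a gateway can be sketched as below. A stub function and a plain dictionary replace the live Modbus link (which a real gateway might handle with a library such as pymodbus) and the local Redis instance; the register addresses and key names are invented for illustration.

```python
# Sketch of a protocol gateway polling cycle: mapped Modbus registers
# are read and mirrored into Redis-style keys (the reverse direction,
# writing registers from command keys, would follow the same pattern).
# Addresses, key names and values below are illustrative only.

REGISTER_MAP = {
    "RI3:inverter1.P_kW": 100,   # holding register carrying active power
    "RI3:loadbank1.P_kW": 200,
}

def read_register(address):
    """Stub for a Modbus read; stands in for a real client call."""
    fake_plant = {100: 12.4, 200: 8.0}
    return fake_plant[address]

def poll_once(redis_store):
    """One polling cycle: copy every mapped register into the key store."""
    for key, address in REGISTER_MAP.items():
        redis_store[key] = read_register(address)

store = {}
poll_once(store)
print(store["RI3:inverter1.P_kW"])  # -> 12.4
```

In the deployed system this cycle runs periodically, so the local Redis instance always holds a recent snapshot of the plant that RedisRepl can push to the cloud.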

The implementation of JaNDER-L2 is based on the development of a CIM model containing the representation of the laboratory network. This file links to the actual measurements by using the corresponding JaNDER-L0 keys as the names of its CIM analog objects. This allows an application using the CIM model to request the needed measurements from JaNDER-L0 and update their values on demand. A deployment example is shown in Fig. 10, where the state estimator has been integrated with JaNDER. The user operates the state estimator through a web interface. The state estimator is connected to a Redis instance hosted in the cloud, from which it obtains the real-time measurement data it needs.

Fig. 10
figure 10

Implementation of JaNDER-L2 in Tecnalia’s smart grid laboratory
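The linking convention used by JaNDER-L2, where a CIM analog object is named after the JaNDER-L0 key that carries its live value, can be sketched as follows. The class and key names are simplified illustrations, not the actual CGMES or CIMDraw structures.

```python
# Sketch of the JaNDER-L2 measurement linking: each CIM Analog object
# is named after the JaNDER-L0 key holding its live value, so a CIM
# application can refresh the model on demand from the common store.
# Class, key and value below are simplified illustrations.

class Analog:
    """Minimal stand-in for a CIM Analog measurement object."""
    def __init__(self, name):
        self.name = name          # doubles as the JaNDER-L0 key
        self.value = None

def refresh(analogs, cloud_store):
    """Pull the latest value for every Analog from the common repository."""
    for analog in analogs:
        analog.value = cloud_store.get(analog.name)

cloud = {"RI3:bus4.V_pu": 1.02}            # stand-in for the cloud Redis
measurements = [Analog("RI3:bus4.V_pu")]
refresh(measurements, cloud)
print(measurements[0].value)  # -> 1.02
```

A state estimator built on this convention only needs the CIM file and access to the common repository: the analog names themselves tell it which keys to fetch.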

4.3 Geographically Distributed Real-Time Simulation

While the previous two examples show hardware/software integration between two RIs, the last examples aim at demonstrating the “Virtual Research Infrastructure” approach. The integration of the RIs can emulate the same setup as a software-in-the-loop (SIL), controller hardware-in-the-loop (CHIL) or power hardware-in-the-loop (PHIL) implementation.

An implementation example in which a large power system is split for simulation across two DRTSs at two different laboratories was carried out within ERIGrid. The test setup utilized is shown in Fig. 11 and summarized in Table 1.

Fig. 11
figure 11

Geographically distributed real-time simulation setup

Table 1 Summary of GDRTS implementation

The objective of the study was to establish the feasibility of using a geographically distributed real-time simulation setup to analyse power system dynamic phenomena, such as frequency events, while incorporating controls operating at similar time scales. Due to this objective, JaNDER was not utilized as the communication interface between the two geographically separated laboratories.

The results obtained provide confidence in the feasibility of the approach and suggest carrying out further research activities on this topic.

4.4 Real-Time Geographically Distributed CHIL

An example implementation where a power system is simulated within a DRTS at one laboratory while the controller for the power system is implemented within another laboratory was undertaken within ERIGrid. The test setup utilized is shown in Fig. 12 and summarized in Table 2.

Fig. 12
figure 12

Real-time geographically distributed controller hardware-in-the-loop setup [4]

Table 2 Summary of RT-GD-CHIL implementation

The objective of this study in the context of ERIGrid was to establish the feasibility of conducting geographically separated CHIL experiments for power system dynamics studies. The results presented in [4] prove the capability of such setups to undertake real-time power system voltage and frequency secondary control studies.

4.5 Real-Time Geographically Distributed PHIL

An example of inter-laboratory coupling, where hardware resources from two laboratories have been utilized for mutual benefit to enable extended validation capability, is discussed in this sub-section. The utilized test setup is presented in Fig. 13 and summarised in Table 3.

Fig. 13
figure 13

Real-time geographically distributed power hardware-in-the-loop setup

Table 3 Summary of RT-GD-PHIL implementation

The objective is to validate a CVC algorithm. The test setup involves the real-time simulation of an LV distribution network within the DRTS at DPSL, where bus 11 of the network is represented by a lead-acid battery unit at RSE. The voltage magnitude and frequency at the point of common coupling (bus 11 in this case) are sent to RSE via JaNDER-L0 and reproduced within the RSE microgrid using the back-to-back network emulator. The active and reactive power measured in response are sent back to DPSL and injected at bus 11 of the DRTS model using controlled current sources. The CVC algorithm runs in a CHIL configuration: it receives the active and reactive powers of the individual buses of the network, processes these inputs to find a solution that mitigates any voltage issues identified, and determines the new setpoints for the reactive power injection/absorption of the PVs and the active and reactive power setpoints of the BESS.
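One exchange cycle of this distributed PHIL loop can be sketched as follows. A dictionary stands in for the JaNDER-L0 repository, and the battery response is a deliberately crude placeholder droop, not the behaviour of the real lead-acid unit; the key names and the gain are invented for illustration.

```python
# Sketch of one cycle of the geographically distributed PHIL exchange:
# the simulator side (DPSL) publishes voltage and frequency for bus 11,
# the remote laboratory (RSE) replies with the measured active and
# reactive power, which the simulator then injects via controlled
# current sources. Key names and the droop gain are illustrative.

def simulator_step(bus, v_pu, f_hz):
    """DPSL side: publish the bus-11 operating point for reproduction."""
    bus["DPSL:bus11.V_pu"] = v_pu
    bus["DPSL:bus11.f_Hz"] = f_hz

def remote_lab_step(bus):
    """RSE side: placeholder battery response proportional to voltage error."""
    v = bus["DPSL:bus11.V_pu"]
    bus["RSE:bus11.P_kW"] = 50.0 * (1.0 - v)   # crude illustrative droop
    bus["RSE:bus11.Q_kvar"] = 0.0

bus = {}
simulator_step(bus, 1.02, 50.0)
remote_lab_step(bus)
print(round(bus["RSE:bus11.P_kW"], 2))  # -> -1.0
```

In the actual experiment this exchange runs continuously, with the communication latency of JaNDER-L0 accounted for in the steady-state nature of the voltage control study.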

5 Conclusion

In this chapter, different kinds of RI coupling have been presented, with the goal of exploiting the synergies among RIs and making their resources available to external users while avoiding additional investment costs. In order to integrate different RIs, a suitable communication platform is necessary. The tool developed in ERIGrid, JaNDER, is able to establish real-time communication among several RIs. The low communication latency makes it possible to perform steady-state analyses of the extended system under test. Moreover, JaNDER also supports standards such as IEC 61850 and CIM. This allows every type of user (e.g. academia using custom protocols, or industry using standards) to access remote laboratories in a very simple way.

Some examples of the test cases performed in ERIGrid using JaNDER concern software/hardware integration across different RIs, a web service application and real-time geographically separated PHIL. All these tests were performed successfully and proved that JaNDER satisfies the communication requirements of each test case. The results obtained are very encouraging for future research activities on smart grid testing. In particular, new laboratory couplings could be developed in the future, integrating different kinds of resources and developing a tool to manage the integration in a simple way.

These advanced testing methods could be used to enable new use cases and to create a cooperative RI for smart grid system integration that allows validated solutions to be easily transferred into the “real world”.