Three points remain open in the definition of implementation and subtyping relationships:
how to define the ECA rules of a relationship,
how relationships deal with (1) receiving event occurrences/call notifications, (2) pattern matching of event occurrences and (3) instantiating and forwarding new event occurrences/call requests, namely the event management strategy, and
how call requests and notifications are linked to a given operational semantics implementation, namely the metalanguage integration strategy.
In this section, we first propose a possible strategy for event management (in Sect. 5.1), tackling points 1 and 2, and then detail one possible strategy for metalanguage integration (in Sect. 5.2), tackling point 3.
CEP-based event management
Implementation and subtyping relationships, as introduced in Sect. 4.2, require a concrete strategy to define and manage enclosed ECA rules. In this section, we present a strategy based on complex event processing (CEP) and more specifically on Esper’s event processing language (EPL). First, we mention the salient features of CEP that make it an interesting candidate for event management in our approach. Then, we introduce the event manager component acting as an ECA rule engine. Next, we detail how event occurrences are modeled in Esper, and finally, we explain the design process of relationships and their ECA rules.
Complex event processing
The goal of CEP is to identify meaningful events over streams of simpler events, using queries on both the data carried by the events and the temporal relationships between them. Essentially, CEP systems make it possible to perform temporal pattern matching over streams of events and to produce a new stream of potentially overlapping complex events as a result. In this respect, CEP is a paradigm particularly well suited to the definition of event abstraction hierarchies, which are central to subtyping relationships between behavioral interfaces.
Esper is an open-source Java-based system for CEP that provides a DSL for event processing called EPL. This DSL makes it possible to formulate queries, called EPL statements, which continuously analyze events within a stream to detect situations of interest and produce a new stream of events containing properties selected from the matching events. Java objects, called subscribers, can then subscribe to this new event stream to be notified each time an event is inserted into it.
As we defined the event part of the ECA rules of our relationships as temporal patterns over either a stream of event occurrences or a stream of call notifications, CEP is particularly well suited to the realization of relationships. Moreover, as Esper is Java-based and open-source, it integrates well with our existing model execution framework.
To streamline the integration of relationships into the architecture and avoid dependencies between behavioral interfaces, we define a component called the event manager. The event manager is implemented as an ECA rule engine configured by the active relationships. For each relationship, two streams are created: one for the environment-to-model direction and one for the model-to-environment direction. Depending on both the nature of its containing relationship (implementation or subtyping) and its direction (environment-to-model or model-to-environment), a stream contains either event occurrences or call notifications. Streams carrying event occurrences only accept occurrences of events from the corresponding behavioral interface, based on the direction of the stream and on the nature of its containing relationship. The temporal patterns constituting the event part of accept and expose rules are registered to their corresponding stream, as defined by their relationship. The condition and action methods constituting the condition and action parts of the rules of the relationships are then hooked on their corresponding temporal pattern. Figure 10 illustrates the event manager, to which an implementation and a subtyping relationship have been registered.
At runtime, the event manager is responsible for dispatching event occurrences between relationships (that is, between their event streams). The event manager dispatches an event occurrence for translation to the event stream of a given relationship based on (1) the behavioral interfaces referenced by the relationship, (2) their supertype or subtype role in the relationship (for subtyping relationships only) and (3) the accepted or exposed nature of the event occurrences. Note that, if several registered relationships qualify for a given event occurrence, this occurrence is dispatched to each of them.
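The dispatching behavior can be sketched in plain Java. All names here (EventManagerSketch, Relationship, receivableEvents) are assumptions of ours for illustration; in the actual implementation, the set of receivable events is derived from the referenced behavioral interfaces, their roles, and the nature of the relationship.

```java
import java.util.*;

// Illustrative sketch of the event manager dispatching an incoming event
// occurrence to every registered relationship that qualifies for it.
class EventManagerSketch {
    // A relationship qualifies for an occurrence if the event belongs to the
    // set of events it can receive (e.g., accepted events of its supertype
    // interface and exposed events of its subtype interface, for subtyping).
    static final class Relationship {
        final String name;
        final Set<String> receivableEvents;
        final List<String> received = new ArrayList<>();
        Relationship(String name, Set<String> receivableEvents) {
            this.name = name;
            this.receivableEvents = receivableEvents;
        }
    }

    final List<Relationship> relationships = new ArrayList<>();

    // Dispatch to *each* qualifying relationship, not just the first one.
    void dispatch(String eventName) {
        for (Relationship r : relationships)
            if (r.receivableEvents.contains(eventName))
                r.received.add(eventName);
    }
}
```

Following the running example, an occurrence of activate would reach both the subtyping and the implementation relationship, while button_pressed would reach only the latter.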
For instance, the subtyping relationship between ActivatableInterface (the supertype) and ArduinoInterface (the subtype) shown in the upper part of Fig. 10 can receive occurrences of the activate and led_level_changed events, because activate is an accepted event from the supertype interface of the relationship, and led_level_changed is an exposed event from the subtype interface of the relationship. However, this relationship cannot receive occurrences of the activated event, as this is an exposed event from the supertype interface of the relationship: Such event occurrences can be emitted by the relationship but never received, as the relationship does not know how to handle occurrences of this event. For the same reason, this relationship cannot receive occurrences of the run event.
Additionally, the event manager is tasked with communicating with the other entities in the system. One such entity is the operational semantics of the xDSL, to which the event manager sends call requests and from which it receives call notifications. Precisely how this is handled is discussed in Sect. 5.2. Other possible entities are external tools sending accepted event occurrences to the event manager and/or being notified by the event manager of exposed event occurrences.
Note that, as designed, this event manager is not specific to CEP-based relationships and can accommodate any technology that allows relationships to offer the following two required services: (1) receiving event occurrences and (2) notifying of event occurrences (e.g., using runtime monitors). In fact, an envisioned approach to defining relationships is to propose a dedicated, declarative event mapping language letting the language engineer define when and under what condition an event or sequence thereof should be mapped to another event or sequence thereof.
Modeling event occurrences in Esper
To use Esper, we need to map our event occurrences to event representations that can be processed by Esper. A range of possibilities is available, from Plain Old Java Objects (POJOs) to Maps to XML documents. We opted for modeling our events as POJOs, as we do not require the flexibility of Maps, and our implementation is exclusively Java-based, making XML both cumbersome and unnecessary. More specifically, we defined a wrapper class for EventOccurrence objects. This wrapper class declares two methods that are considered as event properties by the Esper runtime. The first is the getEvent method, which returns the event of the occurrence. The second is the getArgs method, which takes an event parameter name as parameter and returns the value associated with that parameter. This allows Esper to access the different arguments of an event occurrence as a mapped property, by supplying the parameter name of the argument. For instance, the expression args(’someName’) returns the value provided for the event parameter named ’someName’.
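A minimal sketch of such a wrapper is shown below. The class and field names are assumptions of ours (the real wrapper holds an EventOccurrence model object rather than a plain string); what matters is the shape of the two accessors: getEvent is seen by Esper as a simple property named event, and getArgs as a mapped property named args.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative wrapper exposing an event occurrence to the Esper runtime.
// Esper derives the "event" property from getEvent() and the mapped
// property "args('name')" from getArgs(String).
class EventOccurrenceWrapper {
    private final String eventName;               // stands in for the Event object
    private final Map<String, Object> arguments;  // parameter name -> supplied value

    EventOccurrenceWrapper(String eventName, Map<String, Object> arguments) {
        this.eventName = eventName;
        this.arguments = new HashMap<>(arguments);
    }

    // Simple property: usable in EPL as "event"
    public String getEvent() { return eventName; }

    // Mapped property: usable in EPL as "args('someName')"
    public Object getArgs(String parameterName) {
        return arguments.get(parameterName);
    }

    public static void main(String[] args) {
        EventOccurrenceWrapper occ =
            new EventOccurrenceWrapper("activate", Map.of("id", "button1"));
        System.out.println(occ.getEvent() + " " + occ.getArgs("id")); // prints "activate button1"
    }
}
```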
As, in our proposed strategy, call notifications are inserted into the event stream and manipulated by the Esper runtime, we also need an Esper representation for them. Since call notifications are issued by the integration facade, which in our case is Java-based, the simplest solution for the proposed architecture is to model these call notifications as POJOs, as we do for event occurrences. Such POJOs point to the execution rule at the origin of the call notification, to a map associating each parameter of the execution rule with the value supplied in that particular call, and, for notifications of completed calls, to the value returned by the call.
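A possible shape for such a POJO is sketched below. The class name, the Kind enumeration, and the use of a string to identify the execution rule are assumptions of ours; the actual implementation points to the execution rule object itself.

```java
import java.util.Map;

// Illustrative POJO for a call notification: it identifies the execution
// rule at the origin of the notification, the supplied arguments, and, for
// completed calls, the returned value.
class CallNotification {
    public enum Kind { STARTED, COMPLETED }

    private final String executionRule;         // e.g., "SetLed.execute"
    private final Map<String, Object> arguments;
    private final Kind kind;
    private final Object returnValue;           // null for STARTED notifications

    CallNotification(String executionRule, Map<String, Object> arguments,
                     Kind kind, Object returnValue) {
        this.executionRule = executionRule;
        this.arguments = arguments;
        this.kind = kind;
        this.returnValue = returnValue;
    }

    public String getExecutionRule() { return executionRule; }
    public Object getArgs(String parameterName) { return arguments.get(parameterName); }
    public Kind getKind() { return kind; }
    public Object getReturnValue() { return returnValue; }
}
```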
With event occurrences and call notifications made Esper-compatible, we can now look into the design process of implementation and subtyping relationships and their ECA rules, based on Esper and Java. We will then present a concrete example of the application of this process to our Arduino DSL running example.
Design process ECA rules are defined in the following manner. The event part of an ECA rule is defined using an EPL statement querying the stream corresponding to the nature of the rule (i.e., accept or expose). This makes it possible to leverage the power of CEP to capture complex, potentially overlapping patterns of event occurrences. The condition and action parts of a rule are written as Java methods to be called by the event manager when a complex event is detected by the EPL statement defined as the event part. The condition method takes the complex event detected by the EPL statement as parameter and returns a Boolean value indicating whether the action method should be called or not. To be able to enforce domain-specific constraints, the condition method has access to the running model in addition to the triggering complex event to compute its result value. Similarly, the action method also takes as parameter the complex event that was detected by the EPL statement. The action method of accept rules returns either an array of event occurrences (for subtyping relationships) or an array of call requests (for implementation relationships), while the action method of expose rules always returns a single event occurrence. Access to the running model allows the action method to configure newly instantiated event occurrences (e.g., supplying event occurrences with parameters from the model).
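The contract that condition and action methods follow can be sketched as a Java interface. The method names evaluateCondition and execute come from the description above; the surrounding types (a map standing in for the detected complex event, an Object for the running model) and the example rule body are assumptions of ours.

```java
import java.util.Map;

// Sketch of the subscriber contract an ECA rule could follow. The event part
// (an EPL statement) lives elsewhere; when it matches, the event manager
// calls evaluateCondition and, if it returns true, execute.
interface EcaRuleSubscriber {
    // Condition part: may query the running model in addition to the
    // complex event detected by the EPL statement.
    boolean evaluateCondition(Map<String, Object> complexEvent, Object runningModel);

    // Action part: returns the new event occurrences (subtyping relationship)
    // or call requests (implementation relationship) to be emitted.
    Object[] execute(Map<String, Object> complexEvent, Object runningModel);
}

// Hypothetical rule converting a button_pressed occurrence into a call
// request for PushButton.press, with no domain-specific condition.
class OnButtonPressedSketch implements EcaRuleSubscriber {
    public boolean evaluateCondition(Map<String, Object> complexEvent, Object runningModel) {
        return true; // no condition specified for this rule
    }
    public Object[] execute(Map<String, Object> complexEvent, Object runningModel) {
        return new Object[] { "call PushButton.press(" + complexEvent.get("id") + ")" };
    }
}
```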
Concrete example Figure 10 illustrates this design strategy by showing a more in-depth view of Fig. 9, which provides an overview of implementation and subtyping relationships between ActivatableInterface and ArduinoInterface. First, it highlights the fact that each relationship holds two event streams: a stream associated to accept ECA rules (next to labels 1 and 2) and another associated to expose ECA rules (next to labels 3 and 4).
Then, on the upper-left part of the figure (labeled 1), the OnActivateButton accept rule of the subtyping relationship between ActivatableInterface and ArduinoInterface is detailed. The event stream observed by this rule contains ActivatableInterface accepted event occurrences. The event part of the OnActivateButton rule is an EPL statement that notifies its subscribers (i.e., the registered rules) whenever an occurrence of an event named activate is inserted on the event stream. When this happens, the subscribers receive a notification that carries the id parameter value, selected by the EPL statement through the args(’id’) expression. In the example, there are two subscribers, one of which (OnActivateSketch) is not shown. The other subscriber is the OnActivateButton rule. When notified, the evaluateCondition method, whose implementation is required of subscribers, is called. This method checks that the condition of the rule is satisfied. In the example, the implementation of this condition method performs a query on the running model, using the value provided by the complex event pattern of the event part of the rule. This is achieved using a utility method findElement which finds an element of the provided class with the provided name (here buttonId) in the provided model (here the running model). Then, if the condition is satisfied, the subscriber performs the action of the rule by calling its execute method, which translates the triggering event occurrence into two new event occurrences. This is effectively done by instantiating the new occurrences (using dedicated utility methods in our example) and returning them in an array to be inserted in the correct event stream (in this case, the stream of ArduinoInterface event occurrences).
On the lower-left part of the figure (labeled 2), the design of the accept rules of the ArduinoInterface implementation relationship is detailed. It is very similar to that of the subtyping relationship, the event stream containing ArduinoInterface accepted event occurrences instead of ActivatableInterface ones. The accept rules observing this event stream instantiate and return call requests for specific execution rules. The OnButtonPressed rule shown in the figure detects occurrences of the button_pressed event and converts them directly (as no condition is specified) into call requests for the PushButton.press execution rule of the operational semantics.
Then, on the lower-right part of the figure (labeled 3) is detailed the design of the expose rule of the implementation relationship. The event stream associated with this rule contains the call notifications issued by the operational semantics. The OnSetLed rule detects call notifications for the SetLed.execute execution rule on the event stream and converts them into occurrences of the led_level_changed event, to which it supplies the referred led and its new level.
Finally, on the upper-right part of the figure (labeled 4) is detailed the OnLEDOffOn expose rule of the subtyping relationship. This rule observes an event stream containing ArduinoInterface exposed event occurrences. The EPL statement constituting the event part of the rule specifies that it is triggered whenever a succession of two led_level_changed event occurrences with alternating level parameter values but identical led parameter values is observed in a sliding window of two events. The action part of the rule translates the triggering complex event into an occurrence of the activated event with the id of the LED as a parameter value.
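The temporal pattern behind OnLEDOffOn can be re-expressed without Esper, which makes its semantics concrete: within a sliding window of two led_level_changed occurrences, identical led values with differing level values trigger an activated occurrence. The event and parameter names follow the running example; the plain-Java encoding below is an illustration of ours, not the EPL statement itself.

```java
// Sketch of the OnLEDOffOn pattern over a sliding window of two events:
// two consecutive led_level_changed occurrences with the same 'led' but
// alternating 'level' values produce an 'activated' occurrence.
class OnLedOffOnSketch {
    record LedLevelChanged(String led, int level) {}

    private LedLevelChanged previous; // the one-event history of the window

    // Returns the id of the activated LED, or null if the pattern did not match.
    String onLedLevelChanged(LedLevelChanged current) {
        String activatedId = null;
        if (previous != null
                && previous.led().equals(current.led())
                && previous.level() != current.level())
            activatedId = current.led();
        previous = current; // slide the window
        return activatedId;
    }
}
```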
Metalanguage integration

In addition to providing a unified way to define the accepted and exposed events for any xDSL, our approach aims to be agnostic of the metalanguage used to define the operational semantics of an xDSL. This means that the behavioral interface language and the design of the event manager and relationships must work for any xDSL, regardless of the metalanguage used to define its operational semantics.
To achieve this, an integration facade for the event manager must be defined. This facade is tasked with translating call requests into actual behavior and behavior into call notifications, thereby bridging the gap between the event manager (and the implementation relationships therein) and the operational semantics.
In this section, we propose such an integration facade to enable the approach for xDSLs whose operational semantics is defined using an object-oriented metalanguage such as Java and orchestrated by an execution engine. Note that the proposed integration facade is intended for sequential model execution; adapting the approach to concurrent model execution only requires defining an appropriate integration facade. First, we present what must be provided by this execution engine, which is considered as a prerequisite for the proposed facade. Next, we detail the inner workings of the integration facade. Finally, we show how this facade is interfaced with the aforementioned execution engine.
The proposed approach considers that a preexisting execution engine applies the operational semantics of the considered xDSL on the running model. Such an execution engine must be able to notify external components when it starts or stops and when it applies execution rules that alter the model state. More precisely, the engine only sends notifications for execution rules annotated as stepping rules, i.e., execution rules producing an observable execution step when applied. In the case of the Arduino DSL presented in Sect. 2, all the presented execution rules are stepping rules.
The state of the model is considered observable and alterable at the time notifications are made and handled; hence, the possible observable states reached during an execution depend heavily on the granularity of the stepping rules declared in a semantics. This notification mechanism can be used, among other things, to attach interactive debuggers and trace constructors to the execution. We explain later how we leverage this notification mechanism to enable exposed events and run-to-completion call requests. The design of such an execution engine is described in more detail in our previous work [5, 6] and can be summarized as the following operations:
start performs the following actions:
load the considered xDSL;
load the model to be executed;
register the execution observers;
prepare the initial model state;
set the running attribute of the engine to true;
notify registered observers that it is starting.
stop sets the running attribute to false and notifies execution observers that the engine is stopping.
callExecutionRule starts the application of a specific execution rule of the operational semantics. If it is a stepping rule, the engine notifies observers at the beginning and at the end of the execution of the rule. Note that depending on the metalanguage, an execution rule may trigger the nested execution of other stepping rules, in which case observers are also notified when the nested execution of these stepping rules begins or ends. For instance, in the Arduino model shown in Fig. 2, calls to SetLed.execute will be nested within calls to If.execute, which will in turn be nested within calls to Sketch.run. Note that no distinction is made between the notifications from nested and non-nested rule calls.
registerObserver registers a component as an observer that gets notified when the execution of a stepping rule begins or ends and when the engine starts or stops. When an observer gets registered, an associated priority policy needs to be supplied as well. Such a policy provides, for each kind of notification, the priority at which the observer must be notified. This operation is called by the execution engine during the initialization phase to register a predefined set of execution observers, retrieved from a configuration file for instance, but it can also be called at any time.
The specification of this component is by design as generic as possible to be able to cover a wide range of metalanguages. As such, it provides an abstraction over the multiple execution engines dedicated to various metalanguages available in the GEMOC Studio (see Sect. 5.3). The implementation of this component is, however, heavily dependent on the metalanguage used to implement the operational semantics, especially regarding the procedure to dynamically call an arbitrary execution rule (e.g., using java.lang.reflect.Method.invoke if the semantics is implemented in Java).
Using these operations, a user (e.g., a modeler, a tool) is able to execute a model by starting the engine and then demanding the execution of one or several execution rules of the semantics (e.g., a run method responsible for the complete execution). In the following subsections, we explain how the integration facade can also use these operations for managing call requests and notifications.
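The engine contract described above can be condensed into a minimal sketch. All names below are assumptions of ours; the real engines additionally load the xDSL and the model, handle priority policies, and dynamically resolve execution rules (e.g., via java.lang.reflect.Method.invoke for Java-based semantics).

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Minimal sketch of the execution-engine contract: start/stop, observer
// registration, and callExecutionRule notifying observers around stepping rules.
class EngineSketch {
    interface Observer {
        void notifyStart();
        void notifyStop();
        void beforeStep(String rule);
        void afterStep(String rule);
    }

    // Records every notification it receives, in order.
    static class Recorder implements Observer {
        final List<String> log = new ArrayList<>();
        public void notifyStart() { log.add("start"); }
        public void notifyStop() { log.add("stop"); }
        public void beforeStep(String rule) { log.add("before " + rule); }
        public void afterStep(String rule) { log.add("after " + rule); }
    }

    private final List<Observer> observers = new ArrayList<>();
    private final Set<String> steppingRules;
    boolean running = false;

    EngineSketch(Set<String> steppingRules) { this.steppingRules = steppingRules; }

    void registerObserver(Observer o) { observers.add(o); }

    void start() {                                  // loading/preparation elided
        running = true;
        observers.forEach(Observer::notifyStart);
    }

    void stop() {
        running = false;
        observers.forEach(Observer::notifyStop);
    }

    // Applies an execution rule; only stepping rules trigger notifications.
    // The body may nest further callExecutionRule calls, whose notifications
    // are indistinguishable from non-nested ones.
    void callExecutionRule(String rule, Runnable body) {
        boolean stepping = steppingRules.contains(rule);
        if (stepping) observers.forEach(o -> o.beforeStep(rule));
        body.run();
        if (stepping) observers.forEach(o -> o.afterStep(rule));
    }
}
```

With this sketch, a nested call such as SetLed.execute inside Sketch.run yields the interleaved before/after notifications described above.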
Overview of the metalanguage integration facade
To bridge the gap between implementation relationships and the execution engine, we define an integration facade concentrating on the following two activities: (1) waiting for execution rule call requests from implementation relationships and performing the requested calls and (2) issuing execution rule call notifications to implementation relationships.
To be able to perform these activities, the integration facade has two requirements that need to be fulfilled. First, it needs a mechanism to wait for execution rule call requests to arrive. To that effect, our approach relies on a blocking queue to store the call requests received from implementation relationships. Call requests can be retrieved from the queue using the poll and the take operations, which behave differently when the queue is empty: take suspends the execution and waits for an element to be available, while poll simply returns null. Second, the integration facade needs to be able to call execution rules defined as part of the operational semantics of an xDSL. This task is delegated to the execution engine and its callExecutionRule operation.
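The difference between the two retrieval operations can be demonstrated with the standard java.util.concurrent.BlockingQueue: poll returns null on an empty queue, whereas take blocks until an element becomes available (here we only call take once an element is queued, to keep the example deterministic).

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Demonstration of poll vs. take on the call request queue.
class CallRequestQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> requests = new LinkedBlockingQueue<>();

        // Non-blocking retrieval on an empty queue:
        System.out.println(requests.poll());  // prints "null"

        requests.put("PushButton.press");
        // Blocking retrieval; returns immediately since an element is queued:
        System.out.println(requests.take());  // prints "PushButton.press"
    }
}
```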
With these requirements fulfilled, an execution with the proposed integration facade unfolds as follows:
The integration facade is notified of the start of the execution by the execution engine. It enters its execution rule call scheduling loop: The execution is repeatedly suspended when the call request queue is empty and resumed when call requests are queued.
Implementation relationships send call requests to the integration facade, which are added to the call request queue.
The engine informs the integration facade when it is safe to process the queued call requests, i.e., when starting or ending stepping rule calls. In such cases, the integration facade first checks whether a run-to-completion call request is currently being executed. If that is the case, the call request queue is left untouched. Otherwise, the queued call requests are sequentially delegated to the execution engine.
The integration facade is notified that a stepping rule call is about to start or has ended and forwards this notification to implementation relationships.
Metalanguage integration facade operations
We hereby present how the integration facade achieves these different tasks through a set of operations.
startListening and stopListening. These internal operations are used to start and stop the call request handling loop. Algorithm 1 shows startListening. As long as the execution engine is running, the first call request of the call request queue is retrieved (lines 1–2 and 4 of Algorithm 1). When the take operation is called on the queue, the execution is suspended if the queue is empty—which only happens if no execution rule is currently executing—and resumes as soon as a request is added. Finally, the call request is processed using the processCallRequest operation (line 3 of Algorithm 1). The stopListening operation consists of inserting an instance of a special Stop call request into the call request queue (a mechanism known as a poison pill), thereby stopping the call request handling loop.
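The loop of Algorithm 1 and the poison-pill mechanism can be sketched as follows. The types are simplified assumptions of ours (call requests as strings); the structure, blocking take plus a special Stop request, follows the description above.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of the startListening loop: take call requests from the blocking
// queue, process them, and exit when the Stop request (the poison pill)
// queued by stopListening is encountered.
class ListeningLoopSketch {
    static final String STOP = "Stop";  // the poison pill
    final BlockingQueue<String> callRequests = new LinkedBlockingQueue<>();
    final List<String> processed = new ArrayList<>();

    void startListening() {
        try {
            while (true) {
                // Blocks while the queue is empty, i.e., while no execution
                // rule call is requested.
                String request = callRequests.take();
                if (request.equals(STOP)) break;
                processCallRequest(request);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    void stopListening() { callRequests.add(STOP); }

    private void processCallRequest(String request) { processed.add(request); }
}
```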
queueCallRequest. This operation is called by the event manager to insert a request to call the provided execution rule with the provided arguments into the call request queue. Note that, at the start of the execution, no actual execution takes place until a first call request is queued. For instance, in the case of the Arduino DSL, the execution only starts once a run event occurrence is received: This event occurrence enqueues, through a call to the queueCallRequest operation, a request for a call to Sketch.run on the sketch parameter of that event occurrence. This call request is then processed, which starts the execution.
manageCallRequests. This internal operation is similar to startListening, except that it does not suspend the execution when the queue of call requests is empty. It is called when the integration facade is notified that the running model is in a consistent state and, thus, that pending call requests can be safely handled. As explained previously, this is the case before and after the execution of stepping rules. Algorithm 2 shows the behavior of this operation. When it is called, the integration facade first checks that the currently executed call request did not ask for run-to-completion behavior. For this, the call request on top of the stack is inspected (line 1) and two conditions are checked: whether it should not be treated as run-to-completion and whether its associated execution rule is different from the stepping rule that triggered the notification (line 2). The first condition prevents the processing of call requests while a run-to-completion call request is being handled. The second condition prevents the processing of additional call requests before the processing of the current one gets to start, which would otherwise happen when the rule associated with the current call request is a stepping rule. If both conditions allow it, the non-blocking poll operation is used to iterate over all call requests in the queue and process them using the processCallRequest operation (lines 3–7), exiting the loop if the engine stops or if the Stop call request is encountered. Otherwise, the call request queue is left untouched, to be processed at a later time, and the operation returns immediately.
processCallRequest. This internal operation, detailed in Algorithm 3, is used to process a single execution rule call request. First, the call request is pushed on a call stack (line 1). Then, the execution rule to call is retrieved from the call request (line 2), and the call is delegated to the execution engine (line 3). Once this call returns, the call request is popped from the call stack (line 4). This call stack keeps track of the call requests that are currently being handled and is used to enforce the potential run-to-completion nature of call requests by preventing the handling of other call requests while a run-to-completion one is being executed.
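The interplay of Algorithms 2 and 3 can be condensed into a single sketch. All names and the simplified types are assumptions of ours; the engine delegation is reduced to recording the executed rule, and the loop-exit conditions on engine stop and the Stop request are elided for brevity.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Sketch of manageCallRequests (Algorithm 2) and processCallRequest
// (Algorithm 3): the queue is only drained when the request on top of the
// call stack neither asked for run-to-completion behavior nor matches the
// stepping rule that triggered the notification.
class CallRequestProcessorSketch {
    record CallRequest(String rule, boolean runToCompletion) {}

    final Deque<CallRequest> callStack = new ArrayDeque<>();
    final Deque<CallRequest> queue = new ArrayDeque<>();
    final List<String> executed = new ArrayList<>();

    // Called when the model is in a consistent state, i.e., before and after
    // the execution of a stepping rule.
    void manageCallRequests(String notifyingSteppingRule) {
        CallRequest current = callStack.peek();
        if (current != null
                && (current.runToCompletion()
                    || current.rule().equals(notifyingSteppingRule)))
            return;                            // leave the queue untouched for now
        CallRequest next;
        while ((next = queue.poll()) != null)  // poll never blocks
            processCallRequest(next);
    }

    // Algorithm 3: the call stack tracks in-progress requests so that
    // run-to-completion ones can suppress interleaved processing.
    void processCallRequest(CallRequest request) {
        callStack.push(request);
        executed.add(request.rule());  // stands in for engine.callExecutionRule
        callStack.pop();
    }
}
```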
Integration with the execution engine
During its initialization phase, the execution engine instantiates and registers the integration facade as an observer from a configuration file. In the following, we detail how the integration facade reacts to the different notifications sent by the execution engine, combining the presented operations to achieve proper event handling.
notifyStart: The call request handling loop is started, using the startListening operation.
beforeStep: The manageCallRequests operation is called to process the call request queue, given that the call request currently under execution (if any) is not a run-to-completion call request.
afterStep: Call notifications are forwarded to implementation relationships, which decide if they should result in an exposed event occurrence. The facade then behaves as for beforeStep notifications.
notifyStop: The stopListening operation is called to halt the call request handling loop.
In the event that all execution rule calls issued from the startListening operation terminate without a notifyStop notification being received, the call request handling loop suspends the execution, waiting for either a Stop request or a call request to be queued and thus instantly processed. Note that, when the integration facade is registered as an observer of the execution, an accompanying priority policy is supplied, specifying that it receives notifyStart and afterStep notifications last, but receives beforeStep notifications first. This allows the facade to work with other potential execution observers. For instance, a trace constructor needs to receive beforeStep notifications after the integration facade: Otherwise, it would record the start of an execution step when in fact another step could be triggered, given that there is a pending call request in the queue of the integration facade.
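The effect of a per-notification-kind priority policy can be sketched as follows. All names are assumptions of ours; the point is that an observer can be first for one notification kind and last for another, which is how the facade coexists with a trace constructor.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;

// Sketch of priority policies: each observer supplies, for each notification
// kind, a priority; observers with a lower value are notified first.
class PriorityPolicySketch {
    enum Kind { NOTIFY_START, BEFORE_STEP, AFTER_STEP, NOTIFY_STOP }

    record RegisteredObserver(String name, Map<Kind, Integer> priorities) {}

    final List<RegisteredObserver> observers = new ArrayList<>();

    // Returns the names of the observers in the order they would be notified
    // for the given notification kind.
    List<String> notificationOrder(Kind kind) {
        return observers.stream()
                .sorted(Comparator.comparingInt(
                        (RegisteredObserver o) -> o.priorities().get(kind)))
                .map(RegisteredObserver::name)
                .toList();
    }
}
```

With the policies described above, the facade comes first for beforeStep (so pending call requests are processed before a step is recorded) and last for afterStep.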
We implemented our approach as part of the GEMOC Studio, a language and modeling workbench atop the Eclipse platform. The metamodel of the behavioral interface language is defined using Ecore, and the event manager is written in both Java and Xtend. The source code is available on GitHub.
The language workbench of the GEMOC Studio offers multiple metaprogramming approaches to define the operational semantics of a DSL (e.g., Java/Kermeta, xMOF, or Henshin), as well as one execution engine for each approach. Our implementation of the event manager is agnostic to the kind of execution engine that is used, in accordance with Sect. 5, and comes with a metalanguage integration facade for the Java/Kermeta-based execution engine.
In order to make use of the approach, a reflective event injection GUI was designed, allowing users to send event occurrences to, and receive event occurrences from, running models, and complementing the existing generic omniscient debugger for xDSLs defined in the language workbench. We detail this tool in the next section.
Language engineering scenarios
In this subsection, we describe several language engineering scenarios and the role played by our approach in their realization.
Specific tooling development Language engineers can leverage the definition of behavioral interfaces to develop tooling that is specific to one or more interfaces, such as a domain-specific graphical view of the event occurrences sent to and received from the model. Such tooling can then be used with any model conforming to a DSL implementing the supported interfaces.
Generic tooling development Language engineers can leverage the reflective access provided by the behavioral interface metalanguage to develop generic interaction-centric tooling. At runtime, from the definition of the xDSL to which the running model conforms, such tools can retrieve the list of behavioral interfaces implemented (directly or transitively) by the DSL. Then, from these behavioral interfaces, generic tools can discover the events whose occurrences can be accepted or exposed by the running model, along with their parameters. Language engineers can then implement generic tooling revolving around events and their occurrences.
Tooling for multi-model interaction Going further than the previous scenario, language engineers can define tools that handle interaction with concurrently running models conforming to different DSLs. Both broadcasting event occurrences to eligible models and sending event occurrences to a single model can be supported. Conversely, language engineers can define tools to receive event occurrences from one or all eligible running models. This is very close to generic tooling development: The main difference is that instead of gathering the implemented behavioral interfaces from one running model, the tool lists the implemented behavioral interfaces from all running models. From there, the language engineer has all the information required to implement the desired event sending behavior. Alternatively, this capability can also be implemented for interface-specific tools, for a predefined set of behavioral interfaces.
Model coordination This scenario requires a complementary approach such as B-COoL to actually coordinate models through occurrences of events from their respective behavioral interfaces. With such a complementary approach, modelers are able to leverage the behavioral interfaces defined for the involved xDSLs to define how a specific set of models conforming to these DSLs are coordinated.