Abstract
We introduce the concept of LabLinking: a technology-based interconnection of experimental laboratories across institutions, disciplines, cultures, languages, and time zones - in other words, human studies and experiments without borders. In particular, we introduce a theoretical framework of LabLinking that describes multiple dimensions of conceptual, organizational, and technical aspects. The framework defines LabLinking levels (LLL), which describe the degree of tightness of the empirical interconnection between labs. In several examples, we describe the technological infrastructure in terms of hardware and software required for the respective LLLs and share insights about the challenges and benefits. These examples comprise the interconnection of multiple labs to collect synchronized biosignals (including from an MRI scanner) for a decision-making study, a human-robot interaction study investigating attention-adaptive communication behavior, and an experiment on LabLinking through Virtual Reality in a virtual commerce setting, aimed at an increased feeling of immersion. In sum, we argue that LabLinking provides a unique platform for a continuous exchange between scientists and experimenters, enabling the time-synchronous execution of experiments performed with and by decentralized users and researchers. It thereby makes experimental designs feasible that would not be possible without LabLinking, such as the investigation of high-resolution neural signals during everyday activities, which we realized by interconnecting a control participant in an fMRI scanner with an execution participant in a kitchen environment.
Article Highlights
- Provides a theoretical understanding, a structured framework, and practical solutions for the novel concept LabLinking.
- Describes the technical, organizational, and conceptual dimensions of the framework along with LabLinking levels that define the tightness of integration.
- Discusses real-world instantiations of LabLinking with concrete implementations and pointers to our open source tools.
- Combines conceptual and practical perspectives which will enable readers to design and implement their own distributed LabLinking studies.
1 Introduction
The study of human behavior, associated mental and physical activities, and decision making often requires laboratory experiments. They typically include continuous recordings of participants' speech, motion, muscle, eye, and brain activity, and other processes, which results in a variety of high-dimensional biosignals [1, 2]. Due to the range and complexity of signals, setups, devices, and research questions, researchers often specialize their laboratories in studying certain aspects of human behavior and behavioral responses [3,4,5], in capturing a particular set of biosignals [6, 7], or both. While specialization is a prerequisite for mastering a task, it can reduce options in the sense that researchers are bound to the available hardware and software. This limits researchers to certain modalities and biosignals, to particular investigation and analytical approaches, to predetermined physical environments, to a set number of concurrently recordable participants, and to the available skills, features, and language of the participants. In addition, the establishment of new laboratories is often prevented by a lack of technical infrastructure and the high costs of equipment and maintenance.
With LabLinking, we propose a new methodological and experimental paradigm to overcome these limitations. LabLinking aims to advance methods to study human behavior and behavioral responses by integrating resources, practices, and expertise of distributed laboratories in joint experimental designs. We will show in several examples and use cases that LabLinking can improve the scalability, realism, and validity of experimental designs and facilitate innovative task-sensor combinations. LabLinking also enables the shared and distributed use of rare and expensive equipment, as well as the teaching, education, and integration of specific practices and knowledge accessible only to certain labs or experts. In this paper, we start by describing three exemplary studies in Sect. 2 whose experimental design and methodology showcase the LabLinking paradigm. Next, we leverage our lessons learned from these and other studies to conceptualize LabLinking in terms of different dimensions and levels of integration in Sect. 3. Finally, in Sect. 4, we revisit the three studies and apply the developed technological, conceptual, and organizational dimensions and levels of integration to demonstrate in detail how LabLinking projects are realized in practice.
1.1 Motivation
LabLinking is not a completely new idea but is inspired by groundbreaking inventions. The most prominent role model is the Internet, short for the global system of interconnected computer networks, which began as the ARPANET, envisioned by Licklider to facilitate the time-sharing of computing resources among American universities and research institutes. In combination with Sir Berners-Lee's invention of the World Wide Web, and a range of information resources, services, and applications, the Internet became the catalyst for the digital revolution. Licklider's original motivation was that he saw such a network as a crucial prerequisite for realizing his vision of a "man-computer symbiosis", which he expected to evolve as a "cooperative interaction between men and electronic computers" [8].
Our motivation for developing LabLinking is to make experimental studies of human behavior and decision making more sustainable, more reproducible, and more flexible. We believe that this can be achieved by interconnecting laboratories into an "Interlab" and by providing "LabLinking technologies", i.e., methods, tools, services, and applications which establish the technology-based interconnection of experimental laboratories. This belief is grounded in the observation that individual labs typically incorporate new methods into their empirical research by acquiring components they are often unfamiliar with, and which often become obsolete again once the specific experiments are completed. Sustainability improves when software, data, sensors, and devices are reused or shared, or when experimental studies are carried out simultaneously in several local labs, which saves time and energy by avoiding long-distance travel. When working with expensive and potentially dangerous devices such as robots, remote operation and interaction might benefit both sustainability and safety. Furthermore, robots are especially expensive and specialized, i.e., sharing them in joint experiments opens up many research opportunities and chances for generalization. Reproducibility improves by sharing protocols, establishing common standards, and negotiating joint experimental designs across groups and disciplines, by running experiments in simulations based on digital twins, and by complementing face-to-face interaction with hybrid interaction in virtual spaces. Flexibility and affordability of human-computer interaction experiments can be improved by leveraging real, augmented, and virtual realities.
Beyond research studies, we have experienced in recent years that LabLinking perfectly complements traditional concepts in education and training. It enables us to teach young scientists remotely and without borders in times when face-to-face meetings are not possible. We are currently applying LabLinking in a distributed research training group and have implemented virtual lab rotations. Thereby, the students experience several laboratories with different sensors, experimental setups, and scientific cultures and thus receive broader and deeper insights into experimental paradigms than a single laboratory could offer. While we advocate LabLinking as a tool that allows each participating lab to concentrate on its respective strengths, the education of a new generation of researchers should increase their exposure to many different methods to give an impression of available tools and their applicability in empirical research. Only through this can researchers make informed choices about potential collaborations. We are well aware that LabLinking may have additional requirements compared to traditional approaches, e.g., in terms of technical demands such as Internet bandwidth or organizational demands like communication overhead. We therefore define LabLinking levels that describe the tightness of lab integration and classify them along technical, conceptual, and organizational challenges. With this, we provide a structured overview of LabLinking advantages and describe the corresponding requirements of all LabLinking levels in the three dimensions, which range from very low to high.
1.2 Related work
The LabLinking concept, as outlined above, draws inspiration from related work. For example, a development in recent years has been the distribution of standardized, well-documented experiments across multiple labs to ensure the reproducibility of the reported results. The work by Lucking et al. [9] is an example from the robotics domain, while the EEGManyLabs initiative by Pavlov et al. [10] is a similar endeavor from the neuroscience perspective. This approach shares similarities with LabLinking in that it conducts experiments in multiple labs and requires a level of formalized documentation to enable replication of experiments. A key difference is that, for reproducibility efforts, each lab needs to be able to perform the experiment independently and asynchronously. A similar role is played in the research landscape by multicentric studies (e.g., [11]), which also focus on the harmonization of experiment protocols, but with the goal of creating one unified data set. An alternative approach, which considers real-time interaction between robots and EEG setups at different sites, is teleoperation. Several examples [12,13,14] show that EEG-based Brain-Computer Interfaces can be used to control robots across a distance, with real-time transmission of the detected control signals. In contrast to LabLinking, the focus in teleoperation is on establishing a tight control loop between EEG user and robot, while LabLinking supports a wide range of other scenarios (e.g., verbal HRI), basic research (e.g., analyzing biosignals), and biosignal-adaptive systems, where technical systems adapt to human signals [15]. Steed et al. [16] report lessons learnt from distributed and remote Virtual Reality experiments, stressing the importance of clearly defined experiment protocols and pointing out key technical challenges. They focus mostly on single-participant experiments that can be run at home or at distant sites outside the lab.
The cited literature shows that in recent years, the potential of human experiments has been explored in several studies, e.g., in the context of neural experiments and VR applications. However, available solutions focus on particular areas of application and do not address the general challenge of spatially distributed human experiments:
- Approaches to reproducibility of studies and multi-centric data collection through common protocols and experiment documentation.
- Studies and technical prototypes for remote robot operation, often with a focus on brain-based user interfaces.
- Theoretical and practical, VR-specific frameworks and best practices for remote studies in VR.
The following list summarizes what we consider research gaps in the current state-of-the-art in terms of LabLinking. None of the related work mentioned above provides an approach that covers all of these aspects at the same time. In this work, our research objective was to create a framework which closes this gap and provides the listed functionality simultaneously.
- A concept which covers technological bandwidth beyond individual, application-specific solutions.
- Support for immersive, real-time capable, linked experiments which collect multimodal, synchronized data streams.
- A theoretical framework which allows flexibility to describe and conceptualize different types of studies in diverse constellations.
- The discussion of multiple practical examples to identify the conceptual, organizational, and technical challenges of concrete LabLinking studies and how these can be addressed.
We think that all of them are vital in combination to advance the field of multi-location studies: A generalizable framework is important so that conceptual and technical advancements can be translated between experiments without reinventing the wheel every time. Supporting multiple modalities in real-time unlocks the full potential of many LabLinking experiments, as it allows combining sensors as well as enabling interaction between participants. A theoretical framework makes it possible to formulate hypotheses and communicate unambiguously. Finally, practical examples are important to translate abstract concepts into concrete technical and organizational steps towards making LabLinking a reality.
In the next section, we will introduce LabLinking more concretely with three example studies, starting with example scenarios before presenting a more formal framework that defines the approach. In these examples, the reader will encounter technical (e.g., regarding data stream synchronization), organizational (e.g., regarding researcher responsibilities), and conceptual (e.g., regarding different participant roles) dimensions of LabLinking experiments. These three aspects will be the basis of the LabLinking theory.
2 Exemplary LabLinking studies
The studies briefly described in this section are taken from ongoing LabLinking collaborations of the Cognitive Systems Lab (CSL) at the University of Bremen that are currently being carried out with five different labs at four German universities funded by three projects. The aim of these studies is to investigate the advantages in terms of experimental setups that result from LabLinking and to identify the challenges of the method and technology. We first explore this with labs in the immediate vicinity and comparable technical infrastructure, then with labs from different disciplines and study cultures, before we move into less well-explored territory in the near future. The resulting software and tools are continuously improved with the collaborating labs and shared with the community.
2.1 Study 1: neural correlates of human decision-making in everyday activities
Study 1 aims to analyze neural correlates of human decision-making in everyday activities, for example setting the table for breakfast. Studying neural signals in detail is often done using functional magnetic resonance imaging (fMRI), which requires subjects to lie motionless in a scanner, prohibiting them from performing everyday activities. LabLinking offers a way to satisfy these two conflicting requirements. Namely, we leverage the paradigm of motor imagery [17]: one participant, the executor, performs an everyday activity while a second participant, lying in the scanner, observes it from the first-person perspective and imagines doing the activity herself. This second participant, the controller, can make decisions on requested actions or activities that then get carried out by the executor. Technologically, this is made possible by streaming a real-time video of the point of view of the executor to the fMRI scanner, where the controller can communicate her decisions with simple button clicks. Figure 1 shows the corresponding LabLinking setup for the experiments, which are conducted within the Collaborative Research Center EASE in collaboration between the CSL, the Department of Neuropsychology and Behavioral Neurobiology, and the Institute for Artificial Intelligence at the University of Bremen [18].
2.2 Study 2: the role of attention in human-robot interaction
Study 2 investigates the effect of distraction and hesitation as a scaffolding strategy in human-robot interaction, building on the communicative function of hesitation in human-human interaction (see [19] for study details and results). To elucidate the underlying cognitive processes, we set up a human-robot interaction experiment in which the neural activities of a human participant interacting with a social robot are recorded. This setup requires specific expertise and equipment: on the one hand, expertise in social aspects of human-robot communication along with the availability of a social robot such as Pepper or iCub; on the other hand, expertise in processing behavioral signals and the availability of state-of-the-art devices to record neural signals, in combination with a video and audio setup capable of creating a realistic acoustic scenery to synchronously capture communication behavior. While the Biosignals Lab @ CSL provides the latter, we found the former in the Center for Cognitive Interaction Technology (CITEC) at the University of Bielefeld, about 200 km away. The LabLinking paradigm enabled us to conduct experiments without moving equipment or scientists to a common location. Rather, the participant's behavioral signals (EEG, speech, video) are captured in Bremen while he or she interacts with Pepper in Bielefeld. For this purpose, we established reliable high-quality video and audio streams from Pepper in Bielefeld that were presented to the participant in Bremen using a projector and a surround speaker system. Simultaneously, video and audio streams from Bremen were transmitted to Bielefeld such that the researchers maintaining and controlling the robot could monitor the actions of the participant in Bremen in real-time. Furthermore, all these streams needed to be recorded in a synchronized fashion.
That is, for each recorded video frame, EEG signal, or utterance of the robot, we needed to obtain reliable timestamps such that all of the recorded data could be combined in a single coherent timeline. Finally, to better coordinate within the teams in Bremen and Bielefeld, we added a communication software tool to the LabLinking framework. Figure 2 shows the corresponding LabLinking setup for the experiments which are conducted in collaboration with the Center for Cognitive Interaction Technology (CITEC) at the University of Bielefeld [19].
2.3 Study 3: customer-salesperson interaction in virtual reality
Study 3 targets sales pitches in virtual reality (VR), a realistic future scenario. In particular, we want to better understand how a salesperson can best approach a potential customer in a VR showroom. For the customer, we use sensors for various biosignals such as eye movement, heart rate, and skin conductance to monitor cognitive load and better understand the customer's current interaction with the product. For the salesperson, we use a motion capture system to transfer leg, arm, and body movements to his/her avatar in VR, to support a maximally expressive sales pitch. Given the research [20,21,22] which shows the impact of realistic embodiment in VR on behavior and immersion, realistically rendering the salesperson's full body, rather than just hand and face gestures, provides a much more realistic interaction with the customer and is therefore an important aspect of the experimental design. This study is realized within the research training group KD2School in collaboration between the CSL and the Decisions in Immersive Systems Lab at the University of Gießen, who are experts in using eye tracking as a tool to improve e-business applications.
Conducting such sales pitches in VR allows any potential customer who has access to a VR headset with eye-tracking capabilities and basic biosignal sensors to interact with a salesperson. Establishing this setup in LabLinking could thus not only increase the scalability of experiments, but also enable the inclusion of participants from all over the world. Aside from the broader reach of customer-salesperson interaction, such studies could benefit from better coverage of cultural diversity, increasing generalizability beyond the populations that are easily accessible in a lab's vicinity.
3 The LabLinking paradigm
The LabLinking examples described above illustrate some challenges of interconnecting laboratories. All three studies require the researcher to maintain various data streams during the experiment and to keep reliable timestamps that facilitate the synchronized recording of data streams. These aspects pose mainly technological challenges that can be tackled rather independently from the experimental design (although different experiments might bring different requirements for the technical platform, for example in terms of necessary communication latency or precision of synchronization). In Study 1, the contribution from a single participant was split into two distinct roles, the executor and the controller, to obtain a higher level of realism and validity in an experiment that involves monitoring a participant's brain activity in an fMRI scanner. This aspect pertains to the conceptual part of the study. The examples also highlighted that the number and diversity of participants and collaborators may vary substantially, which concerns the organizational aspect of experimental study designs. Thus, based on our current experience with LabLinking studies and projects as described above, we discriminate three dimensions of challenges when interconnecting laboratories: the technological dimension, the conceptual dimension, and the organizational dimension. In addition, we introduce LabLinking levels that describe the tightness of lab integration and classify them along these dimensions, thus providing a structured overview of the requirements and advantages of LabLinking.
3.1 Technological Dimension
To set up a LabLinking experiment, different technical functions and components need to be provided for a seamless integration. The specific solutions for an experiment depend on the technical ecosystems available at the different labs. However, certain basic recurring concepts exist that will be encountered in most LabLinking setups. The following list attempts to name and define these concepts.
1. TD-1 Synchronization: A temporal synchronization between different signals and events is required to enable the distributed recording of multimodal signals in a way that allows a concerted evaluation. Section 4 outlines several possibilities to realize temporal synchronization. Independently of the chosen method, a synchronization of local clocks of the involved machines is an important step to ensure seamless communication during an experiment. Using redundant ways of synchronization increases the reliability and enables internal validation of the different approaches.
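As a concrete illustration of clock comparison between two lab machines, the following Python sketch computes clock offset and round-trip delay from a single request-reply exchange, following the classic NTP-style calculation. The function and variable names are our own; a production setup (such as NTP itself) would filter many such exchanges.

```python
def estimate_offset(t0: float, t1: float, t2: float, t3: float) -> tuple:
    """NTP-style clock comparison between a local and a remote machine.

    t0: request sent (local clock)    t1: request received (remote clock)
    t2: reply sent (remote clock)     t3: reply received (local clock)

    Returns (offset, delay): `offset` is how far the remote clock runs
    ahead of the local one, `delay` is the network round-trip time.
    """
    offset = ((t1 - t0) + (t2 - t3)) / 2.0
    delay = (t3 - t0) - (t2 - t1)
    return offset, delay
```

Averaging the offset over repeated exchanges with low delay yields a stable mapping between the labs' clocks, which can then serve as one of several redundant synchronization methods.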
2. TD-2 Streaming: Many LabLinking experiments require real-time, low-latency communication between the involved sites, for example for transmission of video data or 3D meshes in a virtual environment. There exist specialized protocols for different types of data (e.g., video streaming protocols or multiplayer frameworks in game engines which are typically used for Virtual/Augmented Reality environments), but also generic streaming services, such as Apache Kafka, which can be used for different types of data.
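Independent of the concrete protocol, most streaming solutions share the same application-level pattern: samples are buffered into sequence-numbered, timestamped packets that some transport (a Kafka topic, a UDP socket, a game-engine channel) carries to the receiver. The sketch below illustrates this pattern; all class and stream names are our own and not part of any existing LabLinking software.

```python
import time
from dataclasses import dataclass

@dataclass
class SamplePacket:
    stream_id: str    # e.g. "bremen/eeg" (hypothetical stream name)
    seq: int          # sequence number, lets the receiver detect losses
    timestamp: float  # local clock time of the first sample in the chunk
    samples: list     # raw sample values

class StreamOutlet:
    """Push-style outlet that chunks samples into transport-ready packets."""
    def __init__(self, stream_id, send, chunk_size=32):
        self.stream_id, self.send, self.chunk_size = stream_id, send, chunk_size
        self.seq, self.buffer, self.t_first = 0, [], None

    def push(self, sample, timestamp=None):
        if not self.buffer:  # remember when the current chunk started
            self.t_first = time.monotonic() if timestamp is None else timestamp
        self.buffer.append(sample)
        if len(self.buffer) >= self.chunk_size:
            self.send(SamplePacket(self.stream_id, self.seq,
                                   self.t_first, self.buffer))
            self.seq, self.buffer = self.seq + 1, []
```

Because each packet carries a local timestamp, the receiver can map it onto the common timeline using the clock offset established during synchronization.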
3. TD-3 Message Passing / Remote Procedure Calls: To control the experiment flow (e.g., triggering different stages of the experiment), most LabLinking experiments need a message passing method which allows sending customized or parametrized messages to individual recipients or to a larger group via a subscriber-based communication model. An advanced version of message passing implements remote procedure calls, which are directly associated with executable procedures at the receiving client that are triggered by the corresponding messages.
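The following in-process sketch clarifies the two mechanisms, subscriber-based message passing and remote procedure calls, in a few lines. In a real LabLinking setup the dispatch would of course cross the network (e.g., via a broker protocol such as MQTT); all names here are illustrative.

```python
from collections import defaultdict

class MessageBus:
    """Minimal subscriber-based message passing with RPC (TD-3 sketch)."""
    def __init__(self):
        self._subscribers = defaultdict(list)
        self._procedures = {}

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver the message to every subscriber of the topic.
        for callback in self._subscribers[topic]:
            callback(message)

    def register(self, name, procedure):
        # Remote procedure call: a message is directly associated with
        # executable code at the receiving client.
        self._procedures[name] = procedure

    def call(self, name, **params):
        return self._procedures[name](**params)
```

For example, an experiment runner could publish a message on a stage-control topic to trigger the next phase at all sites simultaneously.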
4. TD-4 Logging: The goal of most experiments is to collect data for further analysis. Therefore, it is important to persist all data, including raw streams, as well as events, messages, and timing information. Logging of information can be done in a distributed or centralized way (depending on bandwidth restrictions). Centralized logging keeps data on a single machine already during recording, which reduces the need for additional consolidation afterwards. However, a drawback is the need for sufficient bandwidth and the introduction of a single point of failure (i.e., a corrupted centralized storage will invalidate a complete recording). In terms of storage, data can be kept in a universal format for all data (e.g., h5) or in specialized formats for individual types of data. Furthermore, one can decide between file-based storage and the use of a dedicated database. Regardless of the format, the logging process should be performed through a logging middleware which ensures consistent data structures, allows setting the level of logging detail and reduces programming overhead.
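A minimal version of such a logging middleware might look as follows; the record fields, level names, and JSON-lines sink are our own choices for illustration, not a prescribed format.

```python
import json
import time

class ExperimentLogger:
    """Logging middleware sketch (TD-4): all components log through one
    interface, which enforces a consistent record schema and a
    configurable level of logging detail."""
    LEVELS = {"debug": 10, "info": 20, "event": 30}

    def __init__(self, sink, level="info", clock=time.time):
        self.sink = sink              # any file-like object
        self.min_level = self.LEVELS[level]
        self.clock = clock            # injectable for testing/sync

    def log(self, level, source, payload):
        if self.LEVELS[level] < self.min_level:
            return  # below the configured detail level
        record = {"t": self.clock(), "level": level,
                  "source": source, "payload": payload}
        self.sink.write(json.dumps(record) + "\n")
```

The injectable clock makes it straightforward to stamp records with the synchronized experiment time rather than the raw local time.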
5. TD-5 Discovery: When initializing a LabLinking experiment, many different components need to be interconnected. To avoid hard-coded connections which are susceptible to error due to changes of the network setup, it is advisable to set up a discovery mechanism through which components can communicate their presence and the data services they provide. Discovery can either work through network broadcasts by which components make themselves known to others, or through a dedicated discovery server, which becomes important when the network topology does not permit broadcasts.
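The dedicated-server variant can be reduced to a simple registry: components announce their address and offered services, and peers query by service type rather than hard-coding hosts. The sketch below is illustrative; component names and addresses are hypothetical.

```python
class DiscoveryRegistry:
    """Dedicated discovery server sketch (TD-5): components announce
    themselves and the data services they provide; others query by
    service type instead of hard-coding network addresses."""
    def __init__(self):
        self._components = {}

    def announce(self, name, address, services):
        self._components[name] = {"address": address,
                                  "services": set(services)}

    def find(self, service):
        # Return the addresses of all components offering the service.
        return {name: info["address"]
                for name, info in self._components.items()
                if service in info["services"]}
```

In a broadcast-based setup, the same announce/find logic would run on every node, fed by periodic presence messages instead of explicit registration calls.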
6. TD-6 Diagnostics: LabLinking experiments have specific requirements on network throughput, latency, and stability. As the experiments are complex and expensive to set up, it is important to test these requirements before the start of a study, and even before the start of each session, as conditions can change. A diagnostic component can consist of a generic measurement of the transfer rate, but can also comprise more specific measurements of roundtrip times between specific components, for example to ensure that participants feel that their actions trigger immediate responses on the other side.
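A basic round-trip diagnostic can be implemented on top of whatever transport the experiment uses. The sketch below assumes a `ping` callable that sends a probe to the remote site and blocks until the echo arrives; it reports summary statistics that can be checked against the experiment's latency budget before each session.

```python
import statistics
import time

def measure_roundtrip(ping, n=20):
    """Round-trip diagnostic (TD-6 sketch): times `n` probe/echo
    exchanges and returns summary statistics in milliseconds."""
    times_ms = []
    for _ in range(n):
        t0 = time.perf_counter()
        ping()  # blocks until the echo from the remote site arrives
        times_ms.append((time.perf_counter() - t0) * 1000.0)
    return {"mean_ms": statistics.mean(times_ms),
            "max_ms": max(times_ms),
            "jitter_ms": statistics.pstdev(times_ms)}
```

Running the same probe between each pair of critical components (e.g., video source and display) localizes bottlenecks more precisely than a generic bandwidth test.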
7. TD-7 Monitoring: A complex, distributed system like a LabLinking setup consists of many different components which may fail at some point during the experiment, for example a mobile device running low on power, a sensor losing contact, or a computer program crashing. A LabLinking setup should therefore include an easy-to-use panel to monitor the availability of all necessary components. For data streams, the monitoring can consist of live plotting of captured samples, which also allows the operators to inspect the data during the experiment. For other components, monitoring can make use of the message passing mechanism for regular status messages.
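For components that are not continuous data streams, availability can be tracked with simple heartbeats sent over the message passing channel, as the following sketch illustrates (component names and the timeout value are illustrative).

```python
class HeartbeatMonitor:
    """Availability monitoring sketch (TD-7): components report regular
    status messages; any component whose last heartbeat is older than
    `timeout` seconds is flagged for the operator panel."""
    def __init__(self, timeout=5.0):
        self.timeout = timeout
        self.last_seen = {}

    def heartbeat(self, component, now):
        # Called whenever a status message from `component` arrives.
        self.last_seen[component] = now

    def stale(self, now):
        # Components that have gone quiet for longer than the timeout.
        return sorted(c for c, t in self.last_seen.items()
                      if now - t > self.timeout)
```

A monitoring panel would poll `stale()` periodically and highlight flagged components so operators can intervene before data is lost.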
8. TD-8 Interfacing: In most cases, a LabLinking experiment is not designed from scratch, but builds on existing experimental ecosystems that are being used at the different sites. The LabLinking-specific components need to interface with these existing tools. The most important categories of software to integrate are sensor drivers and experiment runners (such as PsychoPy or E-Prime). The interfacing usually requires the development of custom bridges between the different software ecosystems and technical platforms. It is therefore important that the existing software provides programmable interfaces through which it can be connected to the LabLinking environment.
9. TD-9 Meta communication: Setting up and running a LabLinking experiment requires flexible communication not only between participants, but also between experiment operators. To enable this, we can rely on existing video conferencing software to open a communication channel during the experiment, which is independent of the actual LabLinking connections and should run on independent hardware. To support on-the-fly debugging, remote administration software is also recommended.
10. TD-10 Transmission security: When setting up LabLinking experiments over the global network, data security becomes a major concern. On the one hand, personal data of participants should be passed through protected connections. On the other hand, security mechanisms might impede connections (i.e., the necessary communication ports need to be opened and applications need to be whitelisted in the respective operating systems). In this context, it is important to understand that not all functions need to be served by the same communication protocols. On the contrary, contradicting requirements (e.g., low latency for streaming vs. delivery guarantees for message passing) and the implications of existing software will often enforce the use of two or more ways of communication. In that case, it is important to establish the required coherence between the different ways, for example by keeping synchronization information (which uses one communication protocol) aligned with the data streams (which might use a different one).
3.2 Conceptual Dimension
Conducting a study within the LabLinking paradigm can have significant ramifications for the experimental design. In our experience, the two aspects of an experimental design that are most affected by LabLinking are the realism and validity of the design itself, and the role of the participants.
1. CD-1 Realism and validity: LabLinking projects often involve the usage of audio or video streams to create an immersive experience for the participants. In an extreme case, LabLinking experiments can also be carried out completely in VR, making them almost fully independent of the geographical location of the participating labs. When conducting a LabLinking experiment, researchers have to be careful that the experimental design is still valid given that the experience of the participants is potentially limited by the technical solutions. For instance, in Study 2, the investigated human-robot interaction is limited to a situation where the human and the robot remain in fixed positions on opposite sides of a table. For such a setup, transmitting real-time audio and video streams can be sufficient to create an experience that closely resembles a situation where the robot would be in the same room with the participant. However, the findings from Study 2 would not necessarily extend to situations where the human and the robot move around in the same room as part of their interaction. Furthermore, additional steps can be important to make the LabLinking setup transparent to the participants. Consider an economic group experiment where spatially separated participants make real-time trades in the same artificial market. From the perspective of a single participant, the trading decisions of other participants during the experiment could have been previously recorded or faked by the researchers. In this case, it would be important to provide each participant with a real sense that the other market participants are actual human beings. This could, for instance, be achieved by setting up a brief video call prior to the actual experiment.
2. CD-2 Participant roles: In Study 1, we showed how the roles of participants in a LabLinking study can significantly differ from participant roles in normal experiments that are conducted at a single site. Clearly defining those roles is important as it determines which kinds of experimental paradigms are possible, what data modalities can be acquired, and what kinds of requirements exist for the LabLinking infrastructure. Study 1 is an example where one of the participants is not making independent decisions but executes another participant's decision, i.e., the roles of both participants are not symmetric and the recorded data should be regarded as a single, fused information stream. In other LabLinking experiments, however, two or more participants might act autonomously and there could exist multidirectional communication channels, instead of a unidirectional one. Fully exploring the space of potential combinations of participant roles requires a systematic characterization of those roles. We first formulate three dimensions that will help us to differentiate the diverse roles of participants in LabLinking experiments:
-
(a)
Agency considers if participants have the ability to make their own decisions and have them influence the course of the experiment. A participant without agency can only observe the scene or execute actions decided by others.
-
(b)
Linkage considers if the participant has the ability to influence/be influenced by another participant or whether they are acting independently. Linkage can be implemented in different ways, e.g., by allowing communication or real-time observation.
-
(c)
Placement considers whether the participant is situated inside the experiment scene (i.e., is localized at a specific point and orientation within the scene, determining the perceptual input and action possibilities) or takes part from an outside perspective (i.e., is able to perceive the whole scene at once or can switch perspectives deliberately).
As illustrated in Fig. 3, different role characteristics can be combined into certain role prototypes. The depicted tree goes from the relation of the participant to themselves (agency), to the relation to other participants (linkage), and finally to their relation to the environment (placement). We propose the following list of role prototypes for participant roles in LabLinking studies:
-
(a)
Reacting experimenter: Such a participant is observing another participant. S/he acts upon his/her own scene which is not influenced by the observed participant. For example, a child (reacting experimenter) learns to set the dinner table by watching and imitating an adult in a remote setting.
-
(b)
Parallel experimenter: Here, participants act independently of each other without the means of coordinating with each other. For example, two participants perform the same task in different labs and data is annotated according to a joint annotation scheme.
-
(c)
Interactor: In this role, the participant influences the experiment for others directly or indirectly. The interaction itself does not need to be symmetrical, i.e., the selection of actions, the times at which an action is possible, etc. can vary between participants. For example, the interactor sends recommendations to another participant through a microphone and earpiece.
-
(d)
Controller: Such a participant can be considered to be in a “control room”, influencing the scene, e.g., through triggered events, without being part of it and without being restricted to the perception capabilities of an agent in the scene. For example, the controller presses buttons to highlight task actions which another participant is instructed to execute.
-
(e)
Mirror: Here, the passive participant is asked to imagine being in the shoes of the active participant. This is the role configuration which was employed in the offline EASE data collection, see Sect. 4.3.2. For example, a mirror participant is watching a video feed from the point-of-view camera of another participant and imagines performing the same task.
-
(f)
Observer: In this configuration, the participant is still passive, but is asked to observe, judge, remember, or otherwise process the actions of the active participant in the other laboratory. For example, an observer watches other participants performing a task and judges their actions.
-
(g)
Executor: A participant in this role executes actions within the scene, but has no agency over them, e.g., because they are determined and communicated by another participant. For example, the other participant in the “Controller” example has the role of Executor.
-
(h)
State Feedback: A participant in this role unconsciously influences the experiment scene she observes, as it reacts to her measured (cognitive or affective) state. For example, a participant’s engagement level is measured while she observes another participant executing a task. If engagement falls below a threshold, the task speed is increased.
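The three role dimensions and the prototypes above can also be encoded as a small data structure, e.g., for annotating participants in a recorded dataset. A minimal sketch (the class and helper names are our own illustration, not part of a published toolkit, and the dimension assignments reflect one possible reading of the taxonomy):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Role:
    """Participant role along the three LabLinking role dimensions."""
    agency: bool   # makes own decisions that influence the experiment
    linkage: bool  # influences / is influenced by another participant
    inside: bool   # situated inside the experiment scene (placement)

# A few of the prototypes mapped onto the dimensions (our reading of the tree).
PROTOTYPES = {
    "parallel_experimenter": Role(agency=True,  linkage=False, inside=True),
    "interactor":            Role(agency=True,  linkage=True,  inside=True),
    "controller":            Role(agency=True,  linkage=True,  inside=False),
    "mirror":                Role(agency=False, linkage=True,  inside=False),
    "executor":              Role(agency=False, linkage=True,  inside=True),
}

def compatible_paradigms(role: Role) -> list:
    """Toy helper: which coarse paradigm classes a role admits."""
    if role.agency and role.linkage:
        return ["interaction", "joint decision making"]
    if role.linkage:
        return ["observation", "execution of remote decisions"]
    return ["independent parallel task"]
```

Such an explicit encoding makes it easy to check, per experiment, which data modalities and communication channels each participant role requires.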
3.3 Organizational Dimension
From conceiving the original research question and conducting the experiment to condensing results and observations into a research paper, cross-laboratory research poses a range of specific organizational challenges that need to be addressed. In a LabLinking project, these challenges are typically related to the large number of collaborators as well as their diverse academic backgrounds and professional roles.
-
1.
OD-1 Quantity of collaborators: The number of collaborators in a project grows quickly with the involvement of different labs. This often complicates seemingly simple tasks like organizing a project meeting or keeping everyone up-to-date with the progress of the project. Due to the generally high employee turnover in academia, this also means that there is a high likelihood of collaborators departing the project before its completion. It is therefore imperative to maintain a workflow and a level of documentation that facilitates a frictionless on-boarding of new collaborators.
-
2.
Diversity of collaborators: Collaborators usually come from different academic backgrounds and fulfill different roles in their respective labs. To ensure the success of a LabLinking project, adequately addressing the typically high level of diversity is a crucial factor.
-
(a)
OD-2 Interdisciplinarity: Due to the involvement of multiple labs, LabLinking projects are often of an interdisciplinary nature. Interdisciplinarity is a trait that sounds good on paper but also poses significant challenges that can result in a long and tedious research process or the complete failure of a project. The key difficulty when working in an interdisciplinary environment is that in all fields of research, there exist large amounts of knowledge, practices and habits that are so well-known to every insider that they are usually never communicated explicitly. For researchers from other fields, however, much of this trade knowledge is typically unknown or underestimated. For example, most people outside of psychology will be oblivious to the difficulty of creating and maintaining a functioning study participant pool, while people outside of computer science often completely underestimate the significance of conference publications in this field. Unfortunately, there is no simple solution for addressing this issue. In any case, there should be established formats such that all collaborators can explicitly, openly, and repeatedly communicate their individual perspectives regarding the very basics of the research project (“What is our research question and why is it important?”, “What is the context of this project?”, “Which methods are we using and why?”, “How can we publish our results?”, “Which funds and equipment do we need?”, ...) throughout the complete project cycle.
-
(b)
OD-3 Professional roles: Within the labs that participate in a LabLinking project, there usually exist different professional roles that may or may not be structured in a hierarchical fashion. Some of the most common roles are professor, postdoctoral researcher, doctoral candidate and student assistant. Depending on their designated roles, the collaborators also assume different responsibilities within the LabLinking project. In a single lab, there typically exist established organizational patterns for the execution of a research project which also define the involvement of collaborators with different professional roles. It is important to understand that in a LabLinking project these patterns exist in parallel, and it is often neither clear nor trivial to create a frictionless organizational workflow that harmonizes the established processes from different labs. For instance, it might be common practice in one lab to integrate bachelor or master students in research projects as equal collaborators, while this could be very unusual for another lab. In particular for group leaders and professors, it is highly important to have a clear and common understanding of the main research goals, the applied methods, the strategic orientation as well as the necessary funding and equipment.
-
(c)
OD-4 Responsibilities: Depending on their respective role and expertise, collaborators can have a wide variety of responsibilities within a LabLinking project. Examples include the development of the experimental design, the communication with potential study participants, the development of data streaming and synchronization solutions, or analyzing the recorded data. The fact that these responsibilities are of a highly diverse nature and are typically spread across different labs and professional roles makes it a challenging but important task to carefully monitor and organize the overall progress of the project. In this respect, it is imperative to maintain a continuous means of oversight and communication with regard to everyone involved in the project. In larger LabLinking consortia which conduct established experiment paradigms on a large scale, there might also be a requirement for a formal LabLinking governance structure that allocates resources, distributes responsibilities, and specifies formal communication paths. In the experiments which we conducted so far, a flat and flexible structure gave us the opportunity to respond to unexpected challenges quickly.
3.4 Levels of LabLinking
Along the technological, conceptual and organizational dimensions fleshed out above, labs can be linked on different levels of integration tightness. We use the notion of LabLinking-Level (LLL) to describe the respective degree of integration for a specific project. With increasing LLL, the linking quality improves with respect to conceptual richness and quality of participants’ interaction, task sharing, complexity of decisions to be taken, as well as time synchronicity. Consequently, the requirements on the linking technology will grow – as will the possibilities for synergies and benefits for the collaborating labs. In the following, we define five LLLs in terms of their respective characteristics and requirements for the three dimensions of integration.
- \({\textbf {LLL-1}}\):
-
Coordinated studies: Labs agree to collaborate by exchanging experience, best practices, and lessons learnt, by discussing protocols, and by using compatible equipment. They establish common terminology, data formats, annotation schemes, and documentation for the purpose of sharing (common or complementary) data and experimental paradigms. Typically, LLL-1 connected labs carry out joint projects with the same scenario but separate experiments. Requirements: Willingness of partners to adjust their procedures to common protocols. Benefits: Leveraging complementary knowledge and insights, learning from each other and exchanging experience often results in synergies, which make the whole more than the sum of its parts.
- \({\textbf {LLL-2}}\):
-
Asynchronous data coupling: Beyond coordination, researchers from LLL-2 connected labs study the same experiment from different perspectives and with different equipment (e.g., different modalities, sensors). While data are recorded separately, for example when the same stimuli are presented to participants of two participating labs, the data are synchronized post hoc. Consequently, data are treated as asynchronous but parallel recordings and analyzed accordingly. Requirements: Hardware and software to perform synchronization of data streams, using for example LabStreamingLayerFootnote 1 or hardware synchronization via light sensors or serial buses. Benefits: Details and insights from complementary data, modalities, and spatially distributed perspectives that show two sides of the same coin.
- \({\textbf {LLL-3}}\):
-
Synchronous bi-directional data coupling: Level LLL-3 allows moving from asynchronous to synchronous experiment designs, which combine one or more recordings within the same spatially distributed setup. A low-latency “real-time” synchronization via internet connection allows interaction between the involved participants and labs. Study 1 and Study 2 are examples of LLL-3 experiments. Requirements: Stable, high-throughput network connection, ability for synchronized and coordinated execution of experiments. Benefit: Allows performing interaction scenarios between labs (e.g., involving decision making or consensus finding).
- \({\textbf {LLL-4}}\):
-
Immersive synchronous interaction and collaboration: Level LLL-4 increases the degree of immersion and the interaction bandwidth between participants to create the impression of a co-located experiment, despite spatial separation. This is achieved through Virtual and Augmented Reality as well as multimodal tracking and interpretation of the participants’ behavior and state. Study 3 is an example for LLL-4. Requirements: Complex experiment setup for tracking and measuring participants as well as for creating the virtual laboratory environment. Benefits: Spatially distributed real-time experiments feel similar to co-located experiments and yield valid results.
- \({\textbf {LLL-5}}{\hbox { (and beyond)}}\):
-
LabLinking based on future emerging technologies: Higher LLLs are expected in the future as the tightness level of LabLinking will likely benefit from future emerging technologies, for example through distributed physical interaction which feels almost real. Thus, spatial distance will become less noticeable and boundaries between labs will start to blur. Such developments will pave the way for large-scale LabLinking between numerous labs across the world.
4 LabLinking in practice
In this section, we will discuss concrete LabLinking scenarios from different perspectives. First, we introduce the main involved labs in Sect. 4.1. Then, in Sect. 4.2, we revisit the introductory examples from Sect. 2 and describe concrete solutions to the technical, conceptual, and organizational challenges of the scenarios. Finally, in Sect. 4.3, we revisit and extend some of the example studies, but this time from the perspective of the different LabLinking-Levels (LLLs) and how associated challenges can be addressed.
4.1 Participating labs
In this first phase of LabLinking, we have successfully linked two participating labs (and are getting ready to link another two labs soon), where one lab focuses on AR/VR and another is experienced in the synchronized acquisition of data from large subject groups. We begin this section by introducing the two labs at the core of our long-term LabLinking strategy.
4.1.1 Biosignals Lab @ CSL
The Biosignals Lab at the Cognitive Systems Lab (CSL) consists of an interaction space (5x4m) which allows blending real and virtual reality interactions (see Fig. 4). The Biosignals Lab is fully equipped with a range of sensory devices to capture biosignals resulting from human behaviour like speech, motion, eye gaze, muscle and brain activities under both controlled and less restricted open-space conditions. The sensors, devices, and equipment available include Hololenses, stationary and head-mounted cameras, near- and far-field microphones for speech and audio event recording, a marker-based 9-camera OptiTrack motion capture system, wireless motion tracking based on PLUXFootnote 2 inertial sensors, electrodermal activity (EDA) sensors, mobile eye-tracking with Pupil LabsFootnote 3 headsets, muscle activity acquisition with stationary 256-channel and mobile 4-channel electromyography (EMG) devices from OT BioelettronicaFootnote 4 and PLUX, brain activity recording based on a BrainProductsFootnote 5 actiCHamp 64-channel electroencephalography (EEG), and mobile EEGs based on OpenBCIFootnote 6 and g.Tec’s g.Nautilus.Footnote 7 See [7] for more details on the Biosignals Lab, the hard- and software setup as well as the various devices. Furthermore, the Biosignals Lab comprises a large shielding cabin to record high-quality biosignals in clean and controlled conditions.
4.1.2 The NeuroImaging and EEG-Lab
The NeuroImaging and EEG-Lab is hosted by the Department of Neuropsychology and Behavioral Neurobiology and forms part of the Center of Advanced Imaging project (CAI). The CAI is equipped with a 3 Tesla Siemens MAGNETOM VidaFit®MRI scanner in a joint initiative with Fraunhofer MEVIS. The research focus lies on the neural correlates of executive control, conflict processing, and interference resolution in complex decision making, both in laboratory and semi-natural contexts [23]. Most investigations are performed using both fMRI (see Fig. 5) and EEG devices [24].
4.2 Implementing LabLinking studies
In this section, we present concrete solutions to the challenges in the three dimensions as introduced in Sect. 3 for the studies which we described as introductory examples. It will become clear that while we re-use multiple components and central concepts, we have to acknowledge the fact that different research fields (e.g., neuroscience vs. robotics) have different software and hardware ecosystems which have to be incorporated into a LabLinking infrastructure, as already discussed in Sect. 3. Furthermore, different types of experiments impose different requirements. Therefore, every study will be accompanied by a customized LabLinking solution.
4.2.1 Solutions for Study 1 (neural correlates of human decision-making in everyday activities)
For the technical realization of Study 1, we need to solve three main challenges: 1) low-latency transmission of video data with timestamps, 2) synchronization of the video streams with other data, such as fMRI and user control signals, and 3) orchestration of the different components at multiple sites. For 1), we use the vidgear packageFootnote 8 for robust and low-latency video streaming with a transmission of timestamps via TCP/IP. Regarding 2), timestamps are synchronized with the other modalities, including fMRI trigger signals and user input, via the LabStreamingLayer middleware.Footnote 9 The fMRI data itself is stored locally on the scanner computer due to the proprietary nature of the fMRI setup, but can at any point be aligned with the other data via the synchronized triggers. For 3), we created the Python-based LabCommander software (available at the repositoryFootnote 10), which acts as a middleware to enable network-based control across multiple computers for starting, pausing, stopping, and monitoring of data recording and experiment flow control (implementing Remote Procedure Calls on remote machines, addressing TD-3). LabCommander provides an easy Python interface to expose functions to other sites through remote procedure calls, a simple scoping mechanism, and dynamic auto-discovery of instances based on UDP broadcast, so that no fixed network setup needs to be provided (addressing TD-5).
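LabCommander itself is available in the linked repository; its core idea of exposing experiment-control functions across sites via remote procedure calls can be illustrated with Python's standard-library xmlrpc (a simplified stand-in with invented function names, without LabCommander's scoping and UDP auto-discovery):

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Site A exposes an experiment-control function (name is hypothetical).
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False, allow_none=True)
state = {"recording": False}

def start_recording():
    state["recording"] = True
    return "started"

server.register_function(start_recording)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Site B calls the remote function as if it were local.
port = server.server_address[1]
proxy = ServerProxy(f"http://127.0.0.1:{port}")
result = proxy.start_recording()  # -> "started"
server.shutdown()
```

In a real deployment, discovery of the server address would be automated (in LabCommander via UDP broadcast) rather than hard-coded.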
4.2.2 Solutions for Study 2 (the role of attention in human-robot interaction)
By using LabLinking, it is possible for the robot Pepper to remain at its usual location at Bielefeld University while interacting in real-time with a human participant at the BioSignals Lab at the University of Bremen. To realize this setup, we implemented a technical infrastructure with the following main capabilities: (1) Streaming of audio, video and other data for human-robot interaction, (2) Synchronized recording of multimodal data streams, and (3) Control of experiment flow in a multi-site experiment.
For the cross-site communication of events, we used the Robot Operating System (ROS) [25], which was also employed to control the robot Pepper at the Bielefeld lab. ROS provides a flexible messaging interface that allowed us to establish a consistent and robust data flow between multiple machines across the two sites. A graphical user interface at each lab allowed the respective experimenters to communicate the state of the experiment (e.g., whether a trial was completed successfully) to the other side. Furthermore, all events were logged in ROS bags (the universal format for ROS logging and playback) for later analysis of the temporal structure of the experiment (e.g., to identify trial beginnings). This way, the communication middleware immediately provides logging (TD-4) capabilities, which avoids tedious and error-prone manual implementation through low-level functionality. To address temporal synchronization, all involved machines at both sites were synchronized to the same Network Time Protocol (NTP) server to ensure a reliable alignment of timestamps. For reproducibility, the code for controlling the robot and parts of the LabLinking setup can be installed using a distribution in the cognitive interaction toolkit (CITK) [26].
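The clock alignment that NTP provides rests on a simple four-timestamp exchange; the offset and delay estimates can be sketched as follows (this is the standard NTP arithmetic, not our implementation):

```python
def ntp_offset(t0, t1, t2, t3):
    """Estimate the offset of the server clock relative to the client.

    t0: client send time, t1: server receive time,
    t2: server send time,  t3: client receive time.
    Assumes symmetric network delay in both directions.
    """
    return ((t1 - t0) + (t2 - t3)) / 2.0

def ntp_delay(t0, t1, t2, t3):
    """Round-trip network delay, excluding server processing time."""
    return (t3 - t0) - (t2 - t1)

# Toy example: server clock 0.5 s ahead, 0.1 s one-way delay each direction.
off = ntp_offset(10.0, 10.6, 10.6, 10.2)
```

Once all machines apply such an offset correction against a common server, timestamps logged at both sites become directly comparable.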
Streaming of video and audio data (TD-2) requires solutions which, in contrast to multimedia streaming protocols for conferencing or entertainment, focus on reliable, low-latency transmission without sample drops. For streaming video data, we used OpenCV-based plugins in ROS (video_stream_opencv, image_view) [27]. For streaming audio data, we used the GStreamer software [28], which supports highly configurable, low-latency streaming pipelines. A video stream capturing the visual scene from the robot in Bielefeld was streamed to Bremen, while video and audio of the participant and a video of the table were streamed back to Bielefeld for the operators of the robot. We furthermore implemented a custom GStreamer plugin to store accurate timestamps of the beginning and end of audio recordings. All video frames and other event data were assigned timestamps within the ROS framework. This allowed us to precisely align all collected data types and modalities during analysis. The EEG recorder used a different middleware (Lab Streaming Layer, see Sect. 4.2.1), for which we implemented a custom bridge component to convert the respective data packages into ROS messages (relating to the issue of interfacing between different software ecosystems, see TD-1). To ensure secure transmission (TD-10) of personal data and to simplify the discovery process (TD-5) from different machines, we connected all involved computers to one Virtual Private Network (VPN). A VPN secures the connection over the internet and also creates homogeneous and replicable computer naming conventions. For meta communication, a separate video conference session was maintained throughout the whole session. Furthermore, we decided to stream all data, even if it was meant to be stored and processed locally. This served as a way to improve joint attention, but also, through real-time visualization of the streamed signals, as a way of data monitoring (TD-7) to avoid missing or corrupted data.
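Conceptually, the bridge between Lab Streaming Layer and ROS repackages timestamped samples from one middleware into stamped messages of the other. A schematic stand-in (stub classes instead of the real pylsl/rospy APIs, which the actual bridge uses):

```python
from dataclasses import dataclass

@dataclass
class LSLSample:      # stand-in for a pylsl (sample, timestamp) pair
    channels: list
    timestamp: float  # seconds on the (NTP-aligned) common clock

@dataclass
class RosMessage:     # stand-in for a stamped ROS message
    stamp_secs: int
    stamp_nsecs: int
    data: list

def bridge(sample):
    """Convert an LSL sample into a ROS-style stamped message,
    splitting the float timestamp into seconds and nanoseconds."""
    secs = int(sample.timestamp)
    nsecs = int(round((sample.timestamp - secs) * 1e9))
    return RosMessage(stamp_secs=secs, stamp_nsecs=nsecs, data=sample.channels)

msg = bridge(LSLSample(channels=[1.0, 2.0], timestamp=1700000000.25))
```

The essential point is that both middlewares carry timestamps on the same corrected clock, so the conversion is purely a change of representation.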
With regards to diagnostics (TD-6) of meta-parameters of the network connection, we mostly relied on out-of-the-box tools, such as ping. In the future, a dedicated diagnostics tool with appropriate visualization and warning markers could further improve the usability and effectiveness.
4.2.3 Solutions for Study 3 (customer-salesperson interaction in virtual reality)
For Study 3, which takes place in Virtual Reality, we benefit from the fact that game engines, which are the technical framework on which Virtual Reality applications are developed, already provide mechanisms for low-latency, high-throughput network communication to support multiplayer games. We can make use of these mechanisms to implement LabLinking communication between multiple sites. These support replication, i.e., the synchronization of virtual objects between the different instances, and discovery, i.e., methods for (re-)establishing connections between instances over the Internet. Concretely, we use Unity 2022 with Normcore as the networking framework. In Unity, modular plugins make it possible to run the application on different VR devices. For the consumer VR headset, we use a Varjo VR-3 because of the high-end eyetracking capability. For the sales agent, we utilize an HTC Vive Pro Eye with a Vive facial tracker. Motion tracking for the sales agent is realized using a Motive Optitrack System which offers a Unity integration. The Optitrack skeleton is constituted by 54 passive markers on a full-body suit. These 54 marker positions are first transferred via LAN from the Motive PC to the agent’s running Unity instance and then further relayed via Normcore over the Internet to the consumer’s Unity instance. To control the 3D avatar in the virtual space, we included real-time tracking of participants via marker-based motion capturing. The implementation can be found in a public repository.Footnote 11
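The 54 marker positions amount to 162 floats per skeleton frame; packing them into a compact binary frame for network relay can be sketched as follows (an illustration of the data volume only, not Normcore's actual wire format):

```python
import struct

N_MARKERS = 54  # full-body Optitrack skeleton used in Study 3

def pack_frame(markers):
    """Pack 54 (x, y, z) marker positions into little-endian float32 bytes."""
    assert len(markers) == N_MARKERS
    flat = [c for xyz in markers for c in xyz]
    return struct.pack(f"<{N_MARKERS * 3}f", *flat)

def unpack_frame(payload):
    """Inverse of pack_frame: bytes back to a list of (x, y, z) tuples."""
    flat = struct.unpack(f"<{N_MARKERS * 3}f", payload)
    return [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]

frame = [(float(i), 0.0, 1.5) for i in range(N_MARKERS)]
payload = pack_frame(frame)  # 648 bytes per skeleton frame
```

At, say, 120 mocap frames per second this is under 80 kB/s before protocol overhead, which explains why relaying the full skeleton over the Internet is unproblematic for a game-engine networking stack.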
4.3 Through the LabLinking-Levels
In this section, we revisit two of the example studies from the perspective of LabLinking-Levels specifically: In parts 4.3.1 to 4.3.3, we develop different iterations of Study 1 which span LLL-1 to LLL-3 and show how study design and employed technology evolve with increasing LabLinking-Level. In part 4.3.4, we discuss LLL-4 using the example of Study 3 and additional examples from ongoing research. We deliberately chose similar experimental scenarios for the different LabLinking-Levels to see how related research questions can be tackled differently when higher levels are available.
4.3.1 A case of LLL-1 - human everyday activities
Our first example describes LabLinking Level 1 (LLL-1), which is applied in the first phase of the DFG collaborative research center 1320 “Everyday Activity Science and Engineering (EASE)” (http://ease-crc.org). EASE focuses on facilitation of robotic mastery of everyday activities as its unifying mission. For this purpose, we observe humans’ performance of everyday activities using a multitude of sensors and devices. The resulting biosignals are derived from the corresponding brain, muscle, and speech activities. Their analyses provide insights into complementary aspects of behavior required for humans to masterfully perform everyday activities with little effort or attention. Several labs, including the Biosignals and the NeuroImaging Labs, jointly investigate a common scenario, the table setting task. The participating labs study a variety of aspects through a panoply of modalities. A common annotation standard and an ontology form the means to integrate different levels of abstraction and diverse experiments. Selected manual annotation of data is applied to bootstrap semi-automatic annotation procedures based on the human-in-the-loop concept. Joint toolkits like EaseLAN (based on the ELAN Framework, [29]) were created to support data visualization, annotation, and time-aligned arrangement of all recorded data. Time series data and annotations are presented in a single score file coined “Partitur”, as shown in Fig. 6. In the partitur, all data, irrespective of its origin, can be regarded as synchronized against a global clock and in a common format which supports video, audio, text annotations, and any kind of time-series data. The original format was defined by the developers of ELAN and can be extended through plugins to support other data types.
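Conceptually, a partitur is a set of named tiers holding time-aligned intervals against one global clock. A minimal sketch of such a structure (deliberately simplified relative to the ELAN/EaseLAN format; names are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class Interval:
    start: float  # seconds on the global clock
    end: float
    value: str    # annotation label or reference to signal data

@dataclass
class Partitur:
    tiers: dict = field(default_factory=dict)  # tier name -> list of Interval

    def add(self, tier, start, end, value):
        self.tiers.setdefault(tier, []).append(Interval(start, end, value))

    def at(self, t):
        """All annotations active at global time t, across tiers."""
        return {name: [iv.value for iv in ivs if iv.start <= t < iv.end]
                for name, ivs in self.tiers.items()}

p = Partitur()
p.add("action", 0.0, 2.5, "reach")
p.add("action", 2.5, 4.0, "grasp")
p.add("speech", 1.0, 3.0, "take the cup")
```

Querying a single global time point then returns the co-occurring annotations from all labs' recordings, which is exactly the alignment view Fig. 6 illustrates.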
By integrating results from a wide range of data sources and complex analyses using a multitude of complementary methods, we envision effectively transferring an extensive, contextually dense reserve of human everyday-activity experiences and problem-solving approaches to robotic agents. A partitur is accessible to both humans and systems and allows the synchronized exploration and analysis of data which was collected in a distributed way, to identify alignment, interaction, and contradictions. A collection of high-dimensional biosignals data from about 100 participants, along with rich and time-aligned annotations, will be made available open-source to support common standards of data storage, synchronization, and annotation. Future collaborations can use this available data to establish compatibility with those standards before a LabLinking experiment, facilitating a quick experiment implementation and setup of a data analysis pipeline.
4.3.2 A case of LLL-2 - everyday activity in MRI
Brain activity measurements using methods such as fMRI or EEG allow inspection of what people pay attention to as they perform tasks, how they may adapt to ambiguous situations or unforeseen obstacles, what might influence their decision making processes, and how their own motor imagery when viewing performance of activities compares with in-situ motor execution. In the Biosignals Lab, we captured a 16-channel EEG with a mobile device attached to the head of the person executing the activity. This gives us important information, e.g., to infer the person’s workload or to analyze motor activation during physical activities. However, physical activity impacts the EEG signal; moreover, the spatial resolution of EEG is limited, and the number of available channels does not allow robust source localization of brain activity during task execution. A higher density of electrodes would extend setup time beyond practicable limits.
On LLL-2, prerecorded performances of everyday activities can therefore be used in neuroimaging studies at the NeuroImaging Lab to achieve a more comprehensive picture of brain activity. For this purpose, videos from the perspective of the acting person are recorded in the Biosignals Lab via a head-mounted camera worn by an experimenter who acts out scenarios that resemble those of participants of table setting experiments. Scenarios encompass a variety of confidently finished runs as well as runs containing erroneous behavior induced by missing or misplaced objects, or plans prevented from execution. Visible actions consist of articulate and easily traceable movements of the arms and hands as well as smooth camera pans of the head, to establish and validate a standardized set of videos. This standardized set is then presented to participants of EEG and fMRI studies at the NeuroImaging lab, who employ motor imagery [17] to actively put themselves into the perceived scenes. Semantically unique episodes within the videos (e.g., pick, place) are manually annotated and the recorded brain activity of the participants is correlated with those episodes.
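Correlating brain activity with annotated episodes amounts to epoching the continuous recording around each labeled interval. A minimal sketch (a hypothetical helper on toy data, not the analysis pipeline actually used in the study):

```python
def extract_epochs(signal, srate, episodes):
    """Cut a continuous signal (list of samples, srate in Hz) into
    per-episode segments given (start_s, end_s, label) annotations."""
    epochs = []
    for start_s, end_s, label in episodes:
        a, b = int(start_s * srate), int(end_s * srate)
        epochs.append((label, signal[a:b]))
    return epochs

signal = list(range(100))  # 10 s of a toy 10 Hz signal
episodes = [(1.0, 2.0, "pick"), (4.0, 5.5, "place")]
epochs = extract_epochs(signal, 10, episodes)
```

Because the annotations and the recordings share one global clock, the same episode boundaries can epoch EEG and fMRI time series alike.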
EEG and fMRI are used as complementary tools in these viewing scenarios in order to take advantage of the higher spatiotemporal signal information stemming from the integration of their individual data and to introduce a combined fMRI constrained source analysis [24]. Results reveal a wide range of activated brain networks with a high temporal and spatial resolution, which are further analyzed in terms of dimensionality of involved networks during the planning and execution of complex everyday activities and the handling of errors and ambiguous situations. ...
The result of this approach will be a multimodal partitur, i.e., a sequence of time-aligned signals and annotations from various sources. In the partitur, all signals appear in relation to the same activity of one person, although they were created in different labs. Through this approach, we are not only able to combine the available sensors in the different labs, but more importantly, we are also able to align data which could not have been recorded in a single lab due to the fundamental characteristics of the sensor technology (e.g., execution of an everyday activity is not possible in an fMRI scanner, while high-resolution brain reading is not possible with a mobile setup).
To perform such a study by linking multiple labs, we had to establish several technical prerequisites. These prerequisites involve different aspects: 1) temporal synchronization, 2) format synchronization, and 3) semantic synchronization. The aspect of temporal synchronization was already discussed with technical solutions in Sect. 4. The other two aspects emerge from the embedding of this research in a large-scale project with many stakeholders, which requires additional measures:
Format synchronization means that all data is archived in a format that is accessible to all researchers, independent of discipline-specific conventions, proprietary formats, etc. To achieve format synchronization, we need to consider common, open, and documented ways of storing and transmitting the generated data. For this purpose, we rely on an automated data processing pipeline which enforces desired data characteristics, such as video codec, sampling rate, channel ordering, etc. and thus guarantees a homogeneous data quality. The pipeline code automatically serves as a documentation for the data genesis. For storing data, we use the NEEM-Hub of EASE, which is a distributed storage for heterogeneous data. The NEEM-Hub provides a common representation of data behind a web-accessible front-end, supporting version control and the combined storage of symbolic and subsymbolic information.
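A pipeline that enforces homogeneous data characteristics can be reduced to a validation step against a declared specification. A simplified sketch (the specification fields and values are illustrative, not the actual EASE pipeline configuration):

```python
SPEC = {  # desired characteristics enforced by the pipeline (illustrative)
    "video_codec": "h264",
    "audio_srate_hz": 48000,
    "channel_order": ["left", "right"],
}

def validate(meta):
    """Return a list of violations of the common data specification."""
    problems = []
    for key, expected in SPEC.items():
        if meta.get(key) != expected:
            problems.append(f"{key}: expected {expected!r}, got {meta.get(key)!r}")
    return problems

ok = {"video_codec": "h264", "audio_srate_hz": 48000,
      "channel_order": ["left", "right"]}
bad = dict(ok, audio_srate_hz=44100)
```

Rejecting (or transcoding) non-conforming recordings at ingestion time is what makes downstream analyses independent of each lab's local conventions.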
Semantic synchronization means that data annotations correspond to each other, i.e., that a common vocabulary is used to annotate events, in which the same terminology is used for the same concepts. If the vocabulary is defined by means of a proper ontology, it also prepares the dataset for reasoning and semantic queries. For semantic synchronization, we need joint annotation schemes for manual and automatic annotation of the data streams (e.g., segments, classes, etc.). Semantic synchronization can be enforced by linking the annotations to an ontology, such as SOMA [30], which precisely defines each concept used for annotation (as seen in Fig. 7).
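The enforcement principle can be sketched with a toy vocabulary standing in for a full ontology. The concept names and the flat dictionary representation are illustrative assumptions only; SOMA itself is a formal ontology with far richer axioms and would be queried through an ontology reasoner rather than a dictionary lookup:

```python
# Toy stand-in for a shared ontology: every annotation label must resolve
# to a defined concept (concept names here are hypothetical).
ONTOLOGY = {
    "Reaching": {"is_a": "BodyMovement"},
    "Grasping": {"is_a": "Manipulation"},
    "Placing":  {"is_a": "Manipulation"},
}

def validate_annotations(annotations):
    """Reject labels that are not defined in the shared vocabulary, so that
    both labs annotate the same events with the same concepts."""
    unknown = [a["label"] for a in annotations if a["label"] not in ONTOLOGY]
    if unknown:
        raise ValueError(f"labels not in ontology: {unknown}")
    return True

lab_a = [{"start": 0.5, "end": 1.4, "label": "Reaching"},
         {"start": 1.4, "end": 2.1, "label": "Grasping"}]
validate_annotations(lab_a)  # passes: both labels are shared concepts
```

An annotator who writes an ad-hoc label such as "grabbing" would be stopped at ingest time instead of silently fragmenting the dataset's vocabulary.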
...
4.3.3 A case of LLL-3 - real-time LabLinking
In this section, we describe how we created a setup for performing low-latency (real-time) LabLinking between the Biosignals Lab (or the EASE apartment kitchen, which is a realistic kitchen environment used as a testbed for cognitive robots) and the Neuroimaging Lab, as shown in Fig. 8. For this purpose, we set up a video stream from one or more cameras in the Biosignals Lab to the Neuroimaging Lab where a second participant observed these videos projected into the MRI scanner through a mirrored monitor screen inside the MRI tube.
To demonstrate a real-time interaction paradigm through LLL-3, we created a variant of the table setting scenario. Here, the decision of which items to set on the table is delegated to the decision maker in the scanner. The participant (the “avatar”) in the Biosignals Lab or in the EASE apartment kitchen physically manipulates and inspects the potential tableware and food items. The participant in the fMRI scanner perceives the same information through the video from the head-mounted camera. By using response buttons inside the fMRI scanner, they can backchannel decisions on which item to take to the avatar via an acoustic channel. Note that the roles of the two participants in this setup differ from those in the superficially similar LLL-2 example: The participant in the MRI scanner is no longer asked to mirror the activities of the active person, but can actively influence the course of the experiment. This relates to the dimension CD-2, the different participant roles. We need to consider this aspect in experiment design, as different research questions require different roles, which in turn influence the requirements for the LabLinking setup. The role taxonomy in CD-2 helps to conceptualize which roles might occur. By using this setup, we are able to study brain activity patterns of the decision-making participant in the scanner aligned to data of complex manipulations in the real world. This way, we can study the dynamics of decision processes, for example by putting a decision and the associated neural process into the perspective of the actions and attentional focus leading up to it. This scenario shows that LLL-3 supports different role distributions between participants, depending on the desired measurement.
This experiment also showed the challenges of interdisciplinary collaboration (OD-2), as real-time experiments are unusual for neuroscience labs, while MRI and the associated requirements on the experiment flow are new territory for most computer science researchers. The project also revealed the challenges resulting from different professional roles (OD-3) in the different labs: Hierarchical relationships (i.e., a senior post-doc advising a doctoral student) are often replaced by collaborations on the same level, requiring more explicit negotiation of responsibilities (OD-4: who designs the experiment, who operates it during the experiment?) and expectations. This can be challenging in interdisciplinary contexts, where different aspects of an experiment might receive different levels of attention and be valued differently between disciplines (e.g., neuroscience researchers might prioritize strict experimental control, while computer science researchers focus on collecting complete raw data for post-hoc analysis).
4.3.4 A case of LLL-4 - augmented and virtual realities
The next paramount and timely extension of LabLinking concerns the establishment of a maximally immersive experience to bridge the spatial distance between participants. A high degree of immersion is crucial to ensure that the behavior and involved cognitive processes are as comparable as possible between a conventional and a LabLinking experiment. CD-1 makes the point that some experiments require a high degree of immersion, and a study of human interaction in which gestures or joint attention play a large role belongs in this category. Recently, computer-generated interactive environments have been established in research and development in which users can jointly view and perceive a reality with simulated physical properties - so-called virtual realities (VR). Virtual realities can be mixed with physical reality, which is then referred to as mixed or augmented reality (AR). AR/VR technologies make it possible to embed users in different worlds (immersion) and let them carry out actions that have an impact on the virtual world (interactivity). This may give AR/VR users the illusion that their interactions have an impact on the real world, i.e., that what seems to happen in the virtual world actually happens [32]. AR/VR thus provide fundamental tools and mechanisms for LLL-4 to enable users to interact with each other in common worlds while operating and manipulating machines and their environment.
An example of an LLL-4 experiment is a collaboration with researchers on information systems in economic decision making at the University of Gießen. In this project, we use VR-based LabLinking to conduct an experiment in which a salesperson advises a potential customer on the differences between similar products of one category. The role of that project is twofold: On the one hand, it shows how we can use LLL-4 LabLinking to create immersive, distributed experiments which make optimal use of the equipment and expertise at the respective labs (multimodal tracking of motion and cognition of the salesperson at CSL, detailed eye tracking of the customer at Gießen). On the other hand, it also shows how the LabLinking setup itself can be a matter of investigation, as a remote, VR-based showroom could be a point-of-sale of the future for rare or highly customizable products. See Sect. 4.2.3 for the technical details behind this study.
Another example of LLL-4 LabLinking is the CARL system (see Fig. 9), which is available in a public repository. CARL makes it possible to study everyday activities, similarly to the example given in the previous section. Instead of VR, CARL employs Augmented Reality, which gives two people the opportunity to jointly (as equal actors) set a shared table. Both can manipulate their physical objects, which get tracked and displayed virtually on the other participant’s table. Furthermore, AR-based communication (e.g., visual pings) can be used to establish joint attention. CARL supports spatial synchronization via QR codes placed anywhere in the environment to synchronize the virtual and physical worlds. Hand and head positions as well as discrete events are automatically logged via LabStreamingLayer in a synchronized fashion. The CARL framework is extensible, e.g., it supports the mixture of AR and VR interaction. CARL’s network architecture utilizes the Netcode for GameObjects plugin for the Unity game engine. One dedicated server on a Windows machine provides a synchronization hub for any number of clients in the same network. The clients are standalone applications that connect to the server, provide different visualizations and interactions to users, and send different types of data to the server. Synchronization between clients and the server happens at a fixed tick rate of 30 Hz, and only values that have changed are sent, so the network load stays as small as possible. A bandwidth of ca. 3 kB/s is needed per constantly moving object. Even a slow connection of ca. 500 kB/s could therefore support dozens of moving objects and keep them synchronized.
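The delta-synchronization scheme and the bandwidth figures above can be illustrated with a small sketch. The per-update message size of 100 bytes is our assumption, chosen so that the arithmetic reproduces the stated ca. 3 kB/s at a 30 Hz tick rate; CARL's actual wire format is determined by Unity's Netcode for GameObjects:

```python
TICK_RATE_HZ = 30
BYTES_PER_UPDATE = 100  # assumed: object id + position + rotation + overhead

def delta_updates(prev, curr):
    """Return only the objects whose state changed since the last tick -
    unchanged objects cost no bandwidth at all."""
    return {oid: state for oid, state in curr.items() if prev.get(oid) != state}

def bandwidth_bytes_per_s(moving_objects):
    """Worst case: the given number of objects changes on every tick."""
    return moving_objects * BYTES_PER_UPDATE * TICK_RATE_HZ

prev = {"cup": (0.0, 0.0, 0.0), "plate": (1.0, 0.0, 0.0)}
curr = {"cup": (0.1, 0.0, 0.0), "plate": (1.0, 0.0, 0.0)}  # only the cup moved
print(delta_updates(prev, curr))   # only the cup's new state is transmitted
print(bandwidth_bytes_per_s(1))    # 3000 bytes/s, i.e., the stated ~3 kB/s
```

Under these assumptions, a 500 kB/s link saturates at roughly 160 constantly moving objects, comfortably above the "dozens" needed for a table setting scenario.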
Both AR and VR require tracking of the participants to register their location and movements. Tracking is important to simulate the person’s motion in the view of the other person or to position virtual elements relative to the participant. As the recording environment for studying everyday activities already contains the necessary sensors for tracking a person’s movement, we can reuse these components to provide tracking for LabLinking without additional costs. For example, we can employ marker-based (OptiTrack) tracking or markerless tracking (with depth cameras). Both types of tracking systems are already in place in the Biosignals Lab setup.
Despite the great possibilities, AR/VR have an important shortcoming: they lack the social context of local interaction. In particular, the virtual world only sees the projection of users and their immediate actions, but not those social events that take place in parallel, such as incoming phone calls or conversations with bystanders. Technical interaction systems therefore require a robust assessment of the users’ behavior. Systems need to decide whether the observed behavior (or its correlates) relates to the interaction represented in AR/VR. The lack of social context in AR/VR technologies is not only a shortcoming but also an opportunity: AR/VR make it possible to decouple individual components of the interaction, i.e., social signals can be systematically varied in virtual worlds. This paves the road to validating and predicting the impact of social signals on interaction in a more rigorous, systematic, and ecological way than was possible in the past. An important tool to establish this research is the multimodal observation of participants and the processing of the recorded biosignal data through machine learning models. Previous work has shown the feasibility of real-time classification of workload, attention, or affect, and the available sensors in the LabLinking setup already provide the necessary data. Augmenting the real or virtual experiment setup with such information in LLL-4 can support taking the other person’s perspective or interacting with them naturally [33]. Figure 10 shows two examples of using AR and VR together with multimodal observation of participants, which will be integrated in planned LLL-4 setups [34, 35]. In future iterations of LLL-4, these information channels will be incorporated into the mentioned scenarios.
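The real-time classification idea can be sketched as a sliding-window pipeline over an incoming biosignal stream. Window length, the mean-power feature, and the threshold rule below are placeholders chosen for illustration, not the trained models used in the cited studies:

```python
import numpy as np

WINDOW = 250  # samples per window (e.g., 1 s at an assumed 250 Hz)
STEP = 125    # 50% overlap between consecutive windows

def window_features(signal):
    """Mean power per window as a minimal stand-in for spectral features."""
    feats = []
    for start in range(0, len(signal) - WINDOW + 1, STEP):
        w = signal[start:start + WINDOW]
        feats.append([np.mean(w ** 2)])
    return np.array(feats)

def classify(feats, threshold):
    """Toy decision rule: label a window 'high' load above a power threshold."""
    return ["high" if f[0] > threshold else "low" for f in feats]

# Synthetic stream: a low-amplitude segment followed by a high-amplitude one.
rng = np.random.default_rng(0)
calm = rng.normal(0.0, 1.0, 500)
busy = rng.normal(0.0, 3.0, 500)
stream = np.concatenate([calm, busy])
labels = classify(window_features(stream), threshold=4.0)
print(labels)
```

In an actual LLL-4 setup, the windows would arrive from a live sensor stream (e.g., via LabStreamingLayer) and the per-window labels would feed back into the AR/VR view of the other participant.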
5 Conclusion & outlook
LabLinking is a novel method of empirical research that has a number of important advantages over single-lab studies: It allows experimenters to leverage equipment and expertise at multiple sites, and it facilitates interdisciplinary collaboration across large distances without the need for expensive travel, making collaboration more sustainable and achievable for researchers with limited resources. It thus enables investigations that individual labs would not have been able to conduct independently. Some challenges in the current approach remain: Running an experiment in parallel at multiple sites increases the technical and organizational complexity of the setup and requires well-planned communication, coordination, and structure, as outlined in this paper. Furthermore, the aspiration of integrating LabLinking naturally into already existing experiment ecosystems implies that LabLinking solutions cannot be applied out-of-the-box but have to be adapted from a toolbox of available solutions (as described in the different use cases in this paper) and thus require a certain level of technical expertise.
The descriptions of the individual LabLinking examples illustrate how all the technical, organizational, and conceptual dimensions are relevant for running successful LabLinking experiments. With regard to the technical dimensions, we needed to find individual solutions for unique problems in each experiment, but over multiple scenarios, a LabLinking toolbox emerged which could be leveraged for re-use in later installments. This implies that the different aspects are not completely independent, but follow certain recurring patterns. For example, highly distributed setups have high requirements in terms of Discovery, Interfacing, and Monitoring, while a setup with high data throughput depends strongly on reliable Streaming and Diagnostics. To reduce the initial hurdle of setting up such a toolbox (especially in labs which lack the resources or knowledge for the manual creation of custom hardware or software), more LabLinking software should be released as generic software packages for these recurring patterns, with convenient user interfaces and documentation. Conceptual and organizational challenges proved to be more difficult to resolve. LabLinking forces the immediate explication and documentation of many processes, decisions, and requirements which would otherwise have been left implicit if only a single lab were involved. While this increases the initial workload for the involved researchers and requires a lot of initial communication, the overall robustness and validity of the experiment can benefit from this process. This also means that the onboarding of new collaboration partners in a LabLinking-ready lab becomes increasingly easy, as more technical interfaces and documented processes are available.
LabLinking is not limited to the constellations described in this paper; rather, we can think of various other line-ups: For example, LabLinking can be employed to overcome bottlenecks in the form of available sensors. Well-equipped laboratories like the KD2Lab, which features 40 cabins, each fitted with various biosignal sensors for large-scale interaction experiments, may benefit from the option to include specialized devices and experience, such as EEG or stationary equipment like fMRI. Such setups could also enable mass LabLinking scenarios with more than two participants to study group dynamics. Another opportunity is the real-time use of the biosignal data during the experiment (e.g., as visualization or for adjusting participants’ views) instead of just capturing it for offline processing. Sharing research resources through LabLinking allows a more sustainable use of expensive sensor equipment and expert time: Individual labs can concentrate on their core strengths and link in additional modalities as needed. Furthermore, LabLinking facilitates inter-cultural research, as it makes it easier to connect labs in different countries or on different continents. Another way in which LabLinking can benefit the research community is that it encourages scientists to provide and support open and universal formats for data storage and transmission, and it makes it possible to expose students to the latest technologies and equipment and to integrate them into collaborative studies. These open source and sharing strategies are contributions towards effective open science practices.
Data availability
The data of the mentioned studies is available through the respective publications. In the case of pilot experiments, the data is available upon request.
References
Schultz T, Amma C, Heger D, Putze F, Wand M. Human-machine interfaces based on biosignals. At-Automatisierungstechnik. 2013;61(11):760–9.
Schultz T, Wand M, Hueber T, Krusienski DJ, Herff C, Brumberg JS. Biosignal-based spoken communication: a survey. IEEE/ACM Trans Audio, Speech Lang Process. 2017;25(12):2257–71. https://doi.org/10.1109/TASLP.2017.2752365.
Marsh WE, Hantel T, Zetzsche C, Schill K. Is the user trained? Assessing performance and cognitive resource demands in the Virtusphere, in 2013 IEEE symposium on 3D user interfaces (3DUI) (2013), pp. 15–22. https://doi.org/10.1109/3DUI.2013.6550191
Fehr T, Staniloiu A, Markowitsch HJ, Erhard P, Herrmann M. Neural correlates of free recall of “famous events” in a “hypermnestic” individual as compared to an age- and education-matched reference group. BMC Neurosci. 2018;19(1):35. https://doi.org/10.1186/s12868-018-0435-y.
Peukert C, Pfeiffer J, Meissner M, Pfeiffer T, Weinhardt C. Acceptance of Imagined Versus Experienced Virtual Reality Shopping Environments: Insights from Two Experiments, in 27th European conference on information systems European conference on information systems (ScholarSpace/AIS Electronic Library (AISeL), Stockholm/Uppsala, Sweden, 2019), pp. 1–16
Haidu A, Beetz M. Automated models of human everyday activity based on game and virtual reality technology, in 2019 International conference on robotics and automation (ICRA) (IEEE, Montreal, Canada, 2019), pp. 2606–2612
Meier M, Mason C, Porzel R, Putze F, Schultz T. Synchronized multimodal recording of a table setting dataset, in IROS 2018: Workshop on Latest Advances in Big Activity Data Sources for Robotics & New Challenges, Madrid, Spain (2018)
Licklider J. Man-computer symbiosis. IRE Trans Human Factors Electron, HFE. 1960;1:4–11.
Lücking P, Lier F, Bernotat J, Wachsmuth S, Ŝabanović S, Eyssel F. Geographically distributed deployment of reproducible HRI experiments in an interdisciplinary research context, in Companion of the 2018 ACM/IEEE international conference on human-robot interaction (2018), pp. 181–182
Pavlov YG, Adamian N, Appelhoff S, Arvaneh M, Benwell CS, Beste C, Bland AR, Bradford DE, Bublatzky F, Busch NA, et al. #EEGManyLabs: investigating the replicability of influential EEG experiments. Cortex. 2021;144:213–29.
Prado P, Birba A, Cruzat J, Santamaría-García H, Parra M, Moguilner S, Tagliazucchi E, Ibáñez A. Dementia conneegtome: towards multicentric harmonization of EEG connectivity in neurodegeneration. Int J Psychophysiol. 2022;172:24–38.
Li J, Li Z, Feng Y, Liu Y, Shi G. Development of a human-robot hybrid intelligent system based on brain teleoperation and deep learning SLAM. IEEE Trans Autom Sci Eng. 2019;16(4):1664–74.
Liu Y, Habibnezhad M, Jebelli H. Brain-computer interface for hands-free teleoperation of construction robots. Autom Constr. 2021;123: 103523.
Beraldo G, Tonin L, Millán JDR, Menegatti E. Shared intelligence for robot teleoperation via BMI. IEEE Trans Human-Mach Syst. 2022;52(3):400–9.
Schultz T, Maedche A. Biosignals meet adaptive systems. SN Appl Sci. 2023;5(9):2523–3971.
Steed A, Archer D, Brandstätter K, Congdon BJ, Friston S, Ganapathi P, Giunchi D, Izzouzi L, Park GWW, Swapp D, et al. Lessons learnt running distributed and remote mixed reality experiments. Front Comput Sci. 2023;4: 966319.
Jeannerod M. Mental imagery in the motor context. Neuropsychologia. 1995;33(11):1419–32.
Mason C, Meier M, Ahrens F, Fehr T, Hermann M, Putze F, et al. Human Activities Data Collection and Labeling using a Think-aloud Protocol in a Table Setting Scenario, in IROS 2018: Workshop on Latest Advances in Big Activity Data Sources for Robotics & New Challenges, Madrid, Spain; 2018. https://www.csl.uni-bremen.de/cms/images/documents/publications/mason_iros_2018.pdf.
Richter B, Putze F, Ivucic G, Brandt M, Schütze C, Reisenhofer R, Wrede B, Schultz T. EEG correlates of distractions and hesitations in human-robot interaction: A LabLinking pilot study. Multimodal Technol Interact. 2023;7(4):37.
Sra M, Schmandt C. Full-body tracking for immersive multiperson virtual reality, in Adjunct Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology (2015), pp. 47–48
Döllinger N, Wolf E, Mal D, Erdmannsdörfer N, Botsch M, Latoschik ME, Wienrich C. Virtual reality for mind and body: Does the sense of embodiment towards a virtual body affect physical body awareness?, in CHI conference on human factors in computing systems extended abstracts (2022), pp. 1–8
Latoschik ME, Roth D, Gall D, Achenbach J, Waltemate T, Botsch M. The effect of avatar realism in immersive social virtual realities, in Proceedings of the 23rd ACM symposium on virtual reality software and technology (2017), pp. 1–10
Gloy K, Herrmann M, Fehr T. Decision making under uncertainty in a quasi-realistic binary decision task - an fMRI study. Brain Cogn. 2020;140: 105549.
Trautmann-Lengsfeld SA, Domínguez-Borràs J, Escera C, Herrmann M, Fehr T. The perception of dynamic and static facial expressions of happiness and disgust investigated by ERPs and fMRI constrained source analysis. PLoS ONE. 2013;8(6): e66997. https://doi.org/10.1371/journal.pone.0066997.
Quigley M, Gerkey B, Conley K, Faust J, Foote T, Leibs J, Berger E, Wheeler R, Ng A. ROS: an open-source robot operating system, in Proceedings of the IEEE international conference on robotics and automation (ICRA) workshop on open source robotics (Kobe, Japan, 2009)
Lier F, Wienke J, Nordmann A, Wachsmuth S, Wrede S. The cognitive interaction toolkit–improving reproducibility of robotic systems experiments, in Simulation, Modeling, and Programming for Autonomous Robots: 4th international conference, SIMPAR 2014, Bergamo, Italy, October 20-23, 2014. Proceedings 4 (Springer, 2014), pp. 400–411
Ros-Drivers. Ros-drivers/video_stream_opencv: A package to open video streams and publish them in ros using the opencv videocapture mechanism. https://github.com/ros-drivers/video_stream_opencv
GStreamer. Gstreamer/gstreamer: Gstreamer open-source multimedia framework. https://github.com/GStreamer/gstreamer
Wittenburg P, Brugman H, Russel A, Klassmann A, Sloetjes H. ELAN: a professional framework for multimodality research, in Proceedings of the fifth international conference on language resources and evaluation (LREC’06) (2006)
Beßler D, Porzel R, Pomarlan M, Vyas A, Höffner S, Beetz M, Malaka R, Bateman J. Foundations of the socio-physical model of activities (SOMA) for autonomous robotic agents (2020)
Mason C, Gadzicki K, Meier M, Ahrens F, Kluss T, Maldonado J, Putze F, Fehr T, Zetzsche C, Herrmann M, Schill K, Schultz T. From human to robot everyday activity, in IROS 2020 (IEEE, Las Vegas, USA, 2020). https://www.csl.uni-bremen.de/cms/images/documents/publications/mason_iros2020.pdf
Slater M, Sanchez-Vives MV. Enhancing our lives with immersive virtual reality. Front Robot AI. 2016;3:74.
Oh CS, Bailenson JN, Welch GF. A systematic review of social presence: definition, antecedents, and implications. Front Robot AI. 2018;5:114.
Vortmann LM, Kroll F, Putze F. EEG-based classification of internally- and externally-directed attention in an augmented reality paradigm. Front Human Neurosci. 2019. https://doi.org/10.3389/fnhum.2019.00348.
Putze F, Herff C, Tremmel C, Schultz T, Krusienski DJ. Decoding mental workload in virtual environments: a fNIRS study using an immersive n-back task, in 2019 41st annual international conference of the IEEE engineering in medicine and biology society (EMBC) (2019), pp. 3103–3106. https://doi.org/10.1109/EMBC.2019.8856386
Acknowledgements
The research reported in this paper has been partially supported by the German Research Foundation DFG, as part of Collaborative Research Center (Sonderforschungsbereich) 1320 Project-ID 329551904 “EASE - Everyday Activity Science and Engineering”, University of Bremen http://www.ease-crc.org/. The research was conducted in subprojects H03 and H04. It has also been partially supported as part of the Research Training Group (Graduiertenkolleg) 2739 “KD2 School - Designing Adaptive Systems for Economic Decision-Making” which interconnects the three German universities KIT, University of Bremen, and University of Gießen http://kd2school.info. We thank Tobias Weiß and Jella Pfeiffer from University of Giessen for implementing the VR scene and avatar for Study 3. Furthermore, the paper has been partially supported by the Bremen Excellence Initiative Fonds and the High-Profile Area Minds, Media, Machines http://mindsmediamachines.de, at the University of Bremen.
Funding
Open Access funding enabled and organized by Projekt DEAL. The research reported in this paper has been partially supported by the German Research Foundation DFG, as part of Collaborative Research Center (Sonderforschungsbereich) 1320 Project-ID 329551904 “EASE - Everyday Activity Science and Engineering”, University of Bremen http://www.ease-crc.org/. The research was conducted in subprojects H03 and H04. It has also been partially supported as part of the Research Training Group (Graduiertenkolleg) 2739 “KD2 School - Designing Adaptive Systems for Economic Decision-Making” which interconnects the three German universities KIT, University of Bremen, and University of Gießen http://kd2school.info. Furthermore, the paper has been partially supported by the Bremen Excellence Initiative Fonds and the High-Profile Area Minds, Media, Machines http://mindsmediamachines.de, at the University of Bremen.
Author information
Authors and Affiliations
Corresponding authors
Ethics declarations
Ethical approval and consent to participate
All experimental protocols mentioned in this paper were approved by the ethics committee of the University of Bremen or the ethics committee of the University of Giessen. All participants gave their written, informed consent before the start of each experiment and all experiments were conducted in accordance with the Declaration of Helsinki and other relevant guidelines and regulations.
Competing Interests
The authors have not disclosed any competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Schultz, T., Putze, F., Reisenhofer, R. et al. LabLinking: theory, framework, and solutions of connecting laboratories for distributed human experiments. Discov Appl Sci 6, 448 (2024). https://doi.org/10.1007/s42452-024-06122-7
Received:
Accepted:
Published:
DOI: https://doi.org/10.1007/s42452-024-06122-7