1 Introduction

Augmented construction in filament-laying for architectural material systems using wearable technology allows not only for tracing partially ordered material systems in the making but also for harnessing the potential of human intuition in the construction procedure. As anthropologist Tim Ingold writes in his article The textility of making: "(...) practitioners bind their own pathways or lines of becoming into the texture of material flows (...)" (Ingold 2010). Ingold emphasizes making as a process of following the material, a process that takes into account movements and forces, but also agency and embodiment. Terry Knight and George Stiny propose a similar argument in their texts on "making grammars", namely the notion of a "computational theory of making" which is based on the concepts of "shape grammar" and "shape algebra" (Knight and Stiny 2015; Knight 2018). This computational design approach encompasses both the constructive and the sensory aspects of making. These threads of research contextualise the project's approach, which was to shape the development of the partially ordered system and the machinic-computational aspect around making by a craftsperson.

In this context, the research project aims to develop an augmented reality (AR) framework for filament-construction in architecture and to test it in demonstrator experiments at the architectural scale of a room, exploring the basic relevant types of human–machine communication (Fig. 1).

Fig. 1

Minimal Machines AR framework. The proposed AR framework uses a glove which allows for tracking motions and for inputting and outputting signals to and from an operator. A The AR framework is tested on a partially ordered system consisting of a monofilament that is spun in a space through knotting. B The tracked data show the movement of the operator's hand during knotting as well as the recorded syntax points and polyline graph. C A research demonstrator has been implemented using the motion tracking and point output functions of the AR framework

In a definition by art historian and media theorist Horea Avram, AR is understood as the superimposition of the human sensory system with virtually generated data in a real-time feedback loop (Avram 2014). For architectural construction, AR is therefore used to establish a virtually generated real-time feedback loop between craftspeople and the architectural structure they are constructing (Mitterberger 2022). In the research presented here, AR is implemented through the use of so-called minimal machines, which in this context are considered devices that comprise minimal digital technology to record and inform the crafting process. The term minimal machine is used in computer science, where it denotes "an abstract machine possessing no redundant states" (Butterfield et al. 2016). This project uses the term in a broader sense than its original connotation.

The research presented here applies AR for architectural construction to partially ordered systems defined as systems “which do not have full long-range spatial or orientational order” (Lam et al. 1994). As an example of a partially ordered system, the spinning behaviour of the Bombyx mori silkworm is translated into instructions for a human-made architectural structure (Sect. 4.2.1).

The relevance of AR for construction within architectural design is twofold. First, AR permits working with partially ordered material systems rather than with highly ordered ones. It enables the craftsperson to respond to emergent–non-pre-determined–behaviours of such a partially ordered system (Dierichs and Menges 2021b; de Wolf and Holvoet 2005). Partially ordered material systems bear this potential of adaptation to both internal and external factors of an architectural structure. This capacity for adaptation is highly pertinent within the context of a more sustainable approach to architecture: for example, one and the same material system may be adapted for reuse on a different building site, or material may be sourced locally and the rules of the partially ordered system adjusted to match the available material elements. Second, AR supports the integration of human sensory intelligence into the construction process of an architectural structure (Mitterberger 2022). While machinic production frequently entails a high level of predetermination and precision, human handcraft enables an intimate and intuitive interrelation between the maker–especially the maker's bodily experience–and the work-piece, a quality explored in the work of architect Juhani Pallasmaa, among others (Pallasmaa 2009, 2012). Filament-based construction in architecture is a field increasingly explored in architectural robotics, ranging from six-axis articulated robots to distributed mobile robots (Menges and Knippers 2015; Mirjan et al. 2016; Yablonina and Menges 2019). A large branch of architectural robotics aims to imbue the machine with human-like sensory capacities, often referred to as behavioural robotics (Arkin 1998; Dörfler et al. 2014; Menges 2015). However, the inverse approach of equipping humans with wearable technology and thus merely recording and enhancing their skills is a growing field in its own right (Wang 2009; Chi et al. 2013; Mitterberger 2022). This project contributes to the latter, advocating for a reconsideration of the value of human handcraft in architecture through AR.

The following sections provide a review of the current state of research on AR technology and systems in architecture and outline the project's novel contribution (Sect. 2). The methods of developing the framework for AR for filament-construction and of conducting the experiments are then explained (Sect. 3). The results are presented in two sections: the first focuses on the framework itself, the second describes the experiments testing human-to-machine, machine-to-human and machine-to-human-to-machine interaction (Sects. 4.1 and 4.2), showing the framework's application in a larger-scale architectural demonstrator. The conclusion discusses the results and outlines areas of further research (Sect. 5).

2 Current state

Since the project Minimal Machines integrates architectural construction knowledge from AR and partially ordered systems, the current state review encompasses both fields (Fig. 2). Sect. 2.3 outlines the project’s contributions within the current state of AR for filament-construction in architectural material systems.

Fig. 2

Current state overview. The current state overview shows combined information from Sect. 2.1 on AR in full-scale architectural construction and Sect. 2.2 on partially ordered systems in architecture

2.1 AR in full-scale architectural construction

AR in architecture is a growing field of research (Wang 2009; Chi et al. 2013; Mitterberger 2022). The current state review presented here focuses on the implementation of AR in connection with 1:1 scale material systems, meaning visualisations and scale models using AR are not included. The projects are evaluated based on the type of user input and output in the AR framework and the material system which they implement using AR. Some of these material systems display partial order, which will be indicated, but the research emphasis of the projects in this section is the AR framework.

Six groups of projects can be distinguished based on the type of AR system used: sensing of human actions, head-mounted visual displays, hand-held visual displays, arm-mounted visual displays, inertial measuring in combination with other AR devices and audio-visual user directives. These systems can be deployed individually or in combination with each other.

Fundamental research conducted in test scenarios used sensors to capture human actions and transfer them to a six-axis articulated robot. In 2014, Kathrin Dörfler first-authored a project based on craftspeople interacting with a robot equipped with sensing and actuating devices to investigate material systems with no predefined arrangement. The material systems used for this project were sticks and blocks, which displayed partial order (Dorfler 2014). Research first-authored by Giulio Brugnaro in 2019 integrates a six-axis articulated robot and craftspeople using artificial intelligence (AI). Manufacturing data recorded from a craftsperson carving wood were compiled into structured data collections used to train an "Artificial Neural Network", which was in turn deployed to run the six-axis articulated robot. The user output was traced through a motion tracking system and sensors, which were not hand-based but applied to material and tool (Brugnaro and Hanna 2019). The project "Soft Office", published in 2021 and first-authored by Maria Yablonina, uses a similar approach. Here, instead of a six-axis articulated robot, humans interacted with a team of distributed small robots building a frame-based filament structure. Human interaction with the system occurred either through the digital system interface or by placing new poles for filament-construction in the physical construction space and deciding on the positioning of a robotic rail (Yablonina et al. 2021).

A larger group of projects deployed head-mounted AR devices, such as the Microsoft HoloLens (2023). In 2018, Gwyllim Jahn first-authored an article on a structure made from interwoven steel tubes. The craftspersons wore Microsoft HoloLenses for fabricating and assembling the steel elements based on a digital model (Jahn et al. 2018). In a project first-authored by Ondřej Kyjánek in 2019, a craftsperson wore an AR head-mounted device, a Microsoft HoloLens, thus acting as a mediator between a six-axis articulated robot and a timber-beam structure. The system worked on the basis of a precise digital model, where the craftspeople conducted "tasks requiring process knowledge and dexterity", which was combined with the robot's capacity for precision (Kyjanek 2019). In an art installation shown at the Royal Academy of Arts in 2019 titled "Invisible Landscapes", Gilles Retsin used the Microsoft HoloLens to direct people during the installation's construction process. The digital design model was superimposed on the installation space and adjustments to the design were made in real-time (Block 2019; Royal 2019). A publication first-authored by Jahn in 2020 combines Fologram software for mobile phones with the Microsoft HoloLens and 3D modelling software with a visual programming interface. The setup was used to send an "interactive holographic instruction set from parametric models" to a group of skilled bricklayers (Instant mixed reality experiences from Rhino and Grasshopper 2023; Jahn et al. 2020). Two articles first-authored by Jahn, in 2019 and 2022, introduce AR for steam-bent timber boards. The resulting large-scale demonstrator "Steam Punk Pavilion", shown at the Tallinn Architecture Biennale, deployed a Microsoft HoloLens and conventional manual crafting tools. The material system was based on digital models that were transferred into the AR headset as directives for construction, thus enabling craftspeople to react to the material's behaviour while making (Jahn et al. 2019, 2022). Another paper, first-authored by Xiliu Yang in 2022, introduces "Vizor", a framework for the collaboration of humans and robots in architectural construction based on a visual programming plugin as well as a Microsoft HoloLens. These were deployed to enable a craftsperson to perform assignments during the construction process of a timber-panel structure via a visualisation of the structure's geometry with precise description, positioning and timing of each action (Yang et al. 2022). The project is part of a larger platform on "instructive Human Robot Collaboration (iHRC)" (Amtsberg et al. 2021).

Another group of projects explores hand-held mobile devices as AR interfaces. Research presented in two articles first-authored by Ryan Luke Johns in 2014 aims at passing fabrication information not only from the human to the machine, but also from the machine to the human. A six-axis articulated robot was equipped with sensing devices to interact both with the material and with a human operator via the material. The human operator used a hand-held visual display with a touchscreen to interact with this system. A series of experiments explored the framework, focusing on materials with non-determinate behaviour such as liquids or wax. The authors propose the concept of "augmented materiality" and of placing the "material in the loop" (Johns 2014; Johns et al. 2014). In 2020, research first-authored by Lidia Atanasova was based on a six-axis articulated robot collaborating with two humans, one of whom deployed a hand-held mobile device with a touchscreen for visualising and tracking the building components; the material system used to implement this AR framework was a wood-frame structure (Atanasova et al. 2020). An article first-authored by Daniela Mitterberger presents a framework for two humans with two mobile six-axis articulated robots, the material system used as a demonstrator consisting of wood sticks joined by wool yarn. Starting from a digital model, the construction workers could place elements as they chose based on several design principles. Here, humans placed initial elements while the mobile six-axis articulated robots positioned balancing ones, which were in turn fastened by the humans through knotting, the placed elements being traced by a hand-held visual display with a touchscreen. This material system displays partial order, as not all elements' positions and orientations may be calculated a priori (Mitterberger et al. 2022). An AR application for hand-held visual displays with touchscreens presented in a project first-authored by Atanasova in 2023 allows several craftspersons to engage collectively in the construction process. To test this set-up with a material system, interlocking elements, custom-made from wooden struts, were assembled into a spatial structure. The construction sequence of the entire structure was not fully known in advance and the input of each individual craftsperson was continuously fed into an "Assembly Information Model (AIM)" (Atanasova et al. 2023).

Published in 2020, a project first-authored by Garvin Goepel explores the combination of head-mounted AR devices and hand-held visual displays with touchscreens in an installation titled "ARgan", which was made from bent bamboo rods and constructed using five Microsoft HoloLenses and over a dozen hand-held visual displays with touchscreens. All components were combined in one digital model, pre-modelled in a 3D modelling software package with a visual programming interface, and the fabrication data were subsequently translated into the multi-agent AR fabrication setup. The resulting material system displayed partial order (Goepel and Crolla 2020).

Wearable, arm-mounted visual displays offer an alternative to hand-held visual displays with touchscreens and have been deployed in combination with other sensors and actuators in a project published in 2016, first-authored by Benjamin Lafreniere. The project established a crowd-sourced process for the architectural construction of a tensegrity structure made from rods and string. The system used the arm-mounted visual displays of smartwatches, position tracking and indexed construction elements. These were deployed to give workers directions in real-time through a central computing unit which implemented a previously designed digital model (Lafreniere et al. 2016).

Cameras and inertial measuring have also been explored in combination with other AR devices. Timothy Sandy first-authored an article in 2018 presenting an "inertial measurement unit (IMU)" combined with a camera and an AR visualisation. This system was tested by manually building a brick structure and comparing it to a digital model (Sandy and Buchli 2018). Research in another article, first-authored by Mitterberger in 2020, operates with custom-developed AR. The entire system consists of an operating person, a bricklaying person, a handheld camera, an "inertial measurement unit (IMU)" and an on-body input device, all connected remotely to two laptops–one for computing and one for error correction and visualisation. The "inertial measurement unit (IMU)" is an addition to established AR holographic displays, allowing for dynamic optical direction and precise registration. The system was used for bricklaying, in which the bricklaying person focused on the bricklaying process and the operating person integrated the brick placement with a pre-designed computational goal model (Mitterberger et al. 2020). A project first-authored by Andrea Settimi was published in 2022. The team developed a drill equipped with a camera and an "inertial measurement unit (IMU)" in combination with a head-mounted AR device. The goal was to enable craftspeople to conduct their work process based on computational feedback rather than hand-drawing and props. The framework was tested with drilling operations on timber (Settimi et al. 2022).

The development of audio-visual directives for the human is another research direction. In 2015, Hironori Yoshida first-authored an article presenting a project that uses scanning and projections to guide craftspersons in the assembly of a structure made from manually poured sticks and glue (Yoshida et al. 2015). A paper first-authored by Mitterberger and published in 2022 explores a framework for “Interactive Robotic Plastering (IRoP)” departing from the observation that plastering is based on implicit knowledge of the maker. The team developed an interactive system that translates the motions of a plasterer into control-paths of a six-axis articulated robot via motion tracking of and audio-visual directives for the maker (Mitterberger et al. 2022).

While the emphasis of the projects in this section has been on AR in architectural construction, some of them also begin to investigate systems which display partial order (Dorfler 2014; Goepel and Crolla 2020; Mitterberger et al. 2022). A conclusive review of the aforementioned projects as well as an outline of the contributions of the research presented in this article are given in Sect. 2.3.

2.2 Partially ordered systems in architecture

The following section provides a review of partially ordered systems in contemporary architecture from the realm of computational design and construction. Vernacular structures are another field which may serve as a resource for reviewing partially ordered systems in architecture. They were not considered here, since the use of a digital notation system on a computer in combination with a partially ordered system was central to this specific project. The reviewed projects are evaluated based on the type of material system used in combination with the resulting partial order logic, as well as the integration of AR into the construction process.

Four different groups of material systems can be distinguished in the reviewed projects: poured or thrown elements, interlocked elements, joined elements and woven or laid elements, some projects using a combination of elements. Partial order in these material systems may derive from a minimum distance or a known location and orientation between neighbouring elements.

In systems consisting of poured or thrown elements, a building unit is not deliberately placed, but finds its position in relation to its neighbours. From 1996 to 1997, Kentaro Tsubaki conducted a Master's thesis supervised by Dan Hoffman and Peter Lynch and published several years later, in which he developed material systems from non-convex elements loosely poured onto each other (Tsubaki 2012). Several projects under the supervision of Michael Hensel and Achim Menges conducted studies on granular materials consisting of designed particles. The common principle of this research is the geometric interlocking of artificially produced particles which are loosely poured, either with or without formwork. In this context, Eiichi Matsuda developed non-convex particles designed from linear elements (Hensel and Menges 2006a). Anne Hawkins and Catie Newell produced such particles from sheet materials (Hensel and Menges 2006b). Selim Bayer and Kyle Shertzing further developed Eiichi Matsuda's system on a larger scale (Hensel et al. 2010). In an installation entitled "Remote Material Deposition", developed and published in 2014 in two articles first-authored by Kathrin Dörfler and Luka Piškorec, loam projectiles were deposited by a six-axis articulated robot using remote sensing (Dörfler et al. 2014; Piskorec 2014). Kieran A. Murphy first-authored research on granular materials made from designed particles in the realm of granular physics, a project developed for an application in the field of architecture and thus relevant to this overview. The team developed stiff, non-convex particles with hook-like geometries, which permitted entangling between the elements. Granular materials consisting of these particles allowed for the formation of stable columns with high aspect ratios (Murphy et al. 2016). Petrus Aejmelaeus-Lindström first-authored two consecutive articles introducing a material system named "Jammed Architectural Structures (JAS)". This architectural material system used a granular material–rock–in combination with string placed in a geometrically defined and pre-planned manner acting as a reinforcing element (Aejmelaeus-Lindström et al. 2016, 2020). Research and teaching projects on granular materials consisting of designed particles, first-authored by the last author and published in 2016 and 2021, explore how the behaviour of a granular material can be calibrated through the geometry and materiality of its component particles. Particle shapes included convex, non-convex and double non-convex geometries. Shape-changing particles were also investigated (Dierichs and Menges 2016, 2021a). In material systems consisting of poured or thrown elements, the minimum element-to-element distance and the geometry of the particles induce partial order. Adding string may induce a secondary layer of order, as does the directing of the ballistic trajectory of a thrown element.

Working with interlocked elements denotes the deliberate placing of geometrically defined building blocks–the elements. In the aforementioned Master’s thesis by Tsubaki, non-convex elements were also stacked one by one, gaining structural stability via geometric interlocking (Tsubaki 2012). A project first-authored by Arielle Blonder in 2017 assembled a wall of longitudinal elements, which had designed protrusions allowing for interlocking. The structure mimicked a bird’s nest (Blonder 2017). Maria Larsson first-authored an article in 2019 which presents structures made from tree branches. These are scanned and pre-arranged in a digital model. Using audio-visual directives for the craftsperson, the elements are then manually oriented into place, cut for interlocking and assembled (Larsson et al. 2019). Within these systems of interlocked elements, partial order is induced through element-to-element distance as well as through the location and orientation of each successive element. The latter may change throughout the construction process, since the interlocked joints are friction-based and may shift.

In material systems consisting of joined elements, building elements are fixed together with a mechanical part, such as a screw. Pradeep Devadass first-authored an article in 2016 on the development of the "Wood Chip Barn". The roof structure was made from locally harvested beech branches, which were 3D scanned, and the structure's design was based on the scanning results being fed into a sorting algorithm (Devadass et al. 2016). In 2019, Kaicong Wu first-authored a design research project in which he investigated the assembly of birch stems by a six-axis articulated robot driven by machine learning algorithms (Wu and Kilian 2019). In these material systems, partial order may be established through element-to-element distance, as well as through the location and orientation of each successive element.

Woven or laid material systems consist of longitudinal elements which are also referred to as fibres, threads, yarns or filaments. In 2013, a group led by Neri Oxman built the "Silk Pavilion", a structure made by silkworms that were placed on silk threads laid out on frames with a six-axis articulated robot (Oxman et al. 2014). In terms of describing this project as a partially ordered system, the robot-laid thread can be regarded as an ordered system, since the pathways are calculated based on input of defined data sets. Yet, the silkworms induce non-order in the pavilion, since only their location of placement and general spinning behaviour are known, while the actual local and global geometry of the spun structure is not. A project first-authored by Giulio Brugnaro in 2016 developed a robotically woven structure made of rattan based on the example of a weaverbird's nest, using an agent-based model with a six-axis articulated robot (Brugnaro et al. 2016). Partial order in both projects emerges from the minimum distance between elements, and from the location and orientation of each successive element.

Fundamental research from several projects discussed in this section points to the implementation of AR for partially ordered systems, such as remote sensing or audio–visual guidance (Dörfler et al. 2014; Piskorec 2014; Brugnaro et al. 2016; Dierichs and Menges 2016, 2021a; Wu and Kilian 2019; Larsson et al. 2019). The following Sect. 2.3 offers a conclusive review of research from this section as well as an outline of project contributions presented in this article.

2.3 Contributions to the current state

The majority of the projects presented in Sect. 2.1 use either head-mounted or hand-held visual devices, frequently combined with motion tracking. In the field of AR for architectural construction, the contribution of the project presented in this paper lies in the use of non-vision-based feedback to and from the user. In addition, a novel aspect is the integration of these non-vision-based sensors and actuators into a wearable–the augmented glove–which keeps the hands free for crafting.

Filaments as a material system have not been widely explored in AR for architectural construction, nor as partially ordered architectural material systems, as presented in Sects. 2.1 and 2.2. Therefore, this project also makes a valuable contribution in the area of filamentous architectural structures with partial order using AR.

Finally, the combination of construction through AR with partially ordered material systems is rarely explored. A majority of the projects work with definite or predefined adjustable goal models. Therefore, Minimal Machines also contributes to the exploration of developing and recording a set of operational steps, which may be transferred to the craftsperson via AR in the future and which in turn may be adjusted through making.

3 Methods

The project was developed through both teaching and research. Initially, it involved two design research studios at weißensee school of art and design berlin with students at the Bachelor and Master levels within the department of textile and surface design (Fig. 3). Students were introduced to the biological role model–silk cocoons. Then, they explored filaments and filament laying patterns mimicking the material makeup of silk cocoons as well as their making by silkworms to develop material systems and construction processes at an architectural scale (Bonavia and Dierichs 2023).

The course results were evaluated and further developed as a research project at the postgraduate level. At both stages, two modes of investigation were used: first, the study of analogue and digital models of animal-made structures and the extraction of their construction principles, a process summarized in Sect. 4.2.1; second, and in parallel, the creation of analogue and digital prototypes of human-made structures, emulating and expanding on these construction principles. The design process was not linear, but required continuous feedback between animal- and human-made structures.

Fig. 3

Scale modelling. A scale-model study conducted by Sara Hassoune, a student of textile and surface design at weißensee school of art and design berlin, emulates the filament-laying process of a silkworm. The process mimics how a silk cocoon is spun from the inside rather than the outside and shows similar stages to those of the biological role model. These results are partly implemented in the larger-scale experiment series

3.1 Analogue and digital models of animal-made structures

The animal-made structures—cocoons—were studied with regard to the spinning procedure of the silkworm, gathering information from videos available online and from a literature review, both largely based on the Bombyx mori species. The videos were analysed in terms of the stages of spinning the silkworm undergoes as well as the body movements it makes in space (The Caterpillar Lab 2023). The literature review was based on several articles investigating the movement of the Bombyx mori silkworm while spinning. Both the analysis of the videos and the results of the literature review were translated into spinning instructions at an architectural scale (Sect. 4.2.1).

The approach of drawing data from the literature and from material available online has proven useful with regard to the Bombyx mori silkworm, which is arguably the most-studied species. For lesser-known species, the inclusion of on-site observation of the spinning behaviour would be essential.

3.2 Analogue and digital models of human-made structures

The project was carried out in two stages. The first stage, integrated into teaching, served to rapidly test filament types and filament laying patterns as well as a first version of the AR framework. The initial prototyping of human-made structures was conducted through analogue scale models at a 1:10 scale. They were recorded in 3D digital models and in written instructions treated as pseudocode.

Initial prototyping on a 1:1 scale was carried out through oral instructions between two craftspeople who had experience with the 1:10 scale models. These 1:1 scale prototypes were then combined with the AR framework to test initial recordings of the points placed and the craftspeople's motion patterns. The second stage consisted of the more controlled set of experiments presented in this paper. All experiments used so-called structured yarn with a sinusoidal geometry made from a high percentage of wool (Fig. 4).

Fig. 4

Initial prototyping. A Based on the results of the design research studio, several 1:1 scale room-size prototypes were constructed to test anchoring solutions, construction sequence and filament types. B Eventually, a structured yarn with a sinusoidal geometry was found, allowing for expansion and elastic behaviour

All experiments used the AR framework, which included a glove with integrated AR technology, a positional tracking system consisting of two tracking devices and a laptop computer with a custom-written AR interface. Each experiment was conducted by an operator and a craftsperson. The operator monitored the AR framework and the recording and sending of data to and from the laptop computer. The craftsperson wore the augmented glove to construct the analogue filament structure on a given site.

Two digital single-lens reflex (DSLR) cameras were used for recording construction. In addition, points and motions were recorded during making. The experiments were conducted with an increasing level of complexity in the data flow: the first investigated human-to-machine data transfer, the second machine-to-human data transfer and the third machine-to-human-to-machine data transfer.

4 Results

The following two sections detail the results of the Minimal Machines project. Section 4.1.1 outlines its framework from a technical and a system point of view. Section 4.1.2 compares partially ordered systems in animal- and human-made filament architecture. Section 4.2 describes three experiments on human-to-machine, machine-to-human and machine-to-human-to-machine data transfer.

Fig. 5

Workflow overview. The AR framework consists of two lighthouses, the augmented glove and a laptop computer for simulation with live feedback. The operator runs the simulation on the laptop computer while the craftsperson wears the augmented glove. The architectural filament structure is spun in the space, fastened by anchor points

4.1 AR framework

The project's framework comprises the digital modelling environment as well as the analogue–digital input–output (IO) interface, which is referred to as the augmented glove in the following sections (Fig. 5). These system components are described in the section explaining the technical outline of the framework. Another aspect of the Minimal Machines framework is its performance as an integrated system of input and output variables and algorithms used, presented in the section on the framework's system outline.

Fig. 6

Augmented glove. The input–output (IO) interface is integrated into wearables—a wrist-strap and a glove—together referred to as an augmented glove. These are worn during construction and can be used either for continuous recording and sending of data or selectively for distinct phases of construction. (i) The inside-out body tracker is attached to the wrist-strap. It renders an output of 3D plane data in the digital modelling environment. (ii) A button and vibration motors are embedded in the glove. The button allows for sending output data of boolean states to the digital modelling environment. The vibration motors receive input data in the form of haptic pulses from the digital modelling environment. (iii) The electronic Bluetooth module is the switching point for data flow

4.1.1 Technical outline of the AR framework

The digital modelling environment and the analogue–digital input–output (IO) interface form the AR framework for filament-construction in architectural material systems. The digital modelling environment encompasses all the programmed interfaces of the AR framework, while the analogue–digital input–output (IO) interface comprises all hardware elements which form the connection between the digital and the analogue realm. The following sections describe both aspects of the AR framework in greater detail.

Digital modelling environment: The digital modelling environment is based on a computer-aided design modelling software with a visual programming interface and custom-written C# components which were combined with existing modules, most importantly the Vivetrack Grasshopper plugin (Chen 2023).

Analogue–digital input–output (IO) interface: The analogue–digital input–output (IO) interface consists of (i) an inside-out positional tracking system, (ii) a glove with an integrated output button and haptic input signal and (iii) an open-source electronic Bluetooth module (Fig. 6). The augmented glove was made by combining the commercially available Vive tracker system with a self-developed wearable textile integrating electronic sensors and actuators (Vive 2023).

4.1.2 System outline of the AR framework

In this section, the AR framework is presented as an integrated system of input and output variables as well as algorithms used (Fig. 7).

Fig. 7

Unified Modeling Language (UML) diagram of the AR framework. Initially, the variable and fixed inputs are defined. These are passed onto the simulation which renders position and actuator updates and passes them to the human operator. The human operator has two forms of interaction: interaction 1 is receiving input from the model, interaction 2 is giving input to the model. The latter are considered outputs which are in turn fed back into the simulation model

The project’s digital workflow uses open-source software for single-board microcontrollers—Arduino IDE—and a commercial computer-aided design software—Rhinoceros 3D (Rhinoceros 2023; Arduino 2023). Rhinoceros 3D includes a visual programming package–Grasshopper–to develop scripts using open-source components as well as custom C# code (Grasshopper 2023).

Information is sent to and received by the augmented glove via Bluetooth with Arduino IDE. Rhinoceros 3D is used to model and simulate the environment, Grasshopper serving as an interface between the augmented glove and the simulated model. The plugins and custom C# scripts capture and translate the sensor input from the augmented glove into a 3D geometric model. The data from the 3D geometric model are used to control the actuators of the augmented glove.
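
The following C# sketch illustrates this data flow in a strongly simplified form. It assumes that the Bluetooth module is exposed to the computer as a serial COM port and that messages use a plain-text format; the port name and the "BTN"/"VIB" message strings are illustrative assumptions, not the project's actual protocol.

using System;
using System.IO.Ports;

class GloveLink
{
    static void Main()
    {
        // Assumption: the open-source Bluetooth module enumerates as a serial COM port.
        using var port = new SerialPort("COM3", 9600);
        port.Open();

        while (true)
        {
            // Read one message from the glove, e.g. "BTN:1" when the button is pressed.
            string message = port.ReadLine().Trim();
            if (message == "BTN:1")
            {
                // In the project, this event is used to register a point in the 3D geometric model.
                Console.WriteLine("Button pressed: register a syntax point in the digital model.");
            }

            // Send actuator data back to the glove: intensities (0-255) for the
            // left, centre and right vibration motors of the haptic compass.
            port.WriteLine("VIB:0,128,0");
        }
    }
}

In the project itself, the equivalent logic sits inside Grasshopper components and custom C# scripts rather than a stand-alone program; the sketch only indicates the direction of the data flow between sensor input, digital model and actuators.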

4.2 Experiments

The experiments are described in two parts. Part 1 comprises a literature review of the spinning behaviour of silkworms, a record of the implicit textile knowledge for imitating this behaviour as well as a comparison between the two (Bayne et al. 2009). Part 2 consists of a set of three room-scale architectural experiments moving from human-to-machine to machine-to-human to machine-to-human-to-machine interaction.

4.2.1 Part 1: Partially ordered systems in animal- and human-made architecture

The project translates the logic of an animal-made filament structure—the cocoon of the Bombyx mori silkworm—into a human-made filament structure. The following section presents a literature review of the production of cocoons by the Bombyx mori silkworm. This is followed by a comparison between this literature review and the implicit textile knowledge used for the architectural demonstrator, showing how these two processes are similar and how they differ.

Literature review of the Bombyx mori silkworm's spinning behaviour: Silk is a biomaterial that is mostly associated with spider webs and the cocoon of the silkworm Bombyx mori, but it is found in many other Lepidoptera species that build cocoons or cocoon-like structures, as well as in other insect species where it serves diverse functions: similar structures are made by web spinners as tunnel coating, or by water beetles as silken rafts for their eggs (Sutherland et al. 2010). The geometries of the silk cocoons of Lepidoptera are diverse, yet for many of them the movement which the silkworms perform to make the cocoon is described as a figure-of-eight turn of the head (Streng 1974). While Miura et al. also reference patterns of v- and s-shapes on the cocoon, the figure-of-eight movement is the dominant form and the one most commonly described in the literature (Miura et al. 1994).

When looking at the figure-of-eight turn more closely, as Streng has, one will notice there are two basic ways to draw figures-of-eight in a continuous motion—either in a positive or a negative direction. Streng defines "positive" as the figure-of-eight turn progressing to the right in a Cartesian coordinate plane and "negative" as progressing to the left (Wiedbrauck 1955). Modelling these two oscillatory movements, he is able to show that the resulting figures look different depending only on their direction, and that this directly impacts how efficiently a spinning animal-agent is able to cover gaps on a substrate. Simulations show the positive figure-of-eight pattern covering the gaps more evenly; both spinning directions are found in Lepidoptera, with Bombyx mori described as spinning in a negative figure-of-eight turn (Streng 1974). While spinning in a figure-of-eight, the amplitude of the movement of Bombyx mori decreases from about 35 mm in the outer silk structures to about 2 mm while spinning the cocoon (Miura et al. 1990, 1999).

The movement of the silkworm spinning a cocoon is not limited to movement of its head, however. While spinning the cocoon, the silkworm periodically reverses direction along its body axis, fixing its abdominal part, spinning in one direction, turning, fixing the abdominal body part and spinning again. The frequency of this movement increases during the spinning process, likely because the space inside the silk structure becomes more and more confined (Miura et al. 1990). During the spinning period, the larva spins from the end of the cocoon it is facing to the middle of the cocoon and back multiple times before turning around. Miura et al. found the thread spun from the end to the middle of the cocoon to be twice as long as that spun in the reverse direction (Miura et al. 1990).

As the silk cocoon is usually the focus of research due to its economic relevance, the fact that the silk structure created during metamorphosis comprises more than the cocoon itself is often overlooked. Wiedbrauck differentiates seven substructures of the "Puppenhäutungsgespinst" (chrysalis shedding cocoon) of Bombyx mori (Wiedbrauck 1955). First, the silkworm creates a silken "Sitzspiegel" (base) on the substrate, which functions as a base for the "Stützpunkte" (supports). "Spannseile" (tension cables) attached to those create a frame within the spinning space for the filament "Wände" (walls). The now-defined spinning space is filled with loose silk, "Flockseide" (flock silk), until a roughly cocoon-shaped void remains. Within this void, the cocoon develops, consisting of a tough "Außenkokon" (outer cocoon) and a commonly thin silk layer, the "Puppenbett" (pupal bedding).

Translation of the Bombyx mori silkworm’s spinning behaviour: The human-made structure translates some principles of the Bombyx mori silkworm cocoon’s spinning process into construction sequences:

(i) The human-made structure was made in progressive construction from the outside inwards and resembles the animal-made role model in this aspect.

(ii) The human spinner—referred to as the craftsperson in the following—emulates the shimmying movement for all layers except layer 1, where filaments are laid to create straight and diagonal lines to anchor them, by going up and down rather than back and forth. In the animal-made structure, the created scaffold is looser, followed by progressively denser layers.

(iii) In the silk cocoon, more connections can be observed within the layers than between them. The human-made structure uses the same principle as much as possible, making connections merely between one layer and the next, for example between layers 2 and 3 as opposed to 1 and 4, to create a gradual progression of structure and to limit the flattening of the walls. On occasion, correctional work was done to adjust the tension and refine the shape, in which case the craftsperson travelled to the outside of the structure and then worked their way back in—something the silkworm also does.

(iv) In the silk cocoon, sericin is used like a natural glue to fasten onto the surrounding space and to lay new silk onto the cocoon walls. By contrast, the human-made structure used eyelet anchor points and carabiners instead of glue to anchor filament to walls, and knots instead of glue to create connections from filament to filament.

(v) For the Bombyx mori cocoon, silk can be entirely unreeled. In the human-made structure, a variation of the Super Munter Hitch knot was used, as it is easy to undo to regain filament, and theoretically the entire structure can easily be un-knotted (Animated Knots 2023) (Fig. 8).

Fig. 8

Super Munter Hitch knot. The Super Munter Hitch knot was used to create connections that were steadfast—especially in combination with the textured filament surface—and easily undone. Hitches made during the first layers featured two full turns, whereas those made during the densification process featured only one

The basic assumptions which the implicit textile knowledge is built on are as follows:

(i) The partially ordered architectural filament structures are built in a series of layers.

(ii) The primary differentiation between layers is the average distance between successive knot points: the first layers are characterized by larger average distances between successive knot points, and the average distance decreases with each new layer.

(iii) Connections between layers exist but are kept as much as possible between each working layer, for example layer 3, and the layer preceding it, for example layer 2. This prevents flattening of the structure and promotes gradual densification of filament around the inner structure, especially during the first layers for the creation of an outer scaffold and a distinct inner spatial enclosure.

(iv) The spinning process for the inner layers, starting from the second, is characterized by a shimmying to-and-fro movement carried out by the craftsperson–zigzagging left and right and up and down to gradually fill in a section of the structure before moving to the next section. This allows for the gradual building of a spatial enclosure, which is one of the main reasons for inducing partial order into the system.

(v) Layers are complete when the empty space between connecting points appears roughly the same throughout, so that no one section of the working layer is denser than the others.

The sequence of operational steps implements the basic assumptions in chronological order following through the layers (Fig. 9):

Fig. 9

Operational sequence. The operational sequence comprises four different stages or layers. Layer 1 establishes the outer scaffold, layer 2 the rough shape of the inner structure, layer 3 the refined shape of the inner structure and layer 4 the preliminary surface of the inner structure. All subsequent layers serve the densification of the surface

In the first layer, the anchor filaments are placed by laying long filament lines between the defined anchor points on the construction site. This is the most basic part of the outer scaffold. It requires the largest gestures and movements in the room by the craftsperson in the entire construction process and comprises fewer points than the other layers. The long lines of filament are used as textile anchors in the following layers. No knots are tied at this point, to enable the filament anchors to adjust under tension. The vertical filaments create an initial frame which is gradually pulled in during the construction process. Diagonal lines create crosses. Horizontal filaments may also be laid as textile anchor lines which help to maintain height in the central part of the structure at later stages.

The second layer defines the rough inner spatial volume. Here, the distance between points is significantly shorter than in layer 1. The connection points are created on the anchor lines laid in layer 1. The spinning motion does not follow a linear clockwise path but is characterized by the shimmying to-and-fro movement, zigzagging from left to right and up and down while progressing in a clockwise direction. In this layer, the basic shape of the structure is outlined. Here, it is important not to produce too small a space too quickly, as it will continue to get smaller as the layers progress.

The third layer is used to define the refined inner spatial volume. The same shimmying to-and-fro movement as in layer 2 is used for this process. The distances between successive knots are shorter than in layer 2. In this layer, some spinning also occurs from the outside of the structure to adjust for local tension as well as the shape of the inner structure. This process of spinning from the outside is kept to a minimum so as not to flatten layers or over-densify the outer scaffold, preserving the effect of a suspended inner volume. At this stage, the height of the spatial enclosure may be adjusted by pulling through the top and bottom anchor points.

In the fourth layer, the surfaces of the inner spatial volume are created. At this point, the structure's shape has been defined. Surface creation refers to the filling in of those parts of the structure which enclose the refined inner spatial volume created in the previous layer. This is done gradually using the shimmying to-and-fro movement.

After layer 4, each layer is characterized by a shorter distance between successive knots, which eventually leads to a densification of the structure. A gradient in density can be achieved by lowering the height of each new layer.
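
The layer logic described above—successive knots placed at decreasing distances—can be checked against the recorded knot points. The following C# sketch computes the average distance between successive knot points per layer; the point values and layer grouping are illustrative placeholders rather than recorded experiment data.

using System;
using System.Collections.Generic;
using System.Numerics;

static class LayerMetrics
{
    // Average distance between successive knot points within one layer.
    static float AverageKnotDistance(IReadOnlyList<Vector3> knots)
    {
        if (knots.Count < 2) return 0f;
        float total = 0f;
        for (int i = 1; i < knots.Count; i++)
            total += Vector3.Distance(knots[i - 1], knots[i]);
        return total / (knots.Count - 1);
    }

    static void Main()
    {
        // Each inner list holds the knot points of one layer, in the order they were set.
        var layers = new List<List<Vector3>>
        {
            new List<Vector3> { new Vector3(0f, 0f, 0f), new Vector3(3.1f, 0f, 2.8f), new Vector3(5.1f, 3.1f, 0.2f) }, // layer 1: long anchor lines
            new List<Vector3> { new Vector3(1f, 1f, 1f), new Vector3(1.8f, 1.2f, 1.6f), new Vector3(2.4f, 1.9f, 1.1f) }  // layer 2: shorter distances
        };

        for (int l = 0; l < layers.Count; l++)
            Console.WriteLine($"Layer {l + 1}: average knot distance {AverageKnotDistance(layers[l]):F2} m");
    }
}

A decreasing series of these averages indicates that the densification from layer to layer is proceeding as intended.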

4.2.2 Part 2: Experiment series

General experiment layout: The experiments successively test human-to-machine, machine-to-human and machine-to-human-to-machine interaction, thus increasing in technical and systemic complexity. The first two experiments are conducted with an equal number of input points and the filament structure for each of these experiments is started afresh. In the third, only the fourth layer is redone (Figs. 10 and 13). While carrying out the experiments, the craftsperson's movement in the process of spinning is continuously recorded and received as a series of point and spatial positioning data. In addition to the points themselves, the time between point-settings is recorded by motion tracking, indicating the speed of spinning. Thus, all points and motions as well as speeds are recorded.
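
A minimal C# sketch of how a spinning speed can be derived from the tracked positions and the time between point-settings is shown below; the sample values and the TrackedSample type are illustrative, as the project records these data through the tracking system itself.

using System;
using System.Numerics;

record TrackedSample(Vector3 Position, DateTime Time);

static class MotionSpeed
{
    // Speed between two successive tracked samples in metres per second.
    static double SpeedBetween(TrackedSample a, TrackedSample b)
    {
        double seconds = (b.Time - a.Time).TotalSeconds;
        return seconds > 0 ? Vector3.Distance(a.Position, b.Position) / seconds : 0.0;
    }

    static void Main()
    {
        var start = DateTime.Now;
        var a = new TrackedSample(new Vector3(0.0f, 1.2f, 1.5f), start);
        var b = new TrackedSample(new Vector3(0.4f, 1.3f, 1.6f), start.AddSeconds(2));
        Console.WriteLine($"Spinning speed: {SpeedBetween(a, b):F2} m/s");
    }
}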

Fig. 10

Experiments 1 and 2. Layers 1–4 of experiments 1 and 2 are represented from top to bottom, with the analogue model on the left and the digital model on the right, where grey lines indicate the craftsperson's motions and black lines the direct connections between knot locations. A Experiment 1 tested human-to-machine interaction. B Experiment 2 investigated machine-to-human interaction

Experiment 1–human-to-machine: The human sends a button signal to the simulation, registering a point in 3D space. A point is registered after each new connection between filaments. The filament syntax is thus human-generated and not pre-given. The points are then passed on to experiment 2.
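
This human-to-machine direction can be sketched as a simple accumulation of button-triggered points, as in the following C# fragment; the class and method names are illustrative rather than the project's actual code.

using System.Collections.Generic;
using System.Numerics;

class FilamentSyntax
{
    private readonly List<Vector3> syntaxPoints = new List<Vector3>();

    // Called whenever the glove button is pressed while the tracker reports handPosition.
    public void OnButtonPressed(Vector3 handPosition) => syntaxPoints.Add(handPosition);

    // The recorded syntax as an ordered polyline: one segment per new filament connection.
    public IReadOnlyList<(Vector3 From, Vector3 To)> Polyline()
    {
        var segments = new List<(Vector3 From, Vector3 To)>();
        for (int i = 1; i < syntaxPoints.Count; i++)
            segments.Add((syntaxPoints[i - 1], syntaxPoints[i]));
        return segments;
    }
}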

Fig. 11
figure 11

Toroidal polyhedron. The toroidal polyhedron formed during the construction process is displayed in transparent red. Poloidal and toroidal directions are indicated as red lines with arrows (Weisstein 2023). This formation allowed for testing the construction of spatial enclosures and surfaces using the AR framework

Experiment 1 started with intuitive decisions and knowledge gained from previous smaller-scale models. Sketches led to the rough shape of a toroidal polyhedron (Fig. 11) (Weisstein 2023). The structure was defined within eight corner anchor-points which formed a volume of 5.1 by 3.1 by 3.1 m. Four layers in total were laid and recorded. In layer 1, a rectangular scaffold was formed from the mono-filament. Here, the filament was laid by crossing every surface of the anchor-point volume with diagonal lines and two vertical filaments at each corner. In layer 2, a looser scaffold was built up within the first scaffold by moving in vacillating motions between left and right in a counter-clockwise direction. This served to bring the outer scaffold towards the centre and to create the rough oval shape and size of the final structure. Layer 3 required the most intricate operational steps. First, the inner oval of the toroidal structure was defined by repeating vacillating motions similar to those of layer 2, but at smaller distances and only from the inside of the structure. Meanwhile, the internal surface was created in the toroidal direction. Subsequently, curvature in the poloidal direction was defined, simultaneously tensioning any looser filaments to maintain and increase stability. Towards the end of spinning this layer, any scaffold filaments left behind were tensioned from the outside. In layer 4, a surface on the inner oval was created with the mono-filament, with the oval's left side rendered more densely.

The following qualitative observations were made during the physical part of the prototype construction in experiment 1. Points were marked with a button click after making a knot, with the exception of layer 1, which did not include knots; here, points were marked at the anchor positions. Once the knot was made, the craftsperson pointed the tracker directly at the knot and clicked the button. Most of the structure was spun from the inside, with the exception of a small part of layer 3 towards the end. For the length of the filament, the measurements recorded for each layer are approximate, as estimations of how much filament was left on the bobbin were possible only at the end of each layer. After unravelling experiment 1, all of the filament was collected onto a single bobbin, which could be measured to compare accuracy.

The following qualitative observations were made during the digital part of the prototype construction in experiment 1. For layers 1 and 2, the number of points recorded corresponds to the number of points marked by clicking. Starting from layer 3, errors in the recording process were sometimes incurred due to several different issues such as human fatigue, lack of concentration or machine fatigue, particularly due to the prototype nature of the augmented glove. Such errors were recorded as extra points and later removed to maintain a clean data set that corresponded to the physical installation. Despite having a calibration station and wall-mounted tracking devices, the tracker and tracking device positions still changed slightly each time the setup was restarted. This generally occurred at the start of a work-day or in the case of a system crash. For each experiment, this was corrected internally, first by taking the recorded tracker calibration positions from each calibration and overlapping them. Then, the two experiments were laid on top of each other, again using the tracker calibration position as the reference point. For movement recording, it was necessary to batch movement points in groups of 2000 to prevent the digital model from crashing.
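
The internal correction of the slight shifts between restarts can be pictured as aligning each session to a shared calibration reference, as in the brief C# sketch below. It assumes the drift can be approximated as a pure translation; any rotational component is ignored here, and the names are illustrative.

using System.Collections.Generic;
using System.Linq;
using System.Numerics;

static class CalibrationAlignment
{
    // Shift all points of one session so that its recorded calibration position
    // overlaps the calibration position used as the common reference.
    public static List<Vector3> Align(IEnumerable<Vector3> points,
                                      Vector3 sessionCalibration,
                                      Vector3 referenceCalibration)
    {
        Vector3 offset = referenceCalibration - sessionCalibration;
        return points.Select(p => p + offset).ToList();
    }
}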

The following quantitative observations were made during prototype construction in experiment 1. A total of 252 knots were recorded, of which 207 were valid and 45, or 17.86 percent, were culled. This could be due to the over-sensitivity of the augmented glove, which also required some time to get used to while spinning and knotting. Some knots were accidentally marked three times, simply due to the hand clenching into a fist. Roughly 192.60 m of filament were used and the entire experiment took circa 150 min.

Experiment 2–machine-to-human: The points of experiment 1 are used as input for experiment 2. The simulation sends a series of vibration signals to guide the human to the points one by one, following the filament-syntax from experiment 1. Since exact point coordinates are hard to replicate from one installation to the next, the human is guided into potential zones for making a new filament-to-filament connection (Fig. 12). To this end, a sphere is created around the point already tracked in experiment 1. Three vibration motors for left, centre and right indicate the direction of travel as a haptic compass, guiding the human into the diameter of the first sphere–denoted as the tolerance zone. Once the human enters the tolerance zone, the vibration signals change from a compass, leaving only the central motor to vibrate. The intensity of this vibration increases the closer the human gets to the second sphere—denoted as the goal zone—and therefore the anchor point. Once this is reached, the motors vibrate and a new connection is made from this point. This way, information may be passed on from installation to installation while taking the emergent properties of each individual structure into account.
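
The guidance logic can be summarised in a short C# sketch. The zone radii, intensity values and the mapping of the bearing angle to the left, centre and right motors are assumptions made for illustration; the tuned values used in the experiments are not reproduced here.

using System;
using System.Numerics;

record VibrationSignal(byte Left, byte Centre, byte Right);

static class HapticCompass
{
    const float ToleranceRadius = 0.60f; // outer sphere: tolerance zone (assumed radius in metres)
    const float GoalRadius      = 0.10f; // inner sphere: goal zone (assumed radius in metres)

    public static VibrationSignal Guide(Vector3 hand, Vector3 forward, Vector3 target)
    {
        Vector3 toTarget = target - hand;
        float distance = toTarget.Length();

        // Goal zone reached: all motors vibrate and a new connection can be made.
        if (distance <= GoalRadius)
            return new VibrationSignal(255, 255, 255);

        // Inside the tolerance zone: only the central motor vibrates, its intensity
        // growing as the hand approaches the goal zone.
        if (distance <= ToleranceRadius)
        {
            float t = 1f - (distance - GoalRadius) / (ToleranceRadius - GoalRadius);
            return new VibrationSignal(0, (byte)(55 + 200 * t), 0);
        }

        // Outside the tolerance zone: act as a compass. The bearing between the hand's
        // forward direction and the target, projected onto the horizontal plane (Z up),
        // selects the left, centre or right motor.
        Vector3 flatForward = Vector3.Normalize(new Vector3(forward.X, forward.Y, 0f));
        Vector3 flatTarget  = Vector3.Normalize(new Vector3(toTarget.X, toTarget.Y, 0f));
        float cross = flatForward.X * flatTarget.Y - flatForward.Y * flatTarget.X;
        float dot   = Vector3.Dot(flatForward, flatTarget);
        float angle = MathF.Atan2(cross, dot); // signed bearing; sign convention assumed

        if (MathF.Abs(angle) < MathF.PI / 8f) return new VibrationSignal(0, 180, 0);
        return angle > 0f ? new VibrationSignal(180, 0, 0) : new VibrationSignal(0, 0, 180);
    }
}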

Fig. 12
figure 12

Zones of the construction space. The haptic compass on the augmented glove is first used for spatial orientation in the room towards a tolerance zone where the point is located. Once within this zone, the craftsperson searches for a more precise location of the goal point. This is found once the craftsperson intersects the goal zone, where a new point is created

The structure from experiment 1 was taken down and any extra recorded points from layers 3 and 4 were deleted. The fabrication of layers 1 and 2 was straightforward, replicating the points from experiment 1 via the signals sent to the augmented glove. The process took longer than during experiment 1 and the movements were clearly more meandering, as can be seen in the digital recordings of the process (Fig. 10). In layer 1, the very regular configuration of points added to the ease and speed of construction since the sequence was easily memorized: in the next instance of the experiment series, the individuals conducting the experiments might need to differ between one experiment and the next to distinguish between memory and machine input. From layer 2 onwards, it was no longer possible to remember the syntax. Thus, it was necessary to rely on the vibration compass to locate the points. Despite this being a relatively straightforward process, it was sometimes difficult to identify where to make a connection when two or more filaments were present within the indicated zone. In layer 3, it became slightly more complex to locate the recorded points and determine where connections should be made. Thus, the tolerance zone indicating potential target points was reduced in radius to make it clearer where points might be located. Regardless, sometimes two or more filaments were present within the indicated zone. In this case, it was not possible to discern on which filament a connection had been made during experiment 1. Additionally, some points were indicated in empty space, floating above other filaments, for example. This could be due either to a calibration error, to errors in the syntax, or to the difference in manual tension between the two experiments.

As a result, some of the points recorded during layers 3 and 4 were made above, below, or to the left or right of the original points from experiment 1, at the discretion of the craftsperson. This ultimately impacted the overall structure, with the walls of the inner oval becoming slightly shorter during experiment 2. This implies a difference in tension between experiments 1 and 2, with more loose-hanging filaments in the latter. In layer 4, even more points were indicated as hovering above the structure because the walls of the inner oval were already considerably shorter by this point. In such cases, the closest filament was chosen as the anchor, keeping the curve of the wall in mind. The new, altered syntax created in layers 3 and 4 resulted in a generally similar oval-shaped structure with curved walls, with the opening of the curve facing the outside of the structure. However, the oval was slightly shorter and wider than in experiment 1, and the filaments were generally under greater tension within the surface of the structure. This changed the material system on a local scale, causing some filaments to twirl around each other and others to hang loose.

The following qualitative observations were made during the construction of experiment 2. After longer periods, it became tiring to keep the arm outstretched. This was mainly a concern during layers 3 and 4, which took longer to record owing to the larger number of points and the increased complexity of establishing their location. The craftsperson began to lose sensitivity in the palm of their hand and had to touch the motor with the other hand to confirm it was actually vibrating. The augmented glove was damaged twice due to wear and tear. Despite this, it performed very well for the duration of both experiments.

The following quantitative observations were made during the construction of experiment 2. A total of 210 knots were made. As in the previous experiment, some knots were recorded accidentally due to human error, such as clicking twice. As a consequence, 6 points were culled in total and 1 point was found to be missing. Roughly 200.11 m of filament were used. The construction process took circa 349 min. The longer duration compared to experiment 1 was due to technical issues with the augmented glove in layer 3 and to the time needed to find each goal point, whereas there was no goal point to be found in the first experiment.

Experiment 3–machine-to-human-to-machine:

Experiment 3 is a correctional experiment (Figs. 13 and 14). After unwinding layer 4 of experiment 2, the craftsperson was directed to the same points in the structure recorded during layer 4 of experiment 1. This process was similar to experiment 2. However, this time the craftsperson either chose to accept the indicated point or to pick a new location for a point. In the case of a new point, it was recorded using the button on the augmented glove, and the indicated point was culled and replaced with the newly recorded point. Two factors influenced the decision to create a new point. First, the appearance of floating points: sometimes points were indicated in a zone where there was no filament to anchor onto; in this scenario, the craftsperson chose the closest filament. Second, corrections of distance: sometimes the indicated points were too close to or too far from the previous point because the structure had changed from experiment 1 to experiment 2. A new location was then chosen in keeping with the laying logic of the layer, which in this case was wall creation. This experiment thus allowed the intuition of the craftsperson to correct or adapt the structure as necessary.
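The accept-or-override logic of this correctional step can be summarised in a short sketch. The callback name accept_or_override is hypothetical; it stands in for the craftsperson's decision, made via the button on the augmented glove, to accept an indicated point or to replace it with a newly recorded one.

```python
def correct_layer(indicated_points, accept_or_override):
    """Replay the indicated points of a previous layer and let the craftsperson
    either accept each point or replace it with a newly recorded one.

    `accept_or_override(point)` is assumed to return None when the indicated
    point is accepted, or the replacement point recorded via the glove button
    (e.g. the closest filament, or a location keeping to the wall-laying
    logic of the layer) when it is overridden."""
    corrected = []
    for point in indicated_points:
        replacement = accept_or_override(point)
        if replacement is None:
            corrected.append(point)        # indicated point accepted
        else:
            corrected.append(replacement)  # indicated point culled and replaced
    return corrected
```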

Fig. 13
figure 13

Experiment 3. A.1 and A.2 Layer 3 of experiment 2 is the starting point of experiment 3. Figure A.1 shows the analogue and A.2 the digital model of this starting point. B.1 and B.2 The new layer 4 is made on top of the existing structure of layer 3 from experiment 2. Figure B.1 shows the analogue and figure B.2 the digital model of this layer 4 of experiment 3

The following qualitative observations were made during the construction of experiment 3. Whereas the craftsperson in experiment 2 chose the closest possible filament when there was no filament to anchor onto where a point was indicated, the craftsperson in experiment 3 had more opportunity to correct or adapt the structure. In experiment 2, the walls became shorter and showed an increase in dangling filaments in the toroidal polyhedron region. This could be due to the craftsperson having to guess or choose the closest possible filament within the target zone and to changes in the manual tension applied. In comparison, experiment 3 showed fewer dangling filaments in the toroidal polyhedron region. While the ability to correct indicated points by choosing a new location was positive overall, the new point was sometimes very close to the next point in the sequence, resulting in a slightly shifted sequence. Still, the overall distribution of points was more even.

The following quantitative observations were made during the construction of experiment 3. A total of 86 points was recorded. A crash occurred at the 17th point. Two points were culled in comparison to experiment 1. The total amount of filament in experiment 3 was roughly 200.55 m, which includes the filament used in layers 1 to 3. The recorded time for experiment 3 was 108 min.

Fig. 14
figure 14

Comparison of layer 4 of experiments 1, 2 and 3. A.1–A.3 The side views of layer 4 of experiments 1, 2 and 3 show the similarities and differences in movement pattern across all three experiments. B.1 and B.2 The point recording of layer 4 in experiment 1 is shown in side and top view, with the clicked points displayed in blue and the cleaned points in red. C.1 In an overlay of layer 4, experiment 1 is shown in red and experiment 2 in yellow. C.2 In an overlay of layer 4, experiment 1 is shown in red and experiment 3 in grey

5 Discussion and outlook

The project Minimal Machines presents an AR framework for partially ordered material systems in architecture, tested on structures made from filaments (Fig. 15). The contributions and limitations of the project are discussed separately for AR in architectural construction and for partially ordered material systems, and an outlook on further research is given for each area.

Fig. 15
figure 15

Construction process of a public installation. The AR framework has been deployed in the realization of a public installation. A The spatial structure was constructed drawing on the implicit textile knowledge gained through the prototyping and experiment phases. B AR was used to record the process. This recording will serve as a database for subsequent iterations of the same filamentous material system

The main contribution to the field of AR for architectural construction is the development of a purely haptic interface, the augmented glove. This use of a non-visual and non-auditory device for architectural construction is crucial, since uninhibited eye and ear contact with the surroundings is considered essential to a craft-driven process. In other words, the work remains in hand, conducted manually with all information flowing to and from the hand.

Limitations of the proposed AR framework are mainly observed on a technical level: the hardware devices are still rather large compared to the human hand, which inhibits, alters and slows down hand motions compared to a non-augmented manual process.

Further research in the area of AR for architectural construction with haptic interfaces will therefore be directed towards sensors and actuators of very small dimensions, so that the manual process is merely recorded and supported without interference.

The project’s main contribution to the field of partially ordered material systems lies in establishing a logic for the construction of a human-made partially ordered filament structure derived from the construction processes observed in animal-made silk cocoons. The framework can be transferred to other partially ordered material systems in architecture, such as structures made from poured granular materials or nest-like structures made from longitudinal non-continuous elements.

In terms of limitations, this logic has so far been passed on verbally as well as through drawings and models, and only then recorded through the proposed AR framework. Initial translations of these logics into algorithmic principles have been implemented, for example by using a sphere to indicate a field of operation rather than a precise point for knotting in experiment 2. Rather than a limitation, however, this manual-to-digital translation should be regarded as an inevitable step in the development of a craft-driven process.

Future iterations of the same partially ordered material system will be preceded by an analysis of the recorded data and their translation into an artificial intelligence (AI)-driven recommendation model for the craftsperson, thus shifting the focus from collecting construction data during manual construction to sending data input to the user.