AI & SOCIETY

Volume 21, Issue 4, pp 549–566

Socializing artifacts as a half mirror of the mind

Authors

  • Toyoaki Nishida, Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University
  • Ryosuke Nishida, Keio Research Institute at SFC, Keio University
Original Article

DOI: 10.1007/s00146-007-0107-4

Cite this article as:
Nishida, T. & Nishida, R. AI & Soc (2007) 21: 549. doi:10.1007/s00146-007-0107-4

Abstract

In the near future, our lives will routinely be surrounded by fairly complicated artifacts, enabled by autonomous robot and brain–machine interface technologies. In this paper, we argue that what we call the responsibility flaw problem and the inappropriate use problem need to be overcome in order for us to benefit from complicated artifacts. To solve these problems, we propose an approach to endowing artifacts with the ability to communicate socially with other agents, based on the artifact-as-a-half-mirror metaphor. The idea is to have future artifacts behave according to a hybrid intention composed of the owner's intention and social rules. We outline the approach and discuss its feasibility together with preliminary work.

1 Artifacts—from tools to autonomous robots

The conventional characterization of artifacts views them as tools for extending human capabilities. Before the last century, this extension was mostly limited to physical capabilities. For example, horse-drawn carriages, steam locomotives, and automobiles remarkably extended our capacity for moving and carrying.

In the second half of the last century, the invention of computers and the Internet significantly changed the way humans are extended. First, the introduction of computers brought about artifacts with information-processing capability, and with them automation everywhere. Aircraft and other complex transportation systems are mostly controlled and monitored by computers, industrial robots perform complex assembly tasks at high speed in factories, and sushi robots make various kinds of sushi at sushi bars, to name just a few. Second, personal computers and the Internet extended our mental world by substituting for part of our mental functions and by allowing us to communicate with each other beyond space and time.

Artificial intelligence (AI) technologies have accelerated the sophistication of artifacts. In the early days, AI researchers mostly aimed at realizing intelligence through heuristic search. In the 1970s, they tried to invent knowledge representation languages to explicitly represent experts' knowledge. After the 1980s, they put much emphasis on machine learning and evolutionary computing, trying to invent artifacts that could improve their behavior through experience.

As a result, AI researchers have succeeded, though in a still limited way, in realizing autonomous robots that can cope with novel situations without human assistance. Autonomous robots range from intelligent vehicles (most notably NASA's Mars Exploration Rovers) to autonomous rescue robots that can search for and aid victims in disaster situations. Autonomous robots can be metaphorically characterized as a full mirror that reflects a programmer's mind, in the sense that an autonomous robot behaves as the programmer would in a given situation (Fig. 1).
Fig. 1 Autonomous robots as a full mirror of the mind

However, the technology of autonomous robots still remains in its infancy. It is not likely that we will succeed in building autonomous robots that are both versatile and flexible like humans in the near future, even though we might be able to build smart robots that operate in a narrow domain, or versatile robots that are less flexible.

It is quite unlikely that we can develop autonomous robots that satisfy Asimov's three laws of robotics: (1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey the orders given to it by human beings except where such orders would conflict with the first law; (3) a robot must protect its own existence as long as such protection does not conflict with the first or second law (Clarke 1994). The major reason is that our wisdom is quite limited. Due to the unpredictability innate in the real world, we have to give up the classic view that the more science and technology advance, the less uncertain the world becomes. Although we might assume that the more science and technology advance, the more knowledge we gain about the world as it is, the advance also expands our world, opening up more frontiers than it civilizes.

In order to put autonomous artifacts into practical use in human society, we have to solve what might be called the responsibility flaw problem, which is caused by the opaqueness of the functions performed by complex artifacts. On the one hand, the user cannot take responsibility for what the artifacts cause, for the artifacts may be complex far beyond her/his intellectual capabilities. Indeed, numerous human-made disasters have occurred, such as accidents in large plants (Perrow 1984) or airplanes. On the other hand, it will become considerably difficult for a product maker to take full responsibility for the complex artifacts they manufacture, for it is quite difficult to eliminate the possibility that the user misuses artifacts, even if it were possible to manufacture error-free artifacts. Human intelligence has a huge tacit dimension (Polanyi 1967). The intelligence exhibited by autonomous robots may be quite different from that of humans, even when autonomous robots eventually come to maturity. In this vein, it appears impossible to implement Asimov's three laws of robotics in autonomous robots, for robot designers cannot anticipate all the situations their robots will face.

We need to understand that humans have only a limited capability of expressing their own ideas or understanding the world. The capability of both designers and users is limited compared with the sophistication of the functions artifacts may come to bear. As artifacts become more complex, their designers will have ever greater difficulty communicating their functions to users, no matter how wonderful those functions might be. Writing a good manual is a fairly difficult task, and even when a manual is well written, the user may not be able to comprehend it in a short time, or s/he might simply refuse to read it, as many computer users do.

Although some authors warn that artifacts should be well designed (Norman 2004), it appears almost impossible to find simple and understandable interfaces to complex functions as artifacts grow more complex with the advancement of technology. For example, it takes a huge amount of time to teach novices how to use computers. This is not because operating systems are badly designed, but simply because computers themselves are very complex artifacts.1

2 Brain–machine interface

The brain–machine interface (BMI), or brain–computer interface (Santhanam 2006), is a new technology for embedding artifacts in the human body through a biological interface so that the artifacts can be directly controlled by the brain or by biological signals. Although still at an experimental stage, it will eventually allow one to control artifacts as if they were part of her/his body, without learning symbolic commands (Fig. 2).
Fig. 2 Brain–machine interfaces

A cyborg is a person whose physiological functioning is aided by artifacts embedded in the body (Mann and Niedzviecki 2001) (Fig. 3). Unlike traditional robotics, cyborg technology directly connects artifacts with the human body, for example by plugging electrodes into nerves. The BMI allows an even more intimate and natural extension of the human body.
Fig. 3 Cyborgs

In contrast to the artifact-as-a-full-mirror-of-the-mind approach, we can characterize BMIs and cyborgs as based on an artifact-as-a-transparent-glass approach, for the ultimate goal of BMI research is to achieve a faithful projection of human thoughts.

Unfortunately, a serious problem remains unsolved, which might be called the inappropriate use problem. Even if artifacts never behave in an unintended fashion, they might bring about disastrous outcomes when applied to illegal or malicious purposes. Ironically, the more faithfully artifacts augment human intentions, the more likely they are to be exploited by those with malicious intent, possibly producing disastrous monsters.

A typical example is an automobile driven by a drunken driver. Although automobiles significantly extend the human capability of moving and carrying, a drunken driver may cause serious traffic accidents that s/he would never intend in a normal condition. The worst example is soldiers empowered by BMI. Even short of such extremes, artifacts amenable to naive human intentions might easily turn innocent mischief, which happens from time to time, into a disastrous outcome. This is a real threat, for AI technology may allow one to create highly complex artifacts that stand beyond our understanding and might hurt human society without being noticed.

After all, we need to take the entire human society into account even when we discuss the human–artifact relationship, for not only humans but also artifacts are social beings that may deeply affect inter-human relationships. This leads to the idea of socializing artifacts, discussed in the next section.

3 Artifacts as a half mirror

A half mirror is a semi-transparent object that both passes light from behind and reflects an image of the real world. In augmented reality, half mirrors are often used to create an augmented real world in which a virtual image is overlaid on the real world (Fig. 4) (Wellner et al. 1993).
Fig. 4 A half mirror used to augment the real world with a virtual image

In this paper, we use the half mirror as a metaphor for an artifact that can augment reality with virtual information. The artifact-as-a-half-mirror metaphor implies that artifacts are not transparent, in the sense that they are not completely amenable to the intention of the owner as they are in BMIs. Instead, they show an augmented image of the real world annotated with supportive information, and they affect the real world in a way that is considered social according to the social rules encoded in their knowledge bases.

Artifacts based on the artifact-as-a-half-mirror metaphor are not completely autonomous. Basically, they act autonomously according to social rules. Their behavior reflects the owner's intention as long as it is consistent with those rules. When asked, they can explain to the owner why they are taking the current action. When artifacts face situations they cannot handle, they come to a safe stop and explain their difficulty to the owner, who in turn may take over the artifact's role and resolve the situation by manual control, if s/he wants.

Thus, the artifact-as-a-half-mirror approach characterizes artifacts as social agents that mediate social relations. In this new paradigm, humans no longer perform social functions directly; instead, people interact with each other through their social agents (Fig. 5). Inter-human interaction is realized by coupling the human–agent communication between the owner and her/his social agent with the social communication among social agents. Each person's intention is communicated to her/his social agent through human–agent communication, and social agents interact with each other on behalf of their owners. Each social agent tries to maximize the satisfaction of the owner's intention so long as it complies with social rules. It is very much like people negotiating with each other through artificial attorneys, not only in legal negotiations but also in daily communications, except perhaps in intimate relationships.
Fig. 5 The computational framework of the artifact-as-a-half-mirror metaphor
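
As a minimal computational sketch of this hybrid intention, consider an agent that admits only the candidate actions permitted by every social rule and, among those, picks the action that best satisfies the owner. The names and scenario below are hypothetical illustrations, not a specification of any existing system.

```python
# A minimal sketch of the hybrid-intention idea: the agent maximizes the
# owner's utility over only those actions that every social rule admits.
# All names here are hypothetical; this is an illustration, not a spec.

from typing import Callable, Iterable, Optional

Action = str
SocialRule = Callable[[Action], bool]      # True if the action is socially acceptable
OwnerUtility = Callable[[Action], float]   # how well the action serves the owner

def choose_action(candidates: Iterable[Action],
                  owner_utility: OwnerUtility,
                  social_rules: list[SocialRule]) -> Optional[Action]:
    """Return the owner-preferred action among socially admissible ones.

    Returns None when no candidate is admissible, modeling the 'safe stop'
    described above: the artifact halts and defers to its owner.
    """
    admissible = [a for a in candidates
                  if all(rule(a) for rule in social_rules)]
    if not admissible:
        return None  # safe stop: defer to the owner
    return max(admissible, key=owner_utility)

# Example: a 'socialized automobile' choosing a cruising speed.
speeds = ["40km/h", "60km/h", "90km/h"]
under_limit: SocialRule = lambda a: int(a[:2]) <= 60   # traffic rule
in_a_hurry: OwnerUtility = lambda a: int(a[:2])        # owner prefers faster

print(choose_action(speeds, in_a_hurry, [under_limit]))  # -> 60km/h
```

The point of the sketch is that the owner's preference acts only as a tiebreaker within the socially admissible set, never as a way to escape it.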

Consider how an automobile might be improved into a "socialized automobile" using the artifact-as-a-half-mirror approach. The driver can no longer directly drive the socialized automobile; instead, the socialized automobile takes the passengers to their destination. The passengers' intentions, such as route selection or time constraints, are passed to the socialized automobile and reflected in its behavior so long as they are consistent with traffic rules. By contrast, an automobile based on the artifact-as-a-full-mirror approach is a completely automatic vehicle in which the passenger can only specify goals, while one based on the artifact-as-a-transparent-glass approach might be a power suit that allows the user to run as fast as an automobile, which may definitely be fun but might be extremely dangerous.

The responsibility flaw problem will be solved in principle in this framework, for artifacts are designed to comply with the social rules from the beginning and, hence, are transparent at the granularity specified by the social rules. The inappropriate use problem will be solved, for people can affect other people only through social agents that comply with the social rules.

The artifact-as-a-half-mirror metaphor extends the recent trend of building a "computer butler" that can execute better solutions on behalf of the user. For example, numerous features of a modern automobile, such as automatic route recommendation, the alcohol interlock, and automatic parking, can be regarded as forms of computer butlers that mediate the driver's intention to the automobile in either a positive or a negative way. In the near future, more advanced functions such as drowsiness warning and obstacle detection will be introduced under the concept of the advanced safety vehicle. Some of these artifacts concern only individual users, while others mediate social interactions among multiple persons. The most extreme case might be autonomous weapons that reflect the intention of offenders against defenders. Artifacts that merely act on behalf of the owner extend both the good and the evil wills of the owner. The artifact-as-a-half-mirror approach is an attempt to socialize the functions of a computer butler and embed them into complex artifacts.

In the next two sections, we look into the details of social communication and human–agent communication and survey early work.

4 Public relations in the computer-mediated society

The purpose of social communication is to coordinate the behaviors of individuals (Fig. 6). We attempt to sustain social interaction by introducing social artifacts that mediate interactions among people.
Fig. 6 Social artifacts as mediators

Social communication may be realized in a hierarchical fashion. The role of social communication at the base level is to dynamically allocate computational resources so as to achieve maximal utility while taking fairness into account under given priority settings. Numerous techniques of automatic resource allocation have been developed and deployed in implementing operating systems and controlling network traffic.
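
As an illustration of this base level, the sketch below allocates a fixed resource budget among agents in proportion to priority weights, capping each agent at its demand and redistributing the surplus; this is a simple form of weighted fair sharing, and the scenario and names are hypothetical.

```python
# A sketch of base-level social communication: allocate a shared resource
# in proportion to priority weights (weighted fair sharing), capping each
# agent at its demand and redistributing any surplus. Names are hypothetical.

def weighted_fair_allocation(demands: dict[str, float],
                             weights: dict[str, float],
                             capacity: float) -> dict[str, float]:
    alloc = {a: 0.0 for a in demands}
    active = set(demands)
    remaining = capacity
    while active and remaining > 1e-9:
        total_w = sum(weights[a] for a in active)
        satisfied = set()
        for a in active:
            share = remaining * weights[a] / total_w
            alloc[a] += min(share, demands[a] - alloc[a])
            if demands[a] - alloc[a] < 1e-9:
                satisfied.add(a)
        remaining = capacity - sum(alloc.values())
        if not satisfied:
            break  # every active agent received its full proportional share
        active -= satisfied   # redistribute leftover among unsatisfied agents
    return alloc

# Three agents share 10 units of bandwidth with priorities 3:1:1.
print(weighted_fair_allocation({"A": 2, "B": 4, "C": 4},
                               {"A": 3, "B": 1, "C": 1}, 10.0))
# -> {'A': 2.0, 'B': 4.0, 'C': 4.0}: A's surplus flows to B and C.
```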

Social communication at higher levels covers more abstract social interactions, including information sharing, collaboration, negotiation, contract making, coalition formation, arbitration, and so on. Numerous intelligent algorithms have been developed in research on multi-agent systems for negotiation among agents with possibly conflicting goals, such as distributed search, problem solving, and planning; distributed rational decision making, including dynamic resource allocation and coalition formation; multi-agent learning (Weiss 1999); robust auction protocols (Yokoo et al. 2005); and so forth.
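
To make this level concrete, here is a minimal sketch of one classic mechanism from this family, the sealed-bid second-price (Vickrey) auction; it illustrates the kind of protocol cited above, not any specific cited algorithm.

```python
# A sketch of one classic negotiation mechanism among self-interested agents:
# the sealed-bid second-price (Vickrey) auction. The winner pays the
# second-highest bid. This illustrates the family of protocols cited in the
# text, not any specific cited algorithm.

def vickrey_auction(bids: dict[str, float]) -> tuple[str, float]:
    """Return (winner, price). Requires at least two bidders."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1]  # winner pays the second-highest bid
    return winner, price

# Agents bid their true valuations for a shared time slot.
print(vickrey_auction({"agent1": 8.0, "agent2": 5.0, "agent3": 6.5}))
# -> ('agent1', 6.5)
```

Because the winner's payment does not depend on her own bid, no agent gains by misreporting its valuation; robustness of such protocols against manipulation is precisely the concern of the auction work cited above.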

At the philosophical level, the negotiation of conflicting intentions has long been discussed as an important subject of social science, though in much less computational terms. Philosophers such as Thomas Hobbes discussed the negotiation between individuals and the government as a social contract problem of arbitrating conflicting benefits in a world governed by natural laws (Macpherson 1962). The question of how social order is possible is still deemed an important question in social science.

We should respect principles such as "each person is to have an equal right to the most extensive basic liberty compatible with a similar liberty of others" and "social and economic inequalities are to be arranged so that they are both (a) reasonably expected to be to everybody's advantage, and (b) attached to positions and offices open to all," proposed as part of a theory of justice by John Rawls (Rawls 1999). However, these principles should be taken only as desiderata, not rigorous rules, and implemented in an approximate way in artifacts and the artificial society. Such a best-effort attitude is already familiar from the Internet age, where providers only promise to make their best efforts to offer a good service and customers have had to get used to it. Great challenges remain for the future in this respect.

Conventionally, social design has been carried out through institutional design: representatives discuss, create, and establish an institution, and the government executes it. However, institutions are, by commission or omission, enforced only partially, owing to the cost and capacity required. For example, even though the government specifies a speed limit on the road, it is enforced only where the police watch the traffic, possibly with radar speed traps. As a result, the speed limit institution is only partially implemented.

In contrast, when social artifacts come into play, such allowance will be significantly reduced, for social artifacts embedded in the society will rigorously constrain the way people actually behave, so that their behaviors may be significantly more legitimate than before, even if not perfect. The good news is that automobiles may run much more safely than before even if the driver is panicked or reckless. However, not only might the speed limit be completely enforced, but even the route might be specified so that minimal energy is consumed or traffic jams do not occur. Even if all such social rules and preferences were carefully and equitably designed, people might feel uncomfortable, being completely controlled with no allowance left.

Fortunately, it seems that the implementation of the philosophical level of social communication will remain out of reach for a long while. Although there remains room for uncontrollable features that might become a potential cause of undesirable outcomes, human society appears to have become used to a complex world. The proposed approach is quite ad hoc in the sense that it lacks a principle. However, this seems to be the very nature of the future society. Nothing is systematic and principled; this is quite the opposite of the world envisioned by the scientists and engineers of the previous century. The lack of principles appears to be the very nature of the artificial world, and we probably need to abandon the illusion of a principled world. Instead, we need to find a way of sustaining a coherent, consistent, and, most importantly, human attitude toward the world.

5 Human–agent communication

A key to the success of the artifact-as-a-half-mirror approach is establishing human–agent communication between the owner and her/his artifacts. To some degree, it is like establishing a trust relationship with a lawyer who is an expert in legal activities. Although a human lawyer can account for her/his activities to the client in detail, it is hard to expect the same capability from an artifact acting as an artificial attorney, for the intelligence of artifacts is quite limited and may significantly lack accountability. Even if the artifact possesses sufficient knowledge about the problem, it might not be able to explain it to the owner in understandable terms.

What would happen if a human brain were directly coupled to artifacts? As brain activities are highly parallel and full of noise and inconsistent temporary thoughts, humans cannot control their brains in a secure way. It appears quite hard for a human to communicate with an artifact through this channel.

Instead, humans appear to be good at communicating with each other through embodiment. McNeill suggests that gesture and language emerge concurrently from the mind as a growth point (McNeill 2005). This accords with Damasio's argument for emotion as an interface between the body and the mind (Damasio 1994). Rather than trying to establish the owner–artifact communication channel at the verbal level, it might be more promising to do so at the nonverbal level.

We consider it feasible to implement artifacts that can communicate with people by nonverbal means. The ability to form and sustain an intention shared by the participants (a joint intention) is considered a primary goal of nonverbal communication. The communication schema we have in mind allows two or more participants to repeat observations and reactions at varying speeds to form and maintain joint intentions that coordinate behavior, which may be called a "coordination search loop" (Nishida et al. 2006). Figure 7 shows an architecture consisting of layers that deal with interactions at different speeds to achieve this coordination search loop.
Fig. 7 The hierarchical architecture of an artifact communicating at different speeds

The lowest layer is responsible for fast interaction. The design of this level is based on affordance (Gibson 1979), which refers to the bundle of cues the environment provides to the actor. It relies on people's capability of utilizing various kinds of affordances, even subtle ones. The layer at this level is designed so that a robot can suggest its capabilities to the human, coordinate its behavior with her/him, establish a joint intention, and provide the required service. The intermediate layer is responsible for interactions at medium speed. An entrainment-based alignment mechanism is introduced to enable robots to coordinate their behaviors with the interaction partner by varying the rhythms of nonverbal behaviors.

The upper layer is responsible for slow and deliberate interactions, such as those based on social conventions and knowledge, to communicate more complex ideas against a shared background. We may introduce defeasible interaction patterns to describe typical sequences of behaviors actors are expected to undertake in conversational situations, with a probabilistic description coping with the vagueness of the communication protocols used in human society.
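
A minimal sketch of how such layers might be scheduled at different speeds is given below; the periods and layer behaviors are hypothetical placeholders for the reactive, entrainment, and deliberate processes described above.

```python
# A sketch of the three-layer architecture: each layer runs its own loop at
# a characteristic rate (fast reactive, medium entrainment, slow deliberate).
# Periods and layer behaviors are hypothetical placeholders.

import time
from collections import Counter

counts = Counter()

class Layer:
    """A behavior loop that fires at its own characteristic rate."""
    def __init__(self, name: str, period_s: float):
        self.name, self.period_s = name, period_s
        self.next_due = 0.0

    def maybe_step(self, now: float) -> None:
        if now >= self.next_due:
            counts[self.name] += 1        # stand-in for the layer's behavior
            self.next_due = now + self.period_s

layers = [
    Layer("reactive", 0.05),    # fast: respond to affordance cues
    Layer("entrain", 0.5),      # medium: align rhythm with the partner
    Layer("deliberate", 2.0),   # slow: follow interaction patterns
]

start = time.monotonic()
while time.monotonic() - start < 4.0:     # run the coordination loop briefly
    now = time.monotonic() - start
    for layer in layers:
        layer.maybe_step(now)
    time.sleep(0.005)

print(dict(counts))  # e.g. reactive ~80 steps, entrain ~8, deliberate ~2
```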

An autonomous mobile chair is an artifact in this direction: it can dynamically produce the means of allowing a person to get a place to sit down (Terada and Nishida 2002). The autonomous mobile chair perceives the relation between the surface of the actor's body and the surface of the environment in terms of a measure called the affordance distance, characterized as the minimal distance between the surface of the autonomous mobile chair and the human body. The affordance distance decreases as the chair approaches the human. The optimal action sequence depends on multiple factors, such as the shape and locomotive ability of the autonomous mobile agent and the relative angle of the two surfaces.
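
The affordance distance is defined here only informally; purely as an illustration, the sketch below approximates each surface by sampled 2D points, takes the minimum pairwise distance, and moves the chair greedily to reduce it. The sampling and the greedy descent are our simplifying assumptions, not the original implementation.

```python
# A sketch of the affordance distance: approximate each surface by sampled
# 2D points and take the minimum pairwise Euclidean distance. The point
# sampling and the greedy descent below are simplifying assumptions, not
# the original implementation.

import math

Point = tuple[float, float]

def affordance_distance(chair: list[Point], body: list[Point]) -> float:
    return min(math.dist(c, b) for c in chair for b in body)

def greedy_approach(chair: list[Point], body: list[Point],
                    step: float = 0.1, iters: int = 100) -> list[Point]:
    """Translate the chair in the direction that most reduces the distance."""
    moves = [(step, 0.0), (-step, 0.0), (0.0, step), (0.0, -step)]
    for _ in range(iters):
        best = min(
            ([(x + dx, y + dy) for x, y in chair] for dx, dy in moves),
            key=lambda c: affordance_distance(c, body),
        )
        if affordance_distance(best, body) >= affordance_distance(chair, body):
            break  # local minimum reached
        chair = best
    return chair

chair = [(0.0, 0.0), (0.4, 0.0)]   # chair seat edge samples
body = [(3.0, 2.0), (3.2, 2.1)]    # back of a standing person
chair = greedy_approach(chair, body)
print(round(affordance_distance(chair, body), 2))  # small: chair has approached
```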

Terada and Nishida (2002) designed the autonomous mobile chair so that it could learn to move into a configuration where the affordance distance was minimal, and implemented it. Its shape, utility function, and typical behaviors are shown in Fig. 8, and interactions with several users during experiments are shown in Fig. 9. Although all users were able to sit down on the chair as a result of coordinating behaviors, some pointed out that the chair should have communicated its intentions more explicitly.
Fig. 8 Autonomous chair (Terada 2002)

Fig. 9 Autonomous mobile chair interacting with people (Terada 2002)

At the middle level, we consider the joint intention formation and sustention schema applicable to many situations. Consider drawing. We have difficulty drawing diagrams with the mouse, for the computer mouse often picks up various kinds of noise and, in addition, it is hard to move our hand in the intended way. Yet difficulty remains even when we use classic drawing tools such as brushes or pencils. Why is drawing difficult, even though we feel we have a clear image in mind? Natural brushes and pencils are hard to manipulate because of their physical properties, but why are computerized drawing tools difficult to manipulate as well?

We suspect the reason is that the computer drawing tool only passively senses, or at best sloppily interprets, the user's intention. Computer drawing tools might be of much more help if they took a far more active role in communicating intentions, so as to establish and sustain a joint intention with the user. By presenting its interpretation of the user's input quickly and effectively, a computer drawing tool might be able to produce a drawing as the result of a joint intention with the user.

Mohammad and Nishida implemented this idea in a novel drawing tool called NaturalDraw (Mohammad and Nishida 2006). The interactive perception paradigm was implemented by integrating signal-processing methods. A preliminary evaluation compared NaturalDraw with a conventional drawing tool. The most annoying problem for users of the conventional drawing tool was the difficulty of controlling the shape of the drawing using control points. Users of NaturalDraw were observed to use stroke deletion commands less frequently than with the conventional drawing tool (by a ratio of at least 60%), for the repetition-processing mechanism gave them better control over the final shape of the drawing, as shown in Fig. 10. It was also observed that novice users relied increasingly on the repetition-detection function as they became more confident that repetition would let them make any required modification; this allowed them to draw rougher initial drawings, assuming they would be corrected later.
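
The signal-processing pipeline of NaturalDraw is not detailed here; purely as a toy illustration of the repetition-processing idea, the sketch below resamples repeated strokes to a common length and averages corresponding points, so that repeated attempts refine the final curve.

```python
# A toy illustration of repetition processing for drawing: resample each
# repeated stroke to a fixed number of points and average corresponding
# points, so repeated attempts refine the final curve. This is only a
# stand-in for NaturalDraw's signal processing, which is not detailed here.

Point = tuple[float, float]

def resample(stroke: list[Point], n: int = 32) -> list[Point]:
    """Pick n points at evenly spaced positions along the input stroke."""
    m = len(stroke)
    return [stroke[round(i * (m - 1) / (n - 1))] for i in range(n)]

def merge_repetitions(strokes: list[list[Point]], n: int = 32) -> list[Point]:
    fixed = [resample(s, n) for s in strokes]
    return [
        (sum(s[i][0] for s in fixed) / len(fixed),
         sum(s[i][1] for s in fixed) / len(fixed))
        for i in range(n)
    ]

# Two shaky attempts at a horizontal line average into a steadier one.
attempt1 = [(x * 0.1, 0.02 * (-1) ** x) for x in range(40)]
attempt2 = [(x * 0.1, 0.02 * (-1) ** (x + 1)) for x in range(50)]
print(merge_repetitions([attempt1, attempt2], n=5))
```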
Fig. 10 Drawing with NaturalDraw (Mohammad 2006)

The notion of repetition may be generalized into an entrainment framework. Entrainment, the coordination of behavior using a shared rhythm, might be effective at this level. Entrainment-based interaction allows a joint intention to be established in two steps (Fig. 11) (Nishida et al. 2006). The first step is called the synchronization phase. Assume an actor A wants to establish a joint intention with another actor B. First, A engages in rhythmic behavior, signaling the intention to establish a joint intention with B. When B recognizes this, B changes behavior so as to synchronize with the observed rhythm.
Fig. 11 Outline of entrainment-based interaction (Ogasawara 2005)

The second step is called the modulation phase. Once A observes that B is acting with the same rhythm, A may gradually change her/his rhythm so that B's behavior becomes more desirable to A. This causes B to modify her/his intention to converge toward A's behavior.
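
A minimal sketch of these two phases, modeling each actor's rhythm as a phase oscillator with Kuramoto-style coupling (our modeling assumption, not the cited implementation):

```python
# A sketch of entrainment-based interaction with two phase-coupled
# oscillators: B synchronizes to A's rhythm (synchronization phase), then
# A shifts its frequency and B follows (modulation phase). The oscillator
# model is our assumption, not the cited implementation.

import math

dt = 0.01
phase_a, phase_b = 0.0, math.pi      # start out of phase
freq_a, freq_b = 2.0, 2.4            # rad/s: A's and B's natural rhythms
coupling = 1.5                       # strength of B's adaptation to A

drift = []
for step in range(3000):             # simulate 30 seconds
    t = step * dt
    if t > 15.0:                     # modulation phase: A slows its rhythm
        freq_a = 1.5
    phase_a += freq_a * dt
    # B nudges its phase toward A's (Kuramoto-style coupling)
    phase_b += (freq_b + coupling * math.sin(phase_a - phase_b)) * dt
    if step >= 2900:                 # record the late phase difference
        drift.append(phase_a - phase_b)

# ~0: the phase difference has stopped drifting, i.e., B locked onto A's
# rhythm and followed it through the modulation.
print(round(max(drift) - min(drift), 4))
```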

At the upper, more deliberate level, we may give robots the ability to behave according to the surface discourse of the conversation in order to capture or present information content, rather than exchanging meaning based on deep understanding. In other words, it appears feasible to aim at building robots that mimic conversational behavior at least on the surface and act quickly enough to meet the temporal requirements of nonverbal communication. For example, our robots will move their eyes and gaze at an object when the partner is recognized as paying attention to it, thereby creating joint attention, which is considered very important for establishing communication. The media equation theory (Reeves and Nass 1996) suggests that such superficial similarities might allow people to coordinate behavior. In addition, it is reasonable to expect that a robot will be able to infer the role an object plays in the conversation, enabling it to attach a proper discourse label to the record of that object.

Entrainment-based interaction and joint attention using defeasible interaction patterns have been successfully used to build listener and presenter robots, prototyping the idea of robots as embodied knowledge media. The pair of robots serves as a means of communicating embodied knowledge (Fig. 12) (Nishida et al. 2006).
Fig. 12 Listener and presenter robots as embodied knowledge media (Nishida 2006)

The listener robot interacts with an instructor to acquire knowledge by videotaping important scenes of her/his activities (e.g., assembling or disassembling a machine). The presenter robot, equipped with a small display, then interacts with a novice and shows the relevant video clip in situations where this knowledge is needed during her/his work (e.g., trying to assemble or disassemble a machine).

The listener robot was designed to undertake appropriate nonverbal interactive behavior and produce a series of video clips as records of the conversation (Ogasawara et al. 2005). Human–human interactions in which one person, as instructor, explained an assembly task to another person, as listener, were videotaped and analyzed in detail using a video annotation tool. The listener robot built on this analysis is able to achieve joint attention when the instructor points to some portion of the subject and starts to talk about it, and it looks at the instructor's face when s/he starts to talk to the robot (Fig. 13).
Fig. 13 Listener robot interacting with instructor (Ogasawara 2005). a Attention by gaze and head orientation; b joint attention by instructor pointing
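
The two behaviors just described can be rendered as a trivial event-driven policy, sketched below; the event names and structure are hypothetical illustrations rather than the actual implementation.

```python
# A sketch of the listener robot's surface-level interaction rules as a
# small event-driven policy: pointing triggers joint attention; talking to
# the robot triggers gaze at the speaker's face. Event names are hypothetical.

def listener_policy(event: dict) -> str:
    if event["type"] == "instructor_points":
        return f"gaze_at({event['target']})"   # joint attention on the object
    if event["type"] == "instructor_talks_to_robot":
        return "gaze_at(instructor_face)"
    return "idle"                              # defeasible default behavior

events = [
    {"type": "instructor_points", "target": "gear_box"},
    {"type": "instructor_talks_to_robot"},
    {"type": "silence"},
]
for e in events:
    print(listener_policy(e))
```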

The presenter robot was prototyped using similar techniques. A small display is attached to its left hand, as shown in Fig. 14a. The presenter robot senses the user's status, and when it detects a situation where the user appears to need information, it adjusts its position and shows the user a relevant video clip, as shown in Fig. 14b (Ohya et al. 2006).
Fig. 14 The presenter robot (Ohya 2006). a The presenter robot; b the presenter robot shows the user a video clip

Rather than programming the alignment between the human and the robot, we can introduce a mutual adaptation learning scheme. The idea is to make use of humans' powerful learning capability. When a robot tries to adapt to the human and learn her/his behaviors, the human may simultaneously improve her/his behavior patterns to adapt to the robot. If the robot's learning schema is easily inferred by the human, s/he will be motivated to significantly change her/his behavior to facilitate the establishment of a common protocol. In order to develop a mutually adaptive human–robot interface, an experimental environment was built to observe how mutual adaptation takes place in human–human communication. Interesting findings were obtained. For example, an actor changed his speed and/or steps in order to keep pace with the speed and/or width of the instructor's gestures, and vice versa (alignment-based action); and the instructor used symbol-like instructions, including gestures for "stop," "a bit," and "keep going," when interacting with the actor (symbol-emergent learning) (Xu et al. 2006).

6 Discussion

The ultimate goal of the artifact-as-a-half-mirror approach is to realize full-fledged social artifacts with a full set of perception and motor capabilities. The goal is certainly extremely challenging, so we need to draw a roadmap from the current state of the art, one that permits researchers to make steady progress.

The difficulty of the approach depends on how much the artifact relies on the human's cognitive ability. The weakest approach is to rely fully on human cognition: humans perform the perception and execute the motor commands, while social artifacts merely make suggestions. Although this weakest approach is very limited, in the sense that the user's misconceptions or intentions might hinder the effect of social artifacts, it is already implemented in various kinds of "computer butlers."

A more advanced approach is to build social artifacts with a limited capability of cognition. As discussed in previous sections, we are witnessing new attempts, at least at the laboratory level. The key to success is to explicitly share the concept of social artifacts and build a consensus in society. Field trials are an important step, for they will allow people to think concretely about social intelligence through experience. Preliminary attempts such as those introduced in the previous section should be tested in a wide and open social context.

Even though the inappropriate use problem may not be completely solved, the BMI approach is considered useful and may become popular in coming years. Although its potential dangers will be recognized, society will cope with the problems by sharing experiences and caveats.

In the forthcoming history of social artifacts, varying degrees of opaqueness of the half mirror will be evaluated. Completely opaque (0% transparent) autonomous agents will be used in environments that can be considered completely controllable. Completely transparent (0% opaque) BMIs will mostly be used to compensate for an individual's basic functions for living, such as eyesight, hearing, walking, cognition, and so on. In between, there are infinitely many possibilities to explore. Through trial and error, society will gradually learn the appropriate degree of opaqueness/transparency of the "half mirror," depending on the nature of the inter-human relationship. It is quite likely that computer-mediated communication by social agents will mostly be applied to third parties and much less to intimate persons.

In order to make progress through actual use, we need to establish a methodology for assessing social artifacts. A conventional questionnaire-based approach is not enough in this respect, for it does not give real-time reactions. Instead, we need to invent technology that enables practitioners to gain feedback on the spot. Assessing mental states by combining physiological and multimedia measurements appears promising in this respect, provided that we can establish a well-defined theory of the relationship between physiological measures and mental states.

7 Conclusion

In this article, we have pointed out that the responsibility flaw problem and the inappropriate use problem need to be solved in order for human society to benefit from complex artifacts. Since we consider that these problems need to be discussed in a social context, we have extended the scope to the role of artifacts in human society. We have proposed the artifact-as-a-half-mirror metaphor as a design principle for complex artifacts and discussed its feasibility together with our preliminary work. We have also introduced the artifact-as-a-full-mirror and artifact-as-a-transparent-glass metaphors to characterize autonomous artifacts and BMIs. Finally, we have sketched a roadmap toward the realization of our proposal.

Footnotes
1

It should be noted that we are not claiming that computer users should be happy with current operating systems. On the contrary, we claim that operating systems need to be continuously improved; but users also need to understand that computers are quite complex artifacts and that interface design is an essentially hard problem.

Copyright information

© Springer-Verlag London Limited 2007