1 Introduction

Multi-agent systems are one of the most promising technologies to emerge in recent decades, with applications in several fields such as distributed systems, economics, and social science. Many researchers have drawn a vision in which many tasks humans perform now are delegated to intelligent, autonomous, and proactive programs, generally called software agents [1]. A multi-agent system (MAS) is a system composed of multiple interacting intelligent agents. MAS can be used to solve problems that are difficult or impossible for an individual agent or a monolithic system. Intelligence may include methodic, functional, procedural, or algorithmic search, discovery, and processing approaches. In a MAS, intelligent agents need to interact with one another to achieve their individual objectives or to manage the dependencies that follow from being situated in a common environment [2]. These interactions can vary from simple information interchanges, to requests for particular actions to be performed, to cooperation (working together to achieve a common objective) and coordination (arranging for related activities to be performed coherently).

AAMAS (Autonomous Agents and Multi-Agent Systems) is one of the top conferences for research related to MAS. In addition, many research achievements related to MAS are presented at AAAI (the AAAI Conference on Artificial Intelligence) and IJCAI (the International Joint Conference on Artificial Intelligence), which are the top conferences in artificial intelligence. The Journal of Artificial Intelligence Research (JAIR) and Artificial Intelligence (AIJ) are leading journals covering a wide range of AI topics, including multi-agent systems. Autonomous Agents and Multi-Agent Systems (JAAMAS) is a journal associated with IFAAMAS that publishes research on autonomous agents and MAS.

One of the most relevant interactions in MAS is negotiation, the process by which a group of agents comes to a mutually acceptable agreement on some matter. Negotiation examines whether and how agents (both artificial and human) should cooperate, and it is required when the agents are self-interested yet need to cooperate. In other words, negotiation is a significant method for the competitive (or partially cooperative) allocation of goods, resources, or tasks among agents. Negotiation is also an essential aspect of daily life and an important research topic. Negotiations can be simple and ordinary, as in haggling over a price in the market or deciding on a meeting time, or they can concern international disputes and nuclear disarmament [3], issues that affect the well-being of millions. While the ability to negotiate successfully is critical for much social interaction, negotiation remains a challenging task. Even something that might be perceived as a “simple” case of single-issue bilateral bargaining over a price in the marketplace can demonstrate the difficulties that arise during the negotiation process.

Negotiation is a subject that has been extensively discussed in the game-theoretic, economic, and management research fields for decades (e.g., [4,5,6,7,8,9,10,11]). Although there are more recent activities in this field [12,13,14,15], the key contributions have been made in the field of automated negotiation systems consisting of intelligent software agents [16,17,18]. There has been extensive work in the area of automated negotiation, that is, where agents negotiate with other agents, in such contexts as e-commerce [19,20,21,22], large-scale argumentation [23, 24], collaborative design [25, 26], and service-oriented computing [27, 28]. A multi-agent system model is necessary for cooperative work between agents, and automated negotiations between agents are required when they have conflicts. In addition, most researchers in multi-agent systems regard automated negotiation as one of the most critical topics for the theoretical analysis and practical application of agent-based systems. Thus, success in developing automated negotiation capabilities has great advantages and far-reaching implications.

1.1 Main Flow of Automated Negotiations

The main flow of accomplishing automated negotiations consists of the negotiation environment, preference elicitation and negotiation strategy, and the negotiation protocol.

Negotiation Environment::

The negotiation environment defines the specific settings of the negotiation, and these settings determine which design considerations apply. The environment determines several parameters, such as the number of negotiators taking part in the negotiation, the time frame, and the issues on which the negotiation is being conducted. The number of parties participating in the negotiation can be two (bilateral negotiations) or more (multilateral negotiations). The negotiation environment also consists of the objectives and the issues to be resolved. Various types of issues can be involved, including discrete enumerated value sets, integer-value sets, and real-value sets. Negotiations involving multi-attribute issues allow complex decisions to be made while considering multiple factors [29].

Preference Elicitation and Negotiation Strategy::

Preference elicitation techniques attempt to collect as much information on users’ preferences as possible in order to find efficient solutions [30, 31]. However, because users’ preferences are usually incomplete initially and tend to change in different contexts, and because of users’ cognitive and emotional limitations in information processing, preference elicitation methods must also be able to avoid preference reversals, discover hidden preferences, and assist users in making tradeoffs when confronted with competing objectives. In addition, negotiation agents should have an effective negotiation strategy to achieve beneficial agreements.

Negotiation Protocol::

An automated negotiation protocol defines the formal interaction between the decision makers (agents) in the negotiation environment, including whether the negotiation is done only once (one-shot) or repeatedly, and how the exchange of offers between the agents is conducted. In addition, according to Jennings et al. [32], a negotiation protocol is a set of rules that govern the interaction and cover the permissible types of participants (e.g., the negotiators and any relevant third parties), the negotiation states (e.g., accepting bids, negotiation closed), the events that cause negotiation states to change (e.g., no more bidders, bid accepted), and the valid actions of the participants in particular conditions (e.g., which messages can be sent by whom, to whom, and at what stage). The agents in a negotiation can be non-cooperative or cooperative. Generally, cooperative agents try to maximize their social welfare (see Zhang [33]), while non-cooperative agents try to maximize their own utilities regardless of the other side’s utilities. These issues have been widely studied in different research areas, such as game theory [8, 10], distributed artificial intelligence [34,35,36], and economics [9].

Fig. 1 Main flow of accomplishing automated negotiations, illustrated with the design of a car model. The automated negotiation is composed of the negotiation environment (issues, agents, and objectives), preference elicitation and negotiation strategy, the negotiation protocol, and consensus building among the members

Figure 1 shows the main flow of accomplishing automated negotiations. The figure shows an example of designing a simple car among a group of car designers:

  • The negotiation environment, including negotiation issues, agents’ actions, and objectives, is based on real-life negotiation.

  • The preference of the users should be collected using some preference elicitation techniques. In addition, the negotiation agent has a strategy.

  • Agents negotiate the car designs automatically based on the negotiation protocol.

One of the most critical parts of automated negotiation is the negotiation protocol, which has been extensively discussed in the game-theoretic, economic, and management science literature for decades. Many problems remain unsolved in negotiation protocols, and these problems constitute a leading research theme in the multi-agent systems field. This chapter focuses on the automated negotiation protocols used to accomplish automated negotiations. Finally, the agents build a consensus for designing the car.

1.2 Complex Multi-issue Negotiation with Highly Nonlinear Utility Functions

This chapter focuses on automated negotiation protocols between cooperative agents. While there has been a lot of previous work in this area [37,38,39], these efforts have, to date, dealt almost exclusively with simple negotiations involving multiple independent issues and, therefore, linear (single-optimum) utility functions. An example of such representations widely used in the negotiation literature is the linear-additive utility function [35], which allows the modeling of independent issues.

Many real-world negotiation problems, however, involve multiple interdependent issues. Such interdependencies complicate the agents’ utility functions, making them nonlinear, with multiple optima. For example, interdependence between attributes in agent preferences can be described using different categories of functions, such as k-additive utility functions [40, 41], bidding languages [42], or constraints [43,44,45].

In the context of a multi-attribute negotiation, the complexity depends on the number of issues, the number of agents, the level of interdependency between the preferences on the issues, and the domain of the issues. The method used to describe the agents’ utility spaces is also a fundamental measure of the complexity of the negotiation scenario.

Some studies have focused on negotiation with nonlinear utility functions. Klein et al. [36] present the first negotiation protocols designed specifically for complex preference spaces. They focus on nonlinear utility functions and describe a simulated annealing-based approach, appropriate for negotiating complex contracts, that achieves near-optimal social welfare for negotiations with binary issue dependencies. The important points in this work are the positive results regarding the use of simulated annealing to regulate agent decision-making and the use of agent expressiveness to allow the mediator to improve its proposals. In addition, most existing negotiation protocols, such as hill-climbing-based methods that are well suited for linear utility functions, work poorly when applied to nonlinear problems. However, their approach was not applied to multilateral negotiations with higher-order dependencies. Higher-order dependencies and continuous-valued issues, which are common in many real-world contexts, generate more challenging utility landscapes that are not considered in their work.

One of the most relevant approaches focusing on complex utility spaces is that of Ito et al. [43, 46]. They proposed the original constraint-based utility functions, which yield highly nonlinear and bumpy utility spaces; scalable and efficient negotiation protocols are therefore required when the complexity of the negotiation environment is high. They also proposed a bidding-based protocol. In this protocol, agents generate bids by sampling their own utility functions to find local optima and then use constraint-based bids to compactly describe regions that have large utility values for that agent. A mediator then finds a combination of bids that maximizes social welfare. This protocol had a strong impact on the automated negotiation field because many existing works did not consider the highly nonlinear utilities of agents.

This chapter focuses on constraint-based nonlinear utility functions. There are many multi-issue negotiation models that do not use constraints; however, there are several reasons in favor of using constraints in negotiation models. First, they enable efficient methods of preference elicitation. Moreover, constraints allow the expression of dependencies between the possible values of the different attributes. Finally, the use of constraints for expressing offers makes it possible to limit the region of the solution space that must be explored in a given negotiation step. Reducing the area of the utility space under exploration according to the constraints exchanged by agents is a widely used technique in automated negotiation [47, 48], since it makes the search for agreements more efficient than positional bargaining, especially in complex negotiation scenarios.

1.3 Main Contributions of This Chapter

In complex multi-issue automated negotiation protocols, the existing studies leave some issues unsolved. This chapter addresses the following aims.

Aim 1: Scalable and Efficient Negotiation Protocols

A significant problem is scalability in the number of agents and issues. In this negotiation setting, the utility space becomes highly nonlinear, making it very difficult to find the optimal agreement point. For example, the bidding-based negotiation protocol does not scale well in the number of agents because the mediator needs to find the optimum combination of the bids submitted by the agents, and the computational complexity of finding such solutions is too large.

An issue-grouping-based negotiation protocol is proposed that decomposes the contract space based on issue interdependencies. In this protocol, a mediator tries to reorganize a highly complex utility space into several tractable utility subspaces in order to reduce the computational cost. Issue groupings are generated by the mediator based on an examination of the issue interdependencies. First, a measure of the degree of interdependency between issues is defined. Next, a weighted undirected interdependency graph is generated based on this information. By analyzing the interdependency graph, the mediator can identify issue subgroups. Note that while others have discussed issue interdependencies in utility theory [49,50,51], this previous work does not identify optimal issue groups. Finally, the experimental results demonstrate that the protocol has higher scalability than previous efforts and show the impact of issue grouping on the optimality of the negotiation outcomes.
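As an illustration of the grouping step described above, the following minimal Java sketch builds a thresholded, weighted undirected interdependency graph from a symmetric interdependency matrix and extracts issue groups as its connected components. The matrix, the cutoff value, and the use of connected components are illustrative assumptions; the chapter's own interdependency measure and grouping analysis are not reproduced here.

```java
import java.util.*;

/** Minimal sketch of issue grouping from an interdependency matrix (illustrative only).
 *  dep[i][j] is an assumed symmetric measure of how strongly issues i and j interdepend;
 *  edges weaker than a cutoff are dropped and connected components become issue groups. */
public class IssueGrouping {

    public static List<List<Integer>> group(double[][] dep, double cutoff) {
        int m = dep.length;
        boolean[] visited = new boolean[m];
        List<List<Integer>> groups = new ArrayList<>();
        for (int start = 0; start < m; start++) {
            if (visited[start]) continue;
            // Breadth-first search over the thresholded interdependency graph.
            List<Integer> component = new ArrayList<>();
            Deque<Integer> queue = new ArrayDeque<>(List.of(start));
            visited[start] = true;
            while (!queue.isEmpty()) {
                int i = queue.poll();
                component.add(i);
                for (int j = 0; j < m; j++) {
                    if (!visited[j] && dep[i][j] >= cutoff) {
                        visited[j] = true;
                        queue.add(j);
                    }
                }
            }
            groups.add(component);
        }
        return groups;
    }

    public static void main(String[] args) {
        // Four issues; issues 0-1 and 2-3 are strongly interdependent in this toy example.
        double[][] dep = {
            {0.0, 0.9, 0.1, 0.0},
            {0.9, 0.0, 0.0, 0.2},
            {0.1, 0.0, 0.0, 0.8},
            {0.0, 0.2, 0.8, 0.0}
        };
        System.out.println(group(dep, 0.5)); // [[0, 1], [2, 3]]
    }
}
```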

Aim 2: Negotiation Protocols Concerning Agents’ Private Information

A negotiation protocol should take agents’ private information (privacy) into account. Such private information should be protected as much as possible during a negotiation because users generally want to keep their privacy in real life. For example, suppose several companies collaboratively design and develop a new car model. If one company reveals more private information than the other companies, the other companies will learn more of that company’s important information, such as its utility information. As a result, the company will be at a disadvantage in subsequent negotiations, and the mediator might leak the agent’s utility information. Therefore, this chapter aims to accomplish negotiation protocols without revealing the agents’ private information to others.

A threshold-adjusting mechanism is proposed. First, agents make bids that produce more utility than a common threshold value, based on the bidding-based protocol proposed in [43]. Then, the mediator asks each agent to reduce its threshold based on how much of its private information each agent has opened to the others. Each agent then makes bids again above its new threshold. This process continues iteratively until an agreement is reached or there is no solution. The experimental results show that the method substantially outperforms the existing negotiation methods in terms of how much of their own utility spaces agents have to reveal.

In addition, secure protocols are proposed that conceal all private information: the Distributed Mediator Protocol (DMP) and the Take it or Leave it (TOL) Protocol. They reach agreements while concealing agent utility values. When searching their search spaces, they employ Secure Gathering, with which they can calculate the sum of the per-agent utility values while concealing the individual values. Furthermore, the Distributed Mediator Protocol (DMP) improves scalability with respect to the complexity of the utility space by dividing the search space among the mediators. In the Take it or Leave it (TOL) Protocol, the mediator searches using the hill-climbing search algorithm; the evaluation value of a move from the current state to a neighboring state is determined by the agents’ “take it” or “leave it” responses. The Hybrid Secure Protocol (HSP), which combines DMP with TOL, is also proposed. In HSP, TOL is performed first to improve the initial state of the DMP step. Next, DMP is performed to find the local optima in the neighborhood. HSP can also reach an agreement while concealing per-agent utility information. Additionally, HSP can reduce the memory required for making an agreement, which is a major issue in DMP. Moreover, the experiments show that HSP improves communication cost (memory usage) compared with DMP.

Aim 3: Addressing Weaknesses of the Nash Bargaining Solution in Nonlinear Negotiation

The Nash bargaining solution, which maximizes the product of the agent utilities, is a well-known metric that provably identifies the optimal (fair and social-welfare-maximizing) agreement for negotiations in linear domains [8, 52, 53]. In nonlinear domains, however, the Pareto frontier will often not satisfy the convexity assumption required to make the Nash solution optimal and unique [8, 52, 54]. There can, in other words, be multiple agreements in nonlinear domains that satisfy the Nash Bargaining Solution, and many or all of these will have sub-optimal fairness and/or social welfare.

A secure mediated protocol (SFMP) is proposed that addresses this challenge. The protocol consists of two main steps. In the first step, SFMP uses a nonlinear optimizer, integrated with a secure information-sharing technique called Secure Gathering [55], to find the Pareto frontier without causing agents to reveal private utility information. In the second step, an agreement is selected from the set of Pareto-optimal contracts using approximated fairness, which measures how equally the total utility is divided across the negotiating agents (e.g., [56]). The experiments demonstrate that SFMP produces better scalability and social welfare than previous nonlinear negotiation protocols.

2 Multi-issue Negotiation with Highly Nonlinear Utility Functions

This section describes a model of nonlinear multi-issue negotiation and a bidding-based negotiation protocol (basic bidding) designed as a multi-issue negotiation protocol suitable for agents with highly nonlinear utility functions. Constraint-based utility functions are realistic because they allow us to produce bumpy and highly nonlinear utility functions. In the basic bidding protocol, agents generate bids by sampling their own utility functions to find local optima and then use constraint-based bids to compactly describe regions that have large utility values for that agent. These techniques make bid generation computationally tractable even in large utility spaces. A mediator then finds a combination of bids that maximizes social welfare.

2.1 Basic Model of Multi-issue Negotiation

 

Definition 1: Agents and Mediator.:

N agents (\(a_1,\ldots ,a_N\)) want to reach an agreement with a mediator who manages the negotiation from a man-in-the-middle position.

Definition 2: Issues under negotiation.:

There are M issues (\(i_1,\ldots ,i_M\)) to be negotiated.

Definition 3: Contract Space.:

The negotiation solution space is defined by the values that the different issues may take. To simplify, we assume that each issue takes a value drawn from the domain of integers [0, X]:

$$\begin{aligned} D = [0,X]^M \end{aligned}$$
Definition 4: Contract or potential solution.:
$$\begin{aligned}\vec{s}=(s_1,\ldots ,s_M)\end{aligned}$$

A contract is represented by a vector of issue values. Each issue value \(s_j\) is drawn from the domain of integers \([0,X]~(1\le j \le M)\), i.e., \(s_j\in \{0, 1, \ldots ,X\}\).

2.1.1 Constraint-based Complex Utility Model

Some of the protocols and experiments in this chapter rely on the constraint-based utility model; in other words, an agent’s utility function is described in terms of constraints. This produces a bumpy nonlinear utility function and is a crucial departure from previous efforts on multi-issue negotiation, where the contract utility is calculated as the weighted sum of the utilities for individual issues.

Definition 5: Constraint.:
$$\begin{aligned}c_k \in C~(1\le k \le l).\end{aligned}$$

There are l constraints in an agent’s utility space. Each constraint represents a region in the contract space with one or more dimensions and an associated utility value.

Definition 5-1: Constraint Value.:

Constraint \(c_k\) has value \(w_{a}(c_k,\vec{s})\) if and only if it is satisfied by contract \(\vec{s}\) for the agent a.

Definition 5-2: Constraint Region.:

Function \(\delta _a(c_k,i_j)\) is the region of issue \(i_j\) in constraint \(c_k\).

\(\delta _a(c_k,i_j)\) is \(\emptyset \) if \(c_k\) has no region associated with \(i_j\).

Definition 5-3: The Number of Terms in the Constraint.:

Function \(\epsilon _a(c_k)\) is the number of terms in \(c_k\).

Definition 6: Utility function.:
$$\begin{aligned}u_{a}(\vec{s}) = \sum _{c_k\in C, \vec{s}\in x(c_{k})} w_{a}(c_{k},\vec{s}),\end{aligned}$$

where \(x(c_{k})\) is the set of possible contracts (solutions) that satisfy \(c_{k}\).

An agent’s utility for contract \(\vec{s}\) is defined as the sum of the utility for all the constraints it satisfies.

Definition 7: The relationship between agents and constraints:

Every agent has its own, typically unique, set of constraints.

These definitions produce a “bumpy” nonlinear utility function with high points where many constraints are satisfied and lower regions where few or no constraints are satisfied. This represents a crucial departure from previous efforts on multi-issue negotiation, in which the utility is calculated as the weighted sum of the utilities for individual issues, producing utility functions shaped like flat hyperplanes with a single optimum.
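The following minimal Java sketch, assuming integer issue values and per-issue ranges, illustrates Definitions 5 and 6: a constraint stores a value and a region for each constrained issue, and an agent's utility for a contract is the sum of the values of the satisfied constraints. The Constraint record and its fields are illustrative names rather than the chapter's implementation; the example constraint is the binary constraint with value 55 discussed with Fig. 2 below.

```java
import java.util.*;

/** Minimal sketch of the constraint-based utility model (Definitions 5-6).
 *  A constraint holds a utility value and, for some issues, an allowed integer range;
 *  an agent's utility for a contract is the sum of the values of all satisfied constraints. */
public class ConstraintUtility {

    /** One constraint: value plus a map from issue index to [min, max] range. */
    record Constraint(double value, Map<Integer, int[]> regions) {
        boolean satisfiedBy(int[] contract) {
            for (Map.Entry<Integer, int[]> e : regions.entrySet()) {
                int v = contract[e.getKey()];
                if (v < e.getValue()[0] || v > e.getValue()[1]) return false;
            }
            return true;
        }
    }

    /** u_a(s): sum of the values of all constraints satisfied by the contract. */
    static double utility(List<Constraint> constraints, int[] contract) {
        return constraints.stream()
                .filter(c -> c.satisfiedBy(contract))
                .mapToDouble(Constraint::value)
                .sum();
    }

    public static void main(String[] args) {
        // The binary constraint from Fig. 2: value 55 if issue 0 is in [3,7] and issue 1 is in [4,6].
        Constraint c = new Constraint(55, Map.of(0, new int[]{3, 7}, 1, new int[]{4, 6}));
        List<Constraint> cs = List.of(c);
        System.out.println(utility(cs, new int[]{5, 5})); // 55.0 (constraint satisfied)
        System.out.println(utility(cs, new int[]{1, 5})); // 0.0  (constraint not satisfied)
    }
}
```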

Fig. 2 An example of a utility space generated via a collection of binary constraints involving issues 1 and 2; the number of issues is two. One of the constraints has a value of 55, which holds if the value for issue 1 is in [3, 7] and the value for issue 2 is in [4, 6]. The utility space is highly nonlinear, with many hills and valleys

Figure 2 shows an example of a utility space generated via a collection of binary constraints involving Issues 1 and 2; the number of issues is two. For example, one of the constraints has a value of 55, which holds if the value for Issue 1 is in [3, 7] and the value for Issue 2 is in [4, 6]. The utility function is highly nonlinear, with many hills and valleys. Many real-world utility functions are assumed to be even more complex than this, involving more than two issues and higher-order (e.g., ternary and quaternary) constraints. This constraint-based utility function representation allows us to capture the issue interdependencies common in real-world negotiations. However, this representation can also capture linear utility functions as a special case (they can be captured as a series of unary constraints). A negotiation protocol for complex contracts can, therefore, also handle linear contract negotiations.

As is common in negotiation contexts, agents do not share their utility functions, in order to preserve a competitive edge. It will generally be the case that agents do not fully know their desirable contracts in advance because each utility space is simply huge. For example, 10 issues with 10 possible values per issue produce a space of \(10^{10}\) (10 billion) possible contracts, too many to evaluate exhaustively. Agents must thus operate in a highly uncertain environment.

2.1.2 Objective Function

The objective function for the negotiation protocol can be described as follows:

$$\begin{aligned} \arg \max _{\vec{s}} \sum _{a\in N} u_{a}(\vec{s}). \end{aligned}$$

The negotiation protocol tries to find contracts that maximize social welfare, i.e., the sum of the utilities of all agents. Such contracts, by definition, will also be Pareto-optimal. It is theoretically possible to gather all the individual agents’ utility functions in one central place and then find all optimal contracts using well-known nonlinear optimization techniques such as simulated annealing (SA) or genetic algorithms (GA). However, such centralized methods cannot be applied for negotiation purposes because, as is common in negotiation contexts, agents prefer not to share their utility functions in order to preserve a competitive edge.

2.2 Basic Bidding Protocol

Agents reach an agreement based on the following steps, which constitute the basic bidding protocol. This protocol is a notable result for complex automated negotiation with highly nonlinear utilities. The automated negotiation protocols proposed in this chapter are compared against this basic bidding protocol as the baseline for evaluation.

The basic bidding protocol consists of the following four steps:  

Step 1: Sampling.:

Each agent samples its utility space to find high-utility contract regions. A fixed number of samples is taken at random points drawn from a uniform distribution. Note that if the number of samples is low, the agent may miss some high-utility regions in its contract space and potentially end up with a sub-optimal contract.

Step 2: Adjusting.:

There is no guarantee that a given sample will lie on a locally optimal contract. Each agent, therefore, uses a nonlinear optimizer based on SA to try to find the local optimum in its neighborhood.

Step 3: Bidding.:

For each contract \(\vec{s}\) found by adjusted sampling, an agent evaluates its utility by summing the values of the satisfied constraints. If that utility is larger than the reservation value \(\delta \), the agent defines a bid that covers all the contracts in the region with that utility value. Steps 1, 2, and 3 are shown in Algorithm 1.

Step 4: Deal identification.:

The mediator identifies the final contract by finding all the combinations of bids, one from each agent, that are mutually consistent, i.e., that specify overlapping contract regions. For example, if a bid has a region such as [0, 2] for issue 1 and [3, 5] for issue 2, the bid is satisfied by the contract point [1, 4], which means issue 1 takes the value 1 and issue 2 takes the value 4. If a combination of bids (i.e., a solution) is consistent, its bids have overlapping regions. For instance, a bid with regions \(({\text {Issue 1, Issue 2}}) = ([0,2],[3,5])\) and another bid with ([0, 1], [2, 4]) are consistent. If more than one such overlap exists, the mediator selects the one with the highest summed bid value (and thus, assuming truthful bidding, the highest social welfare). Each bidder pays the value of its winning bid to the mediator. The mediator employs a breadth-first search with branch cutting to find the social-welfare-maximizing overlaps. Step 4 is shown in Algorithm 2.

Algorithm 1 Bid_generation_with_SA(Th, SN, V, T, B): bid generation with simulated annealing. The procedure takes the number of samples, the temperature for simulated annealing, and the set of values for each issue, and generates the agent’s bids
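A hedged Java sketch of Steps 1–3 follows. It samples random contracts, adjusts each with a plain simulated annealer, and keeps those whose utility exceeds the threshold. Deriving the constraint-based bid regions from the adjusted samples is omitted, and the names, parameters, and SA schedule are illustrative rather than the exact procedure of Algorithm 1.

```java
import java.util.*;
import java.util.function.ToDoubleFunction;

/** Hedged sketch of Steps 1-3 (sampling, SA adjustment, bidding); the real Algorithm 1
 *  additionally derives constraint-based bid regions, which is omitted here for brevity. */
public class BidGeneration {

    record Bid(int[] contract, double utility) {}

    static final Random RNG = new Random(0);

    /** Take random samples, adjust each with simulated annealing, keep those above the threshold. */
    static List<Bid> generateBids(ToDoubleFunction<int[]> utility, int issues, int maxValue,
                                  int samples, double threshold, double initTemp, int iterations) {
        List<Bid> bids = new ArrayList<>();
        for (int s = 0; s < samples; s++) {
            int[] point = RNG.ints(issues, 0, maxValue + 1).toArray();      // Step 1: sampling
            point = anneal(utility, point, maxValue, initTemp, iterations); // Step 2: adjusting
            double u = utility.applyAsDouble(point);
            if (u > threshold) bids.add(new Bid(point, u));                 // Step 3: bidding
        }
        return bids;
    }

    /** Plain simulated annealing over single-issue moves. */
    static int[] anneal(ToDoubleFunction<int[]> utility, int[] start, int maxValue,
                        double initTemp, int iterations) {
        int[] current = start.clone();
        double currentU = utility.applyAsDouble(current);
        for (int t = 0; t < iterations; t++) {
            double temp = initTemp * (1.0 - (double) t / iterations);
            int[] next = current.clone();
            next[RNG.nextInt(next.length)] = RNG.nextInt(maxValue + 1);     // random single-issue move
            double nextU = utility.applyAsDouble(next);
            // Accept improvements always, worse moves with Boltzmann probability.
            if (nextU >= currentU || RNG.nextDouble() < Math.exp((nextU - currentU) / Math.max(temp, 1e-9))) {
                current = next;
                currentU = nextU;
            }
        }
        return current;
    }

    public static void main(String[] args) {
        // Toy utility: prefers contracts whose issues are all close to 5.
        ToDoubleFunction<int[]> u = c -> Arrays.stream(c).map(v -> 10 - Math.abs(v - 5)).sum();
        System.out.println(generateBids(u, 3, 9, 20, 25.0, 30.0, 30).size());
    }
}
```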

Algorithm 2 Search_solution(B): deal identification. The procedure takes the set of agents and each agent’s bid set, searches over combinations of bids, and returns the solution with the maximum value
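The following sketch illustrates Step 4 in Java: it searches over one bid per agent, intersecting bid regions issue by issue, pruning any partial combination with no overlap, and returning the largest summed bid value. It uses a simple depth-first search instead of the breadth-first search with branch cutting described above, and the Bid record is an illustrative stand-in.

```java
import java.util.*;

/** Hedged sketch of Step 4 (deal identification): depth-first search over one bid per agent,
 *  pruning inconsistent partial combinations; the original uses breadth-first search with
 *  branch cutting, so this is an illustrative simplification. */
public class DealIdentification {

    /** A bid: per-issue [min, max] regions plus the bid's utility value. */
    record Bid(int[][] regions, double value) {}

    static double bestWelfare = Double.NEGATIVE_INFINITY;

    static double search(List<List<Bid>> bidsPerAgent) {
        bestWelfare = Double.NEGATIVE_INFINITY;
        explore(bidsPerAgent, 0, null, 0.0);
        return bestWelfare;
    }

    static void explore(List<List<Bid>> bidsPerAgent, int agent, int[][] overlap, double welfare) {
        if (agent == bidsPerAgent.size()) {              // one bid chosen per agent
            bestWelfare = Math.max(bestWelfare, welfare);
            return;
        }
        for (Bid bid : bidsPerAgent.get(agent)) {
            int[][] next = intersect(overlap, bid.regions());
            if (next != null) explore(bidsPerAgent, agent + 1, next, welfare + bid.value()); // prune otherwise
        }
    }

    /** Intersect two region sets issue by issue; returns null if any issue has no overlap. */
    static int[][] intersect(int[][] a, int[][] b) {
        if (a == null) return b;
        int[][] out = new int[a.length][2];
        for (int i = 0; i < a.length; i++) {
            out[i][0] = Math.max(a[i][0], b[i][0]);
            out[i][1] = Math.min(a[i][1], b[i][1]);
            if (out[i][0] > out[i][1]) return null;      // no overlapping contract point on issue i
        }
        return out;
    }

    public static void main(String[] args) {
        // Example from the text: ([0,2],[3,5]) and ([0,1],[2,4]) are consistent.
        Bid b1 = new Bid(new int[][]{{0, 2}, {3, 5}}, 40);
        Bid b2 = new Bid(new int[][]{{0, 1}, {2, 4}}, 35);
        System.out.println(search(List.of(List.of(b1), List.of(b2)))); // 75.0
    }
}
```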

It is easy to show that, in theory, this approach can be guaranteed to find optimal contracts. If each agent exhaustively samples every contract in its utility space and has a reservation value of zero, it will generate bids representing its complete utility function. With all agents’ utility functions in hand, the mediator can use an exhaustive search over all bid combinations to find the social-welfare-maximizing negotiation outcome. However, this approach is practical only for very small contract spaces: the computational cost of generating bids and finding winning combinations grows rapidly as the size of the contract space increases. As a practical matter, the threshold is applied to limit the number of bids the agents can generate, so that deal identification can terminate in a reasonable amount of time.

3 Threshold Adjustment Mechanism for Keeping Agents’ Privacy

Existing automated negotiation protocols with nonlinear utility functions have not considered the agents’ private information. Such private information should be kept secret as much as possible during negotiations. A threshold adjustment mechanism is therefore proposed. First, agents make bids that produce more utility than a common threshold value according to the basic bidding protocol [43]. Then the mediator asks each agent to reduce its threshold depending on how much private information it has shared with the others. Finally, each agent again makes bids above its threshold. This process continues iteratively until an agreement is reached or no solution is found. The experimental results show that the proposed method substantially outperforms existing negotiation methods in terms of how much of their own utility spaces agents have to reveal.

3.1 Threshold Adjustment Mechanism

The main idea of the threshold adjustment mechanism is that if an agent reveals a larger area of its utility space, it is given a greater opportunity to persuade the other agents. On the other hand, when an agent reveals only a small area of its utility space, it should adjust its threshold to reveal a larger area if no agreement is reached. The revealed area is determined by how much of its utility space the agent exposes according to its threshold value. The threshold values are initially set to the same value, and each agent then changes its threshold value based on the size of its revealed area.

Fig. 3 Threshold adjustment process among three agents. The upper and lower panels show the thresholds and the revealed areas before and after threshold adjustment, respectively. Specifically, agent 3 revealed a small amount of its utility space in this case. Consequently, the increase in agent 3’s revealed utility space in this threshold adjustment is the largest among the three agents

Figure 3 shows an example of the threshold adjustment process among three agents. The upper and lower panels show the thresholds and the revealed areas before and after threshold adjustment, respectively. Specifically, Agent 3 revealed a small amount of its utility space in this case. Consequently, the increase in Agent 3’s revealed utility space in this threshold adjustment is the largest among the three agents. In the protocol, this process is repeated until an agreement is achieved or until no agreement can be found. The mediator or the mechanism designer defines the exact relation between the size of the revealed utility space and the amount of threshold decrease. The threshold adjustment protocol was the first to introduce such an external loop as an effective consensus mechanism. The details of the threshold adjustment mechanism are shown in Algorithm 3.

Algorithm 3 Threshold_adjustment(): the threshold adjustment mechanism. In each round, agents sample their utility spaces and generate bids above their thresholds based on the basic bidding protocol, and the mediator searches the bid combinations and returns the solution with the maximum value

The threshold adjustment process can reduce the computational cost of deal identification in Step 4 of the basic bidding protocol. The original Step 4 incurs an exponential computational cost because the computation is a combinatorial optimization. In the proposed threshold adjustment process, agents incrementally reveal their utility spaces as bids, so in each round the mediator computes only the new combinations of bids submitted in that round. This reduces the computational cost.
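The outer loop can be pictured with the following hedged Java sketch: thresholds start at a common value, agents repeatedly reveal bids above their own thresholds, and each threshold is lowered in proportion to how little that agent has revealed so far, echoing the 50 × (SumAr − Ar_i)/SumAr rule used later in the experiments. The Agent interface, the deal check, and the small decrease floor are assumptions for illustration only.

```java
import java.util.*;
import java.util.function.BooleanSupplier;

/** Hedged sketch of the threshold-adjustment outer loop (Algorithm 3). Each round, agents
 *  reveal bids above their own thresholds, the mediator searches only the newly revealed
 *  combinations, and thresholds are lowered in proportion to how little each agent has
 *  revealed so far. The Agent interface and the constants are illustrative, not the original code. */
public class ThresholdAdjustment {

    interface Agent {
        /** Number of newly revealed contract points (bids) with utility above the given threshold. */
        int revealAbove(double threshold);
    }

    static boolean negotiate(List<Agent> agents, double initialThreshold, double minThreshold,
                             BooleanSupplier dealFoundOnNewBids) {
        int n = agents.size();
        double[] threshold = new double[n];
        double[] revealedArea = new double[n];
        Arrays.fill(threshold, initialThreshold);               // thresholds start at a common value

        while (true) {
            for (int i = 0; i < n; i++) revealedArea[i] += agents.get(i).revealAbove(threshold[i]);
            if (dealFoundOnNewBids.getAsBoolean()) return true; // mediator found a consistent combination

            boolean anyLowered = false;
            double sumAr = Arrays.stream(revealedArea).sum();
            for (int i = 0; i < n; i++) {
                // Agents that have revealed less are asked to lower their thresholds more.
                double decrease = sumAr > 0 ? 50.0 * (sumAr - revealedArea[i]) / sumAr : 50.0;
                decrease = Math.max(decrease, 1.0);             // small floor so this sketch terminates
                if (threshold[i] > minThreshold) {
                    threshold[i] = Math.max(minThreshold, threshold[i] - decrease);
                    anyLowered = true;
                }
            }
            if (!anyLowered) return false;                      // no agreement and thresholds cannot drop
        }
    }

    public static void main(String[] args) {
        Random rng = new Random(1);
        // Three toy agents whose bid counts grow as their thresholds drop; the deal check is random.
        List<Agent> agents = List.of(
                th -> (int) ((1000 - th) / 10),
                th -> (int) ((1000 - th) / 20),
                th -> (int) ((1000 - th) / 40));
        System.out.println(negotiate(agents, 900, 200, () -> rng.nextInt(10) == 0));
    }
}
```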

3.2 Experiments

3.2.1 Experimental Setting

Several experiments were conducted to evaluate the effectiveness of the proposed approach. In each experiment, 100 negotiations between agents with randomly generated utility functions were run. The threshold adjustment protocol was compared with the existing protocol without threshold adjustment in terms of optimality and privacy.

In the experiments on optimality, an optimizer was applied to the sum of all the agents’ utility functions to find the contract with the highest possible social welfare. This value was used to assess the efficiency of the negotiation protocols (i.e., how closely they approached the optimal social welfare). Simulated annealing (SA) was used to find the optimum contract because an exhaustive search becomes intractable as the number of issues grows very large. The SA initial temperature was 50.0 and decreased linearly to 0 over the course of 2500 iterations. The initial contract for each SA run was randomly selected.

Regarding privacy, the measure is the size of the revealed area. Namely, if an agent reveals one point of its utility space grid, it loses one privacy unit; if it reveals 1000 points, it loses 1000 privacy units. The revealed rate is defined as (Revealed rate) = (Revealed area)/(Entire area of utility space).

The parameters for the experiments were as follows. The number of agents is \(N = 3\). The number of issues ranges from 2 to 10, and the domain for issue values is [0, 9]. The utility function of each agent has 10 unary constraints, 5 binary constraints, 5 ternary constraints, and so forth (a unary constraint relates to one issue, a binary constraint relates to two issues, and so on). The maximum value for a constraint is \(100 \times ({\text {Number~of~Issues}})\), so constraints involving many issues have larger weights on average. This seems reasonable for many domains; in meeting scheduling, for example, higher-order constraints affect more people than lower-order constraints and hence are more important. The maximum width for a constraint is 7. The following constraints, for example, would all be valid: issue 1 = [2, 6], issue 3 = [2, 9], and issue 7 = [1, 3].
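A hedged Java sketch of how such randomly generated utility functions could be constructed is shown below. The maximum constraint value of 100 × (Number of Issues) is interpreted here as scaling with the constraint's arity, which matches the remark that higher-order constraints have larger weights on average; this interpretation, the Constraint record, and the width handling are assumptions, since the original generator is not shown in the chapter.

```java
import java.util.*;

/** Hedged sketch of a random constraint-based utility setup: 10 unary constraints, 5 binary,
 *  5 ternary, and so on, each with a random value (here scaled by arity, an assumption) and a
 *  width of at most 7 on each constrained issue over the domain [0, domainMax]. */
public class RandomUtilitySetup {

    /** value plus a map from issue index to [min, max] region. */
    record Constraint(double value, Map<Integer, int[]> regions) {}

    static List<Constraint> randomConstraints(int issues, int domainMax, Random rng) {
        List<Constraint> cs = new ArrayList<>();
        for (int arity = 1; arity <= issues; arity++) {
            int count = (arity == 1) ? 10 : 5;                 // 10 unary, 5 of each higher arity
            for (int k = 0; k < count; k++) {
                Map<Integer, int[]> regions = new HashMap<>();
                List<Integer> chosen = new ArrayList<>();
                for (int i = 0; i < issues; i++) chosen.add(i);
                Collections.shuffle(chosen, rng);              // pick 'arity' distinct issues
                for (int i = 0; i < arity; i++) {
                    int lo = rng.nextInt(domainMax + 1);
                    int width = 1 + rng.nextInt(7);            // maximum width of 7
                    regions.put(chosen.get(i), new int[]{lo, Math.min(domainMax, lo + width - 1)});
                }
                cs.add(new Constraint(1 + rng.nextInt(100 * arity), regions));
            }
        }
        return cs;
    }

    public static void main(String[] args) {
        System.out.println(randomConstraints(4, 9, new Random(0)).size()); // 10 + 5 + 5 + 5 = 25
    }
}
```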

Three types of protocols were compared.

  1. (A)

    w/o Threshold Adjustment: The basic bidding protocol is applied [43]. This protocol exhaustively explores the entire utility space.

  2. (B)

w/o Threshold Adjustment, w/ Bid Limitation: The basic bidding protocol with bid limitation [43] is applied. This protocol also exhaustively explores the entire utility space; however, the number of each agent’s bids is limited to \(\root N \of {6400000}\).

  3. (C)

w/ Threshold Adjustment: The proposed threshold adjustment protocol is applied. This protocol has no explicit limitation on the number of bids. The mechanism determines the amount of threshold decrease as \(50\times ({\text {SumAr}} -{\text {Ar}}_i)/{\text {SumAr}}\), where \({\text {SumAr}}\) is the sum of all agents’ revealed areas and \({\text {Ar}}_i\) is \({\text {agent}}_i\)’s revealed area.

The number of samples taken during random sampling is \(({\text {Number~of~issues}}) \times 200\). The annealing schedule for sample adjustment uses an initial temperature of 30 and 30 iterations. Note that it is crucial that the annealer does not run very long or become very hot, because then each sample would tend to find the global optimum instead of the peak of the local optimum nearest the sampling point. The threshold used to select the bids to be made begins at 900 and decreases to 200 in the threshold adjustment mechanism; the protocol without the threshold adjustment process sets the threshold to 200. The threshold is used to eliminate contract points that have low utility. It was practical to run the deal identification algorithm only if it explored no more than about 6,400,000 bid combinations, which implies a limit of \(\root N \of {6400000}\) bids per agent for N agents. In the experiments, 100 negotiations were run in every condition. The code was implemented in Java 2 (1.5) and run on a \({\text {Core}}^{TM}\) 2 Duo processor iMac with 1.0 GB of memory under Mac OS X 10.4.

3.2.2 Experimental Result

Table 1 shows the revealed rate (%), optimality rate, and number of bids of the compared mechanisms. The mechanism without either threshold adjustment or bid limitation (A) increases the revealed rate; that is, if neither threshold adjustment nor bid limitation is used, agents need to reveal much more of their utility space than in the other mechanisms. Bid limitation is effective for keeping the increase in the revealed rate small. The revealed rate of the mechanism with bid limitation but without threshold adjustment (B) starts decreasing when the number of issues reaches five, because the bid limitation becomes active. Compared with these two mechanisms, the mechanism with threshold adjustment (C) drastically decreases the revealed rate.

Table 1 Revealed rate (%), optimality rate, and number of bids in the experiment. (A) w/o threshold adjustment, (B) w/o threshold adjustment, w/ bid limitation, (C) w/ threshold adjustment are compared in each metric

The proposed threshold adjustment mechanism can effectively reduce the revealed rates, and the optimality it yields is very competitive with the other mechanisms. Regarding optimality, the difference among (A), (B), and (C) is small, at a maximum of around 0.1 for about three to seven issues. When the amount of threshold decrease is not large, say 50, agents can miss agreement points with larger total utilities. This occurs when some agents have high utility at an agreement point but others have much lower utility at that point. (A) forces agents to submit bids for all agreement points with utility larger than the minimum threshold and can thus find such cases, whereas (B) and (C) can fail to capture such cases when the amount of decrease is small.

The number of bids indicates the size of the utility space that must be explored and the time needed to find a possible deal. The number of bids for (A) increases exponentially; in fact, the program fails to compute the combinations completely for more than six issues when using (A). Threshold adjustment drastically reduces the number of bids. (B) limits the number of bids manually, so the increase in the number of bids stops at the limit defined above. The proposed mechanism (C), on the other hand, successfully reduces the number of bids drastically without such a manual limit.

4 Secure and Efficient Negotiation Protocols

The Distributed Mediator Protocol (DMP) and the Take it or Leave it (TOL) Protocol are proposed, which reach agreements while concealing agent utility values. In the DMP, it is assumed that there are multiple mediators who search the utility space to find agreements. When searching their search spaces, they employ a multi-party protocol to calculate the sum of the per-agent utility values while concealing the individual values. Furthermore, the DMP scales better with the complexity of the utility space by dividing the search space among the mediators. In the TOL Protocol, the mediator searches using the hill-climbing (HC) search algorithm; the evaluation value is determined by the agents’ responses, who either take or leave an offer to move from the current state to a neighboring state.

The Hybrid Secure Protocol (HSP) is also proposed, combining the DMP and TOL. In the HSP, TOL is performed first to improve the initial state in the DMP step. Next, the DMP is performed to find the local optima in the neighborhood. The HSP can also reach an agreement and conceal the per-agent utility information. Additionally, it can reduce the amount of memory required to make an agreement, which is a major issue in the DMP. Moreover, the HSP can reduce the communication cost (memory usage) more than the DMP can.

Although the DMP and HSP describe interactions among agents and mediators, they do not define the agreement search method, which is how the mediator searches for and finds agreement points. Thus, three agreement search methods are compared: HC, simulated annealing (SA), and a genetic algorithm (GA). HC and SA have been employed in previous works [43]. However, GAs also perform well in finding highly optimal contracts. Therefore, a GA-based method is compared with the other methods.

4.1 Secure Negotiation Protocol

4.1.1 Distributed Mediator Protocol (DMP)

It is assumed that there is more than one mediator (i.e., a distributed mediator), so that the DMP achieves both distributed search and protection of the agents’ private information by employing a multi-party protocol [55, 57]. The DMP proceeds as follows.

There are m mediators (\(M_1, \ldots , M_m\)), any k of whom can together calculate the sum of all the agents’ utility values, and n agents (\(Ag_1, \ldots , Ag_n\)). All mediators share a prime number q.

Step 1::

The mediators divide the utility space (search space) and decide which mediator will manage each part. The method of dividing the search space and assigning the parts is beyond the scope of this discussion. Because the search space is divided, parallel computation is possible, which means that the computational complexity of the search can decrease.

Step 2::

Each mediator searches its part of the search space with a local search algorithm [58]; HC and SA are examples of local search algorithms. The objective of the local search is to maximize social welfare. During the search, the mediator initiates a multi-party protocol whenever it visits a state for the first time. The mediator then selects k mediators from all the mediators and asks all agents to generate shares v.

Step 3::

Agent i (\(A_i\)) randomly selects a polynomial \(f_i\) of degree \(k-1\) that fulfills \(f_i(0)=x_i\), where \(x_i\) is agent i’s utility value, and calculates \(v_{i,j}=f_i(j)\). Next, \(A_i\) sends \(v_{i,j}\) to \(M_j\).

Step 4::

Mediator j (\(M_j\)) receives \(v_{1,j}, \ldots , v_{n,j}\) from all the agents, calculates \(v_j =(v_{1,j}+ \cdots + v_{n,j}) \bmod q\), and reveals \(v_j\) to the other mediators.

Step 5::

The mediators use Lagrange’s interpolating polynomial to calculate the polynomial f that fulfills \(f(j) = v_j\) for each j. Finally, \(s = f(0)\) is the sum of all the agents’ utility values.

Steps 2–5 are repeated until the termination condition of the local search algorithm is fulfilled.

Step 6::

Each mediator announces the maximum value (alternative) found in its part of the space to all the mediators. The mediators then select the maximum value from all the alternatives.
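Steps 3–5 amount to securely summing the agents' utility values with a Shamir-style secret sharing over a prime field. The following self-contained Java sketch shows the idea: each agent splits its value into shares using a random degree-(k−1) polynomial, each mediator adds the shares it receives modulo q, and any k mediators reconstruct the sum by Lagrange interpolation at 0. The field size, indices, and helper names are illustrative, not the chapter's implementation.

```java
import java.util.*;

/** Hedged sketch of the secure summation used in the DMP (Steps 3-5): agents split their
 *  utility values into Shamir-style shares over a prime field, mediators add the shares they
 *  receive, and any k mediators reconstruct the sum with Lagrange interpolation at 0. */
public class SecureSum {

    static final long Q = 104_729;                 // shared prime modulus q (illustrative)
    static final Random RNG = new Random(42);

    /** Agent side: shares of x for mediators 1..m, using a random degree-(k-1) polynomial with f(0)=x. */
    static long[] share(long x, int m, int k) {
        long[] coeff = new long[k];
        coeff[0] = Math.floorMod(x, Q);            // f(0) = x_i, the agent's private utility value
        for (int d = 1; d < k; d++) coeff[d] = Math.floorMod(RNG.nextLong(), Q);
        long[] shares = new long[m];
        for (int j = 1; j <= m; j++) {             // v_{i,j} = f_i(j) mod q
            long v = 0, pow = 1;
            for (long c : coeff) { v = Math.floorMod(v + c * pow, Q); pow = Math.floorMod(pow * j, Q); }
            shares[j - 1] = v;
        }
        return shares;
    }

    /** Mediator side: Lagrange interpolation at 0 from k points (j, v_j) recovers the summed secret. */
    static long reconstruct(int[] js, long[] vs) {
        long s = 0;
        for (int a = 0; a < js.length; a++) {
            long num = 1, den = 1;
            for (int b = 0; b < js.length; b++) {
                if (a == b) continue;
                num = Math.floorMod(num * -js[b], Q);
                den = Math.floorMod(den * (js[a] - js[b]), Q);
            }
            long lagrange = Math.floorMod(num * modInverse(den), Q);
            s = Math.floorMod(s + vs[a] * lagrange, Q);
        }
        return s;
    }

    static long modInverse(long a) {               // Fermat's little theorem: a^(q-2) mod q
        long result = 1, base = Math.floorMod(a, Q), e = Q - 2;
        while (e > 0) {
            if ((e & 1) == 1) result = Math.floorMod(result * base, Q);
            base = Math.floorMod(base * base, Q);
            e >>= 1;
        }
        return result;
    }

    public static void main(String[] args) {
        long[] utilities = {300, 450, 120};        // three agents' private utility values
        int m = 2, k = 2;                          // two mediators; both must cooperate to reconstruct
        long[] mediatorSums = new long[m];
        for (long u : utilities) {                 // each mediator adds the shares it receives (Step 4)
            long[] sh = share(u, m, k);
            for (int j = 0; j < m; j++) mediatorSums[j] = Math.floorMod(mediatorSums[j] + sh[j], Q);
        }
        System.out.println(reconstruct(new int[]{1, 2}, mediatorSums)); // 870 = 300 + 450 + 120
    }
}
```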

Fig. 4 Flow of the distributed mediator protocol (DMP). There are three agents and two mediators; if the two mediators get together, they can calculate the sum of the per-agent utility values. The shaded area shows the steps that the agents perform without revealing their values. As the figure indicates, the sum of all agent utility values can be calculated while the individual values are concealed, by selecting the polynomial (\(f_i\)), generating the shares (v), adding the shares, and applying Lagrange’s interpolating polynomial

Figure 4 shows the flow of the DMP. There are three agents and two mediators; if the two mediators get together, they can calculate the sum of the per-agent utility values. The shaded area shows the steps that the agents perform without revealing their values. As the figure indicates, the sum of all agent utility values can be calculated while the individual values are concealed, by selecting the polynomial (\(f_i\)), generating the shares (v), adding the shares, and applying Lagrange’s interpolating polynomial.

The DMP has the advantages of keeping an agent’s utility information private and scaling well with the size of the utility space. The details are given as follows.  

Privacy::

The DMP can calculate the sum of all the agents’ utility values while concealing the individual values. The proof is identical to that for the multi-party protocol [57]. In the DMP, other agents and mediators cannot learn the utility values without illegal collusion.

Additionally, k, the number of mediators performing the multi-party protocol, represents a trade-off between privacy and computational complexity. If k mediators illegally exchange their shares (v), they can expose the agents’ utility values. Therefore, to protect an agent’s private information, k should be large enough that mediators are discouraged from colluding illegally, because collusion then requires considerable effort. However, a large k requires more computation time because more mediators have to stop searching.

Scalability::

The computational cost can be greatly reduced because the mediators divide the search space. Existing protocols cannot find better agreements when the search space becomes huge; by dividing the search space, this protocol can locate better agreements in large search spaces.

The DMP has a limitation: too many shares (v) are generated, because shares are generated for every state visited in the search space. As the number of agents increases, generating shares incurs a much greater communication cost than searching without generating shares. Thus, it is necessary to generate fewer shares while maintaining high optimality.

4.1.2 Take it or Leave it (TOL) Protocol for Negotiation

The Take it or Leave it (TOL) Protocol is proposed, which can also reach agreements while concealing all the agents’ utility information. The mediator searches using the HC search algorithm [58], a simple loop that continuously moves in the direction of increasing evaluation value. The value of each contract is evaluated through the decisions that agents make to take or leave offers to move from the current state to a neighboring state. The agents can thus conceal their utility values. This protocol consists of the following steps.

 

Step 1::

The mediator randomly selects the initial state.

Step 2::

The mediator asks the agents to move from the current state to a neighboring state.

Step 3::

Each agent compares its current state with the neighboring state and determines whether to take the offer or leave it. The agent takes the offer if the neighboring state provides a higher utility value than the current state; if the current state provides a utility value higher than or equal to that of the neighboring state, the agent rejects (leaves) the offer.

Step 4::

The mediator selects as the next state the one declared by the most agents as “take it.” If two or more states are tied, the mediator selects the next state among them at random. Random selection helps the mediator avoid getting stuck in local maxima.

Steps 2, 3, and 4 are repeated until all agents declare “leave it,” or the mediator determines that a plateau has been reached. A plateau is an area of the state space landscape where the evaluation function is flat.
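The following hedged Java sketch captures the TOL loop: the mediator proposes neighbors of the current contract, each agent answers only "take it" or "leave it", and the mediator moves to the neighbor with the most "take it" votes, breaking ties at random. The single-issue neighborhood and the step cap (standing in for the mediator's plateau check) are illustrative choices, not the original implementation.

```java
import java.util.*;
import java.util.function.ToDoubleFunction;

/** Hedged sketch of the Take it or Leave it (TOL) protocol: the mediator hill-climbs over
 *  contracts while each agent only answers "take it" or "leave it" per proposed neighbor,
 *  so no utility values are revealed. */
public class TakeItOrLeaveIt {

    static final Random RNG = new Random(7);

    static int[] negotiate(List<ToDoubleFunction<int[]>> agentUtilities, int[] start,
                           int maxValue, int maxSteps) {
        int[] current = start.clone();
        for (int step = 0; step < maxSteps; step++) {         // step cap stands in for plateau check
            List<int[]> best = new ArrayList<>();
            int bestVotes = 0;
            // Neighbors: change one issue by +/- 1.
            for (int i = 0; i < current.length; i++) {
                for (int d : new int[]{-1, 1}) {
                    int v = current[i] + d;
                    if (v < 0 || v > maxValue) continue;
                    int[] neighbor = current.clone();
                    neighbor[i] = v;
                    int votes = 0;                            // agents declaring "take it"
                    for (ToDoubleFunction<int[]> u : agentUtilities)
                        if (u.applyAsDouble(neighbor) > u.applyAsDouble(current)) votes++;
                    if (votes > bestVotes) { bestVotes = votes; best.clear(); }
                    if (votes == bestVotes && votes > 0) best.add(neighbor);
                }
            }
            if (best.isEmpty()) break;                        // every agent said "leave it"
            current = best.get(RNG.nextInt(best.size()));     // random tie-break among best states
        }
        return current;
    }

    public static void main(String[] args) {
        // Two toy agents preferring all issue values near 6 and near 4, respectively.
        List<ToDoubleFunction<int[]>> agents = List.of(
                c -> -Math.abs(c[0] - 6) - Math.abs(c[1] - 6),
                c -> -Math.abs(c[0] - 4) - Math.abs(c[1] - 4));
        System.out.println(Arrays.toString(negotiate(agents, new int[]{0, 9}, 9, 50)));
    }
}
```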

Fig. 5 Take it or Leave it (TOL) Protocol. First, the mediator informs the agents of the state whose evaluation value it wants to know. Second, the agents search their utility spaces and declare “take it” or “leave it.” The evaluation value \({\text {VALUE (state)}}\) is the number of agents who declare “take it.” These steps are repeated until the termination condition is satisfied

Figure 5 shows the concept of the Take it or Leave it (TOL) Protocol. First, the mediator informs the agents of the state whose evaluation value it wants to know. Second, the agents search their utility spaces and declare “take it” or “leave it.” The evaluation value \({\text {VALUE (state)}}\) is the number of agents who declare “take it.” These steps are repeated until the termination condition is satisfied. The TOL Protocol has the advantage of lower time complexity because the evaluation value is easy to compute; however, it cannot find optimal solutions when a plateau is reached.

4.2 Hybrid Secure Negotiation Protocol (HSP)

A protocol that combines the DMP with TOL is proposed to address the DMP’s limitation. This new protocol, called the HSP, generates fewer shares than the DMP. It proceeds as follows.

 

Step 1::

The mediators divide the utility space (search space) and choose a mediator to manage each part.

Step 2::

Each mediator searches its part of the search space using TOL, starting from a randomly selected initial state. By performing TOL first, the mediators can find reasonably good solutions without generating shares (v).

Step 3::

Each mediator then searches its part of the search space using Steps 2–5 of the DMP proposed in Sect. 4.1, starting from the solution found in the previous step. By performing the DMP after TOL, mediators can find the local optima in the neighborhood while concealing each agent’s private information.

Steps 2 and 3 are repeated many times by changing the initial state.  

Step 4::

Each mediator communicates the maximum value (alternative) found in its part of the space to all the mediators. The mediators then select the maximum value from all the alternatives and propose this alternative as the agreement point.

The HSP can find solutions with fewer shares than the DMP because the initial state in Step 3 has a higher value than when only the DMP is performed. In addition, TOL does not generate shares, and the DMP searches only states that TOL has not searched. Thus, the HSP can reduce the number of shares. Furthermore, both TOL and the DMP protect the agents’ utility values (private values), so the HSP also preserves them.

Moreover, the HSP yields higher optimality. TOL alone usually stops searching after reaching a plateau, and the main reason for lower optimality in the DMP alone is that it gets stuck in local optima; in the HSP, however, the initial state in Step 3 is usually better because it is determined by TOL. Therefore, the HSP can produce agreements with higher optimality.
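The HSP control flow can be summarized with the hedged Java sketch below: from each random start, a share-free TOL phase improves the state and a DMP local-search phase then refines it, with the best alternative over all restarts proposed as the agreement. The phase operators and the welfare evaluator are stand-ins for the protocols sketched earlier, not the original implementation.

```java
import java.util.*;
import java.util.function.ToDoubleFunction;
import java.util.function.UnaryOperator;

/** Hedged sketch of the HSP control flow: a share-free TOL phase improves each random start,
 *  a DMP local-search phase refines it, and the best alternative over all restarts is proposed. */
public class HybridSecureProtocol {

    static int[] run(UnaryOperator<int[]> tolPhase, UnaryOperator<int[]> dmpPhase,
                     ToDoubleFunction<int[]> secureWelfare, int issues, int maxValue,
                     int restarts, Random rng) {
        int[] best = null;
        double bestValue = Double.NEGATIVE_INFINITY;
        for (int r = 0; r < restarts; r++) {
            int[] start = rng.ints(issues, 0, maxValue + 1).toArray(); // Step 2: random initial state
            int[] improved = tolPhase.apply(start);                    // TOL: cheap improvement, no shares
            int[] local = dmpPhase.apply(improved);                    // Step 3: DMP local search from there
            double value = secureWelfare.applyAsDouble(local);         // securely summed social welfare
            if (value > bestValue) { bestValue = value; best = local; }
        }
        return best;                                                   // Step 4: best alternative proposed
    }

    public static void main(String[] args) {
        // Toy stand-ins: the TOL phase nudges every issue toward 5, the DMP phase is the identity,
        // and welfare prefers contracts whose issues are close to 5.
        UnaryOperator<int[]> tol = c -> Arrays.stream(c).map(v -> v < 5 ? v + 1 : v > 5 ? v - 1 : v).toArray();
        ToDoubleFunction<int[]> welfare = c -> Arrays.stream(c).map(v -> 10 - Math.abs(v - 5)).sum();
        System.out.println(Arrays.toString(run(tol, UnaryOperator.identity(), welfare, 3, 9, 5, new Random(3))));
    }
}
```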

4.3 Experiments

4.3.1 Experimental Setting

In each experiment, 100 negotiations between agents with randomly generated utility functions were run. In these experiments, the number of agents was six, and the number of mediators was four.

The following methods were compared:

  • “(A) DMP (SA)” is the Distributed Mediator Protocol, and the search algorithm is simulated-annealing [58].

  • “(B) DMP (HC)” is the Distributed Mediator Protocol, and the search algorithm is hill-climbing [58].

  • “(C) DMP (GA)” is the Distributed Mediator Protocol, and the search algorithm is the genetic algorithm [58].

  • “(D) HSP (SA)” is the hybrid secure protocol, and the search algorithm in the distributed mediator step is simulated annealing.

  • “(E) HSP (HC)” is the hybrid secure protocol, and the search algorithm in the distributed mediator step is the hill-climbing algorithm.

In the optimality experiments, an optimizer was applied to the sum of all the agents’ utility functions for each run to find the contract with the highest possible social welfare. This value was used to assess the efficiency of the negotiation protocols (i.e., how closely they approached the optimal social welfare). SA was used to find the optimum contract because an exhaustive search becomes intractable as the number of issues grows very large. The SA initial temperature was 50.0 and decreased linearly to 0 over 2500 iterations. The initial contract for each SA run was randomly selected. The optimality rate is defined as (Maximum utility value found by each method)/(Optimum contract value found using SA).

The number of agents was six, and the number of mediators was \(2^{({\text {the~number~of~issues}})}\). In the DMP, the mediators can calculate the sum of the per-agent utility values if four mediators get together, and the search space is divided equally.

Utility function: The domain for the issue values is [0, 9]. The constraints include 10 unary constraints, 5 binary constraints, 5 ternary constraints, and so forth (a unary constraint relates to one issue, a binary constraint relates to two issues, and so on). The maximum value for a constraint is \(100 \times ({\text {Number~of~Issues}})\), so constraints that involve many issues have larger weights on average, which seems reasonable for many domains. In meeting scheduling, for example, higher-order constraints affect more people than lower-order constraints; hence, they are more important. The maximum width for a constraint is 7.

The following parameters are set for HC, SA, and GA.  

Hill climbing (HC)::

The number of iterations is 20 + (Number of issues) \(\times \) 5. The final result is the maximum value achieved.

Simulated annealing (SA)::

The annealing schedule for the DMP uses an initial temperature of 50. In each iteration, the temperature is decreased by 0.1, so it reaches 0 after 500 iterations. \(20 \times ({\text {Number~of~issues}}) \times 5\) searches were conducted while varying the initial start point. The annealing schedule for the HSP in the DMP step uses an initial temperature of 10 with 100 iterations. Note that the annealer must not run very long or become very hot, because each initial state obtained by TOL would then tend to find the global optimum instead of the peak of the optimum nearest that initial state in the DMP.

Genetic algorithm (GA)::

The population size in one generation is 20 + (Number of Issues) \(\times \) 5. A basic crossover method combining two parent individuals to produce two children (one-point crossover) is used. The fitness function is the sum of all the agents’ (declared) utilities. 500 iterations were conducted. Mutations occur with a very small probability; in a mutation, one of the issues in a contract vector is randomly chosen and changed. In the GA-based method, an individual is defined as a contract vector.

The code was implemented in Java 2 (1.5) and run on a \({\text {Core}}^{TM}\) 2 Duo processor iMac with 1.0 GB of memory under Mac OS X 10.5.

4.3.2 Experimental Results

Table 2 shows the optimality rates and the average number of shares (v) for the five protocols. For (B) DMP (HC), the rate decreases rapidly as the number of issues increases because HC reaches local optima as the search space grows. For (C) DMP (GA), the rate does not decrease rapidly even when the number of issues increases. Additionally, (A) DMP (SA) matches the optimal solution. Therefore, the optimality depends on the search algorithm used in the DMP. (E) HSP (HC) achieves high optimality because the HSP performs the DMP after performing TOL. In addition, (E) HSP (HC) achieves higher optimality than (D) HSP (SA) because SA in the DMP step sometimes stops at a state worse than the initial state owing to its random nature, whereas HC always stops at a state no worse than the initial state.

Table 2 Optimality rate and the number of shares per agent in the experiment. “(A) DMP (SA),” “(B) DMP (HC),” “(C) DMP (GA),” “(D) HSP (SA),” and “(E) HSP (HC)”are compared in each metric

The number of shares enables us to compare the memory usage of the protocols. The number of shares for (C) DMP (GA) increases exponentially. On the other hand, (A) DMP (SA) and (B) DMP (HC) use fewer shares than (C) DMP (GA) because GA searches more states than SA and HC; the number of shares in the DMP thus depends on the characteristics of the search algorithm. Furthermore, (D) HSP (SA) and (E) HSP (HC) use fewer shares than (A) DMP (SA), (B) DMP (HC), and (C) DMP (GA) because the initial state of the DMP step in the HSP has a higher value than the initial state in the plain DMP, since TOL is performed first. Thus, the HSP can reduce the number of shares more than the DMP can.

5 Secure and Fair Protocol that Addresses Weaknesses in Nash Bargaining Solution

The Nash bargaining solution, which maximizes the product of the agent utilities, is a well-known metric that provably identifies the optimal (fair and social-welfare-maximizing) agreement for negotiations in linear domains [8, 52, 53]. In nonlinear domains, however, the Pareto frontier will often not satisfy the convexity assumption required to make the Nash solution optimal and unique [8, 52, 54]. In other words, in nonlinear domains, multiple agreements can satisfy the Nash bargaining solution, and many or all of these will have sub-optimal fairness and social welfare. Therefore, a new approach is necessary to produce good outcomes for nonlinear negotiations.

A Secure and Fair Mediator Protocol (SFMP) that addresses this challenge is presented. The protocol consists of two primary steps. In the first step, the SFMP uses a nonlinear optimizer, integrated with a secure information-sharing technique called the secure gathering protocol [55], to find the Pareto frontier without requiring agents to reveal private utility information. In the second step, an agreement is selected from the set of Pareto-optimal contracts using a metric called approximated fairness, which measures how equally the total utility is divided across the negotiating agents (e.g., [56]). It is shown that the SFMP produces better scalability and social welfare than previous nonlinear negotiation protocols.

5.1 Weaknesses of the Nash Bargaining Solution in Nonlinear Negotiation

Working in the nonlinear domain has some important impacts on the types of negotiation protocols that can be effective. First, consider Pareto-optimality, which is widely recognized as a basic requirement for a good negotiation outcome. It is defined as follows: contract \(\vec{s}=(s_1,\ldots ,s_{M})\) is Pareto optimal if there is no \(\vec{s}'\) such that \(u_i(\vec{s}~') > u_i(\vec{s})\) for all agents (\(u_i(\vec{s})\) is agent i’s utility value). Pareto-optimality thus eliminates all contracts for which another contract exists that is better for all the parties involved. In a linear negotiation (i.e., where the agent utility functions are defined as the weighted sum of the values for each issue), it is computationally trivial to find the Pareto frontier, and the social welfare (sum of agent utilities) is the same for every contract on the Pareto frontier. In the proposed nonlinear model, by contrast, the Pareto frontier will be sparse, i.e., the Pareto-optimal contract points will be few and widely scattered.
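As an illustration, the following hypothetical Java sketch filters a finite set of candidate contracts down to the Pareto frontier using the standard dominance test (another contract is at least as good for every agent and strictly better for at least one). The names are assumptions made for this example.

```java
import java.util.ArrayList;
import java.util.List;

final class ParetoFilterSketch {
    /** Returns true if utility vector b is at least as good as a for every agent and strictly better for one. */
    static boolean dominates(double[] b, double[] a) {
        boolean strictlyBetter = false;
        for (int i = 0; i < a.length; i++) {
            if (b[i] < a[i]) return false;
            if (b[i] > a[i]) strictlyBetter = true;
        }
        return strictlyBetter;
    }

    /** Keeps only the contracts (rows of utilities[contract][agent]) that no other contract dominates. */
    static List<Integer> paretoFront(double[][] utilities) {
        List<Integer> front = new ArrayList<>();
        for (int s = 0; s < utilities.length; s++) {
            boolean dominated = false;
            for (int t = 0; t < utilities.length && !dominated; t++) {
                if (t != s && dominates(utilities[t], utilities[s])) dominated = true;
            }
            if (!dominated) front.add(s);
        }
        return front;
    }
}
```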

Next, let us consider fairness. Fairness is critical in bargaining theory because some experimental results suggest that it profoundly influences human decision-making (e.g., [59]) in such contexts as family decision-making (e.g., where will we go on our next vacation?), the less formal economy of consumer transactions (such as ticket scalpers or flea markets), and price setting for consumer purchases. The ultimatum game is a popular example of this effect [60, 61]. People tend to offer “fair” (i.e., 50:50) splits, and offers of less than 20% are often rejected in this game, even though it is irrational to reject any deal because the alternative is a zero payoff. There are many other studies about the relationship between decision-making and fairness in experimental and behavioral economics [29, 62].

Fig. 6 Relationships among Nash product, fairness, and social welfare in a linear utility function (panels: agent utility values, Pareto front, and Nash bargaining product with fair division and social welfare, plotted against contracts)

The Nash bargaining solution (i.e., the contract that maximizes the Nash product, the product of the agents’ utility functions) is widely used for identifying the fairest contract from those that make up the Pareto frontier. As shown in Fig. 6, the Nash bargaining solution divides the utility equally among the negotiating parties in a linear domain. It can be proven that there is a unique Nash bargaining solution for negotiations with convex Pareto frontiers, a condition that is satisfied trivially for negotiations with linear utilities [8].

Fig. 7 Relationship among Nash product, fairness, and social welfare in a non-linear utility function (panels: agent utility values, Pareto front, and Nash bargaining solution, fairness, and social welfare, plotted against contracts)

These properties change radically in nonlinear negotiation. As shown in Fig. 7, when agents have nonlinear utility functions, the Pareto frontier can be non-convex [63]. Multiple Nash bargaining solutions can exist, even with continuous issue domains, and some of them may be non-optimal in terms of social welfare and fair division of utility. It is even straightforward to find nonlinear cases where all the contracts on the Pareto frontier are Nash bargaining solutions, although many diverge widely from maximal fairness and social welfare. The Nash bargaining solution concept, widely used as a basis for negotiation protocols for linear domains, will thus often fare poorly in nonlinear domains. Therefore, it is necessary to find negotiation protocols that can achieve high social welfare and fairness values with nonlinear agent utilities.
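As a small, hypothetical numerical illustration (not taken from the experiments), suppose a two-agent negotiation whose non-convex Pareto frontier contains only three contracts with the utility pairs below. All three maximize the Nash product, so all are Nash bargaining solutions, yet they differ in social welfare and only one of them divides the utility equally:

$$\begin{aligned}&(u_1,u_2) \in \{(8,2),~(4,4),~(2,8)\}: \quad u_1 u_2 = 16~{\text {for~all~three~contracts}},\\ &u_1+u_2 = 10,~8,~10, \quad {\text {and~only}}~(4,4)~{\text {divides~the~utility~equally}}. \end{aligned}$$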

5.2 Secure and Fair Mediator Protocol with Approximated Fairness

The SFMP was defined to achieve these goals while protecting agents’ private utility information. It consists of two primary steps: (1) finding the set of Pareto-optimal contracts and (2) selecting a fair contract from that set. These steps are defined below.

  • Finding the Pareto Frontier: This step is achieved using a mediated approach [64, 65]. The mediators use the agents’ (securely gathered) utility information to provide the objective function for a non-linear optimization technique such as simulated annealing (SA) or a genetic algorithm (GA). Over the course of multiple rounds, the mediators converge on the set of Pareto-optimal contracts. As is common in negotiation contexts, agents prefer not to share their utility functions with others in order to preserve a competitive edge. Accordingly, the protocol uses a secure gathering protocol based on a multi-party protocol [55] to ensure that the mediators can calculate the sum of the agents’ utilities without learning, or revealing, any individual agent’s utility information.

  • Selecting the Final Agreement: The SFMP selects the final agreement from the Pareto-optimal contract set by identifying the fairest contract. Several definitions of fairness have been identified in social choice and game theory [56]. Suppose a division \(X=X_{1}\cup \cdots \cup X_{n}\) among n agents, where agent i receives \(X_{i}\). “Simple” fair division results if \(u_i(X_i)\ge 1/n\) whenever \(1\le i\le n\) (each agent gets at least 1/n). Another definition, from game theory, calls a division X fair if and only if it is Pareto-optimal and envy-free [66]. A division is “envy-free” if no agent feels another has a strictly larger piece of the utility [56].

Simple fair division is adopted here as the concept of fairness. Contract agreements, in general, rarely fully satisfy this condition. Accordingly, how close an agreement is to simple fair division is measured by calculating its “approximated fairness”, i.e., the deviation of each agent’s utility from the average of the total utility. The approximated fairness of a contract is formally defined as follows:

$$\begin{aligned} V(u_1, \ldots , u_n) =\displaystyle \sum ^{n}_{i=1} \frac{~(u_i - \overline{u})^2~}{\displaystyle n }\end{aligned}$$
$$\begin{aligned}&(u_1, \ldots , u_n: {\text {agents'~utility~values~in~the~contract}}, \\ & \overline{u}: {\text {the~average~of~all~agents'~utility~values}}). \end{aligned}$$

An ideal contract, therefore, has an approximated fairness value of zero, and all other contracts will have larger values. The final agreement selected by the protocol is the contract from the Pareto-optimal set with the smallest approximated fairness value.
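A minimal Java sketch of this selection rule (hypothetical names, assuming the per-agent utility values of each Pareto-optimal contract, or enough aggregate information to compute their variance, are available after the secure gathering step) is:

```java
import java.util.List;

final class ApproximatedFairnessSketch {
    /** V(u_1,...,u_n) = (1/n) * sum_i (u_i - mean)^2 ; smaller values are fairer. */
    static double approximatedFairness(double[] utilities) {
        double mean = 0.0;
        for (double u : utilities) mean += u;
        mean /= utilities.length;
        double v = 0.0;
        for (double u : utilities) v += (u - mean) * (u - mean);
        return v / utilities.length;
    }

    /** Selects, from the Pareto-optimal set, the index of the contract with the smallest V. */
    static int selectFairest(List<double[]> paretoOptimalUtilities) {
        int best = -1;
        double bestV = Double.POSITIVE_INFINITY;
        for (int i = 0; i < paretoOptimalUtilities.size(); i++) {
            double v = approximatedFairness(paretoOptimalUtilities.get(i));
            if (v < bestV) { bestV = v; best = i; }
        }
        return best;
    }
}
```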

Note that this fairness concept is equivalent to the Nash bargaining solution in linear contexts with continuous issue domains. Assume that \(u_1 + u_2 + \cdots + u_n = K~({\text {constant}})\) (where \(u_i\) is agent i’s utility value). The Nash product is then maximized when \(u_1 = u_2 = \cdots = u_n = K/n\) (this has been proven mathematically in the field of isoperimetric problems). Approximated fairness does not, however, correspond to the Kalai-Smorodinsky solution, because the latter is not always fair [67].
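This equivalence can also be seen from the inequality of arithmetic and geometric means (assuming non-negative utilities): for a fixed total utility K,

$$\begin{aligned} \root n \of {u_1 u_2 \cdots u_n} \le \frac{u_1 + u_2 + \cdots + u_n}{n} = \frac{K}{n}, \end{aligned}$$

with equality, and hence the maximal Nash product \((K/n)^n\), exactly when \(u_1 = u_2 = \cdots = u_n = K/n\); at that point the approximated fairness V is also zero.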

5.3 Experiments

A series of negotiation simulation experiments was run to demonstrate the weaknesses of the Nash bargaining solution in non-linear domains and to compare the performance of the SFMP with that of previous approaches. The sub-sections below describe the experimental setup and results.

5.3.1 Detailed Description of Secure and Fair Mediator Protocol (SFMP)

The SFMP uses multiple mediators to help ensure agent privacy. There are \(k = mn\) mediators \(M_j\) and n agents \(A_i\), where m is an arbitrary integer. Note that this approach requires that m be relatively high to effectively conceal the agents’ private information: if the number of mediators is low, it is more likely that all the mediators will collude and thus compromise the agents’ privacy.

(Optional Pre-Negotiation Step) Contract Space Division among Mediators: The mediators divide the contract space between them so that each mediator searches a different sub-region. Suppose, for example, there are two issues whose domain is the integers from 0 to 10. In this case, Mediator 1 can manage the region of values from 0 to 5 for Issue 1 and from 0 to 10 for Issue 2, while Mediator 2 can manage the region of values from 6 to 10 for Issue 1 and from 0 to 10 for Issue 2. This step is optional, but it has the advantage of potentially reducing the time needed to search the contract space by allowing parallel computation.
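A tiny, hypothetical Java sketch of such a division (splitting one issue's value range into roughly equal sub-ranges, one per mediator, while the other issues keep their full ranges) might look like:

```java
final class ContractSpaceDivisionSketch {
    /**
     * Splits the integer value range [0, maxValue] of one chosen issue into m roughly
     * equal, contiguous sub-ranges (inclusive bounds), one per mediator. All other
     * issues keep their full ranges, as in the two-mediator example above.
     */
    static int[][] divideIssueRange(int maxValue, int m) {
        int size = maxValue + 1;
        int[][] ranges = new int[m][2];
        for (int j = 0; j < m; j++) {
            ranges[j][0] = j * size / m;           // inclusive lower bound
            ranges[j][1] = (j + 1) * size / m - 1; // inclusive upper bound
        }
        return ranges;
    }
}
```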

(Step 1) Secure Search to Find a Pareto-optimal Contract Set: Each mediator searches its assigned portion of the contract space using a local search algorithm [58]. The experiments employed hill-climbing (HC), SA, and GA. In HC, an agent starts with a random solution, makes random mutations at each step, and selects the mutation that yields the largest utility increase; when the algorithm cannot find any further improvement, it terminates. In SA, each step replaces the current solution with a randomly generated nearby contract, with a probability that depends on the change in the utility value and a global parameter T (the virtual temperature) that is gradually decreased during the process. The agent moves almost randomly when the temperature is high but acts increasingly like a hill climber as the temperature decreases; when T reaches 0, the search terminates. The advantage of SA is that it can avoid getting stuck in the local optima that occur in non-linear optimization problems and therefore often finds better solutions than HC. GA is a search technique inspired by evolutionary biology, using inheritance, mutation, selection, and crossover. First, many individual contracts are randomly generated to form an initial population. Next, at each step, a proportion of the existing population is selected based on its fitness (i.e., utility values). Crossover and mutation are then applied to these selections to generate the next generation of contracts. This process is repeated until a termination condition is reached. The objective function of all these local search algorithms is social welfare maximization. At each search step, the mediators determine the social welfare values by securely gathering their assigned agents’ utility values for the current contract(s). This is called secure value gathering.
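The exact construction in [55] is not reproduced here; as one hedged illustration of the general idea behind secure value gathering, the Java sketch below uses simple additive secret sharing: each agent splits its utility value into k random shares, one per mediator, so a single mediator only ever sees meaningless shares and partial sums, yet the combined partial sums equal the social welfare.

```java
import java.util.Random;

/** Illustration of additive secret sharing, not the specific multi-party protocol of [55]. */
final class SecureSumSketch {
    /** Splits a private utility value into k additive shares that sum to the value. */
    static double[] makeShares(double utility, int k, Random rnd) {
        double[] shares = new double[k];
        double rest = utility;
        for (int j = 0; j < k - 1; j++) {
            shares[j] = rnd.nextDouble() * 2000.0 - 1000.0; // random masking share
            rest -= shares[j];
        }
        shares[k - 1] = rest;                               // last share completes the sum
        return shares;
    }

    /**
     * Each mediator j adds up the j-th shares it received from all agents; the sum of the
     * mediators' partial sums equals the social welfare, while no single mediator learns
     * any individual agent's utility value.
     */
    static double socialWelfare(double[][] sharesByAgent /* [agent][mediator] */) {
        int k = sharesByAgent[0].length;
        double total = 0.0;
        for (int j = 0; j < k; j++) {
            double partial = 0.0;                 // computed locally by mediator j
            for (double[] agentShares : sharesByAgent) partial += agentShares[j];
            total += partial;                     // partial sums are then combined
        }
        return total;
    }
}
```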

(Step 2) Identify Agreement: All mediators share the maximum value in their sub-region of the contract space with all other mediators. On the basis of these values, they identify the Pareto-optimal contract set. The mediators then select the contract in that set that minimizes the approximated fairness metric. This represents the final agreement for that negotiation.

5.3.2 Nash Product Maximization Search (NPMS)

For comparison, the Nash Product Maximization Search (NPMS) is used to find the Nash bargaining solution for the tests [58]. The implementation uses SA to maximize the Nash product for the negotiating agents, gathering their utility values using the secure gathering protocol. SA has been shown to be very effective for nonlinear optimization tasks [43]. NPMS thus makes it possible to assess the scale of the performance decrement caused by using the Nash bargaining solution concept in nonlinear domains.

5.3.3 Experimental Setting

Five experiments were conducted to evaluate the effectiveness of the approach. In each experiment, 100 negotiations were run between agents with randomly generated utility functions. The number of agents was six, and the number of mediators was four. The mediators could calculate the sum of the agents’ utilities. The search space was divided equally amongst the mediators. The domain for the issue values was [0, 9]. The constraints included 10 unary constraints, 5 binary constraints, 5 ternary constraints, and so on (a unary constraint relates to one issue, a binary constraint relates to two issues, and so on). The maximum value for a constraint was \(100 \times ({\text {Number~of~issues}})\). Constraints that satisfy many issues thus have, on average, larger utility, which seems reasonable for many domains. In scheduling meetings, for example, higher-order constraints affect more people than lower-order constraints; hence, they are more important. The maximum width for a constraint was 7. The following constraints, for example, are both valid: Issue 1 = [2, 6] and Issue 3 = [2, 9].

The following negotiation protocols were compared: SFMP (SA), SFMP (HC), SFMP (GA), Nash Product Maximization Search (NPMS), Basic Bidding protocol, and Exhaustive Search.

  • (A) SFMP (SA): This is SFMP using SA as the optimization algorithm. The initial temperature was 50. For each iteration, the temperature decreased by 0.1, and so 500 iterations were performed. 20 + (Number of issues) \(\times \) 5 searches were conducted, randomly changing the initial start point for each search.

  • (B) SFMP (HC): This is SFMP using HC as the optimization algorithm. The random-restart HC mechanism [58] is employed. 20 + (Number of issues) \(\times \) 5 searches were conducted, randomly changing the initial start point for each search.

  • (C) SFMP (GA): This is the SFMP using a GA as the optimization algorithm. The population size was 20 + (Number of issues) \(\times \) 5. A basic crossover method combining two parent individuals to produce two children (one-point crossover) was used. The fitness function was the sum of all the agents’ (declared) utilities. 500 iterations were conducted. Mutations occurred with a tiny probability; in a mutation, one of the issues in a contract vector was randomly chosen and changed.

  • (D) Nash Product Maximization Search (NPMS): NPMS used SA to search for the Nash bargaining solution(s), i.e., for contracts that maximize the Nash product. The initial temperature was 50. The temperature decreased by 0.1 for each iteration, so 500 iterations were performed. 20 + (Number of issues) \(\times \) 5 searches were conducted, randomly changing the initial start point for each search. These settings are the same as those for SFMP (SA).

  • (E) Basic Bidding protocol: The basic bidding protocol is the one proposed in [43]. The number of samples taken during random sampling is (Number of issues) \(\times \) 200. The threshold used to remove contract points that have low utility is 200. The limit on the number of bids per agent is \(\root N \of {6400000}\) for N agents. This method fails to reach an agreement if the number of issues exceeds eight because it is computationally very complex.

  • (F) Exhaustive Search: An exhaustive search is a centralized brute-force algorithm that traverses the entire contract search space to find the Pareto-optimal contract set. The final agreement is then selected using the approximated fairness measure. This approach was computationally practical only when the number of issues was seven or fewer.

The code was implemented in Java 2 (1.5) and was run on a \({\text {Core}}^{TM}\) 2 Duo processor iMac with 1.0 GB of memory under Mac OS X 10.5.

5.3.4 Experimental Results

Table 3 compares the social welfare, the number of Pareto-optimal contracts, and the variance in the agents’ utilities for the final agreements achieved by these six methods.

Table 3 Social welfare and success rate in finding Pareto-optimal contracts. (A) SFMP (SA), (B) SFMP (HC), (C) SFMP (GA), (D) Nash Product Maximization Search (NPMS), (E) Basic Bidding protocol, and (F) Exhaustive Search are compared in each metric. A “–” indicates that the score could not be obtained in practical time because of the computational complexity. The social welfare is reported as (Social welfare for the final agreement from the method)/(Social welfare for the final agreement from SFMP (SA)). As predicted, SFMP (SA) and SFMP (GA) outperformed NPMS, confirming the claim that the Nash bargaining solution produces sub-optimal outcomes when applied to non-linear negotiation

Regarding social welfare, (A) SFMP (SA) and (C) SFMP (GA) performed similarly. Neither had fully optimal results, reflecting the difficulty of performing optimization in large non-linear contract spaces. All the SFMP variants outperformed the basic bidding protocol, which was hampered by the limit on the number of bids per agent necessitated by the combinatorics of winner determination in that protocol. The performance of (B) SFMP (HC) decreased rapidly as the number of issues grew because HC became stuck on local optima. The performance of (A) SFMP (SA) and (C) SFMP (GA) did not decrease appreciably as the number of issues increased.

Regarding the success rate in finding Pareto-optimal contracts, (A) SFMP (SA) and (C) SFMP (GA) were better at finding Pareto-optimal contracts than either the NPMS or the basic bidding protocol. This makes sense because the SFMP variants ((A)–(C)) were explicitly designed to find the entire Pareto frontier before selecting a final agreement, whereas the other protocols were not. (A) SFMP (SA) and (C) SFMP (GA) outperformed the basic bidding protocol because the latter often fails to find Pareto-optimal solutions owing to the limit on the number of bids allowed to each agent. As before, the performance of (B) SFMP (HC) decreased rapidly as the number of issues grew. (C) SFMP (GA) showed the highest performance on this measure because GA is inherently more suitable for finding Pareto-optimal contract sets. However, for all the methods, the percentage of Pareto-optimal contracts found decreased drastically as the number of issues increased.

Regarding the variance in the agents’ utilities for the final agreements, which assesses their fairness, the SFMP variants ((A)–(C)) outperformed the basic bidding protocol because the latter does not consider fairness when finding agreements. (C) SFMP (GA) showed the lowest (best) value among the SFMP variants. (D) NPMS outperformed the SFMPs on this measure, which appears to contradict the expectation that Nash bargaining solutions vary widely in their fairness values and should therefore lead NPMS to produce sub-optimal fairness values on average.

These results can be explained by considering the allocation of computational effort in non-linear optimization. In an even moderately large non-linear optimization problem, the contract space is too large to explore exhaustively. For example, if there are only ten issues with ten possible values per issue, this produces a space of \(10^{10}\) (10 billion) possible contracts. As a result, with limited computational resources, there is no guarantee of finding the complete Pareto frontier. The SFMP is presumably able to find only a subset of the Pareto-optimal contracts, and those are scattered over the entire frontier. Because the coverage is sparse, the SFMP will often not find the Pareto-optimal contract that optimizes the fairness metric, which reduces the average fairness score for the SFMP. The NPMS, in contrast, devotes its entire computational effort to finding a single Nash-product-maximizing contract. Even though it uses an inferior optimization objective, it has the benefit of a more concentrated application of computing resources.

Fig. 8 Comparison of SFMP and NPMS in the outcome space (agent 2 utility versus agent 1 utility, showing the Pareto frontier and the ideal Pareto-optimal and fair contract). The diamond symbols indicate the contracts considered by the NPMS, and the square symbols indicate those considered by the SFMP

This interpretation is supported by Fig. 8, which shows the utility values for the SFMP ((A)–(C)) and (D) NPMS for a case with two agents and five issues with randomly generated non-linear utility functions. The diamond symbols indicate the contracts considered by the NPMS, and the square symbols indicate those considered by the SFMP. Because the SFMP aims to find the entire Pareto frontier, it searches throughout the frontier. The NPMS, by contrast, aims to find the contract that directly maximizes the Nash product; hence, it focuses its search toward the middle of the Pareto frontier. In this case, the SFMP came closer to the Pareto frontier than the NPMS.

6 Decomposing the Contract Space Based on Issue Interdependencies

One of the main challenges in developing effective non-linear negotiation protocols is scalability; it can be challenging to find high-quality solutions when there are many issues owing to computational intractability. One reasonable approach to reducing computational cost while maintaining high-quality outcomes is to decompose the contract space into several independent sub-spaces. A method is proposed for decomposing a contract space into sub-spaces according to the agents’ utility functions. A mediator finds sub-contracts in each sub-space based on votes from the agents and combines the sub-contracts to produce the final agreement. It is experimentally demonstrated that the proposed protocol achieves highly optimal outcomes with greater scalability than previous efforts.

Incentive compatibility issues are also addressed [68]. Any voting scheme introduces the potential for strategic non-truthful voting by the agents, and the proposed method is no exception. For example, one agent may always vote truthfully, whereas another exaggerates so that its votes are always “strong.” It has been shown that this biases the negotiation outcomes to favor the exaggerator at the cost of reduced social welfare. A limit on the number of strong votes is applied in combination with the decomposition of the contract space into several largely independent sub-spaces, and it is investigated whether and how this countermeasure carries over to contract space decomposition.

6.1 Strength of Issue Interdependency

The strength of an issue interdependency is captured by the interdependency rate. A measure is defined for the interdependency between \(i_j\) and \(i_{jj}\) for agent a (\(D_a(i_j,i_{jj})\)) as follows:

$$\begin{aligned}D_a(i_j,i_{jj}) = \sharp \{c_k | \delta _a(c_k,i_j) \not = \emptyset ~\wedge ~\delta _a(c_k,i_{jj}) \not =\emptyset \}.\end{aligned}$$

This measures the number of constraints that inter-relate the two issues.
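A small Java sketch of this count (reusing the hypothetical Constraint class from the earlier sketch) is:

```java
import java.util.List;

final class InterdependencySketch {
    /** D_a(i, j): number of agent a's constraints that involve both issue i and issue j. */
    static int interdependencyRate(List<Constraint> agentConstraints, int issueI, int issueJ) {
        int count = 0;
        for (Constraint c : agentConstraints) {
            boolean hasI = false, hasJ = false;
            for (int issue : c.issues) {
                if (issue == issueI) hasI = true;
                if (issue == issueJ) hasJ = true;
            }
            if (hasI && hasJ) count++;
        }
        return count;
    }
}
```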

Fig. 9 Example of an interdependency graph (50 issues)

Agents capture their issue interdependency information in the form of interdependency graphs, i.e., weighted non-directed graphs in which a node represents an issue, an edge represents an interdependency between issues, and the weight of an edge represents the interdependency rate between those issues. An interdependency graph is thus formally defined as:

$$\begin{aligned} G(P,E,w): P=\{1, 2, \ldots , |I|\}({\text {finite~set}}),\end{aligned}$$
$$\begin{aligned} E\subset \{\{x,y\}| x,y \in P\}, w:E \rightarrow R. \end{aligned}$$

Figure 9 shows an example of an interdependency graph.

The objective function of the proposed protocol can be described as follows:

$$\begin{aligned} \arg \max _{\vec{s}} \sum _{a\in N} u_{a}(\vec{s}). \end{aligned}$$
(1)
$$\begin{aligned} \arg \max _{\vec{s}} u_a(\vec{s}),~(a=1,\ldots ,N). \end{aligned}$$
(2)

This protocol, in other words, tries to find contracts that maximize social welfare, i.e., the summed utilities for all agents. Such agreements, by definition, will also be Pareto-optimal. At the same time, all agents try to find contracts that maximize their own welfare.

6.2 Decomposing the Contract Space

6.2.1 Analyzing Issue Interdependency

The first step is for each agent to generate an interdependency graph by analyzing interdependencies in its own utility space.

6.2.2 Grouping Issues

In this step, the mediator employs a breadth-first search to combine the issue clusters submitted by each agent into a consolidated set of issue groups. For example, if Agent 1 submits the clusters \(\{i_1, i_2\}, \{i_3,i_4,i_5\},\) \(\{i_0,i_6\}\) and Agent 2 submits the clusters \(\{i_1,i_2, i_6\},\{i_3, i_4\},\{i_0\},\{i_5\}\), the mediator combines them to produce the issue groups \(\{i_0,i_1,i_2,i_6\},\) \(\{i_3,i_4,i_5\}\). In the worst case, if all the issue clusters submitted by the agents have overlapping issues, the mediator generates the union of the clusters from all the agents. The details of this algorithm are given in Algorithm 4.

Algorithm 4 Combine_IssueGroups(G): takes the set of agents and the set of issue groups submitted by each agent and computes the consolidated set of issue groups SG
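Algorithm 4 itself is not reproduced here; the following hypothetical Java sketch shows one straightforward way to perform the merge illustrated by the example above, by repeatedly unioning submitted clusters that share at least one issue. It is an illustration, not necessarily identical to Algorithm 4.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

final class CombineIssueGroupsSketch {
    /** Merges all submitted clusters that share at least one issue into consolidated groups. */
    static List<Set<Integer>> combine(List<Set<Integer>> submittedClusters) {
        List<Set<Integer>> groups = new ArrayList<>();
        for (Set<Integer> cluster : submittedClusters) {
            Set<Integer> merged = new HashSet<>(cluster);
            // Absorb every existing group that overlaps the newly added cluster.
            List<Set<Integer>> remaining = new ArrayList<>();
            for (Set<Integer> g : groups) {
                boolean overlaps = false;
                for (Integer issue : g) if (merged.contains(issue)) { overlaps = true; break; }
                if (overlaps) merged.addAll(g); else remaining.add(g);
            }
            remaining.add(merged);
            groups = remaining;
        }
        return groups;
    }
}
```

Applied to the clusters from the example (Agent 1: \(\{i_1,i_2\},\{i_3,i_4,i_5\},\{i_0,i_6\}\); Agent 2: \(\{i_1,i_2,i_6\},\{i_3,i_4\},\{i_0\},\{i_5\}\)), this produces the issue groups \(\{i_0,i_1,i_2,i_6\}\) and \(\{i_3,i_4,i_5\}\).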

Gathering all of the agents’ interdependency graphs in one central place and then finding the issue groups using standard clustering techniques is possible. However, it is difficult to determine the optimal number of issue groups or the clustering parameters using central clustering algorithms because the basis of clustering can differ for each agent. The proposed approach avoids these weaknesses by requiring that each agent generate its own issue clusters. In the experiments, agents used the well-known Girvan-Newman algorithm [69], which computes clusters in weighted non-directed graphs. The algorithm’s output can be controlled by changing the “number of edges to remove” parameter. Increasing the value of this parameter increases the number of issue dependencies that are ignored when calculating the issue clusters, thereby producing a larger number of smaller clusters. The running time of this algorithm is \(\mathcal {O}(kmn)\), where k is the number of edges to remove, m is the total number of edges, and n is the total number of vertices.

6.2.3 Finding Agreements

A distributed variant of simulated annealing (SA) [58] is used to find optimal contracts in each issue group. In each round, the mediator proposes an agreement that is a random single-issue mutation of the most recently accepted contract (the accepted contract is initially generated randomly). Each agent then votes to accept (+2), weakly accept (+1), weakly reject (−1), or reject (−2) the new contract, depending on whether it is better or worse than the last accepted contract for that issue group. When the mediator receives these votes, it adds them together. If the sum of the vote values from the agents is positive or zero, the proposed contract becomes the currently accepted one for that issue group. If the vote sum is negative, the mediator will accept the agreement with probability \(P({\text {accept}}) = e^{\Delta U/T}\), where T is the mediator’s virtual temperature (which declines over time) and \(\Delta U\) is the utility change between the contracts. In other words, at higher virtual temperatures and for smaller utility decrements, an inferior agreement is more likely to be accepted. If the proposed contract is not accepted, a mutation of the most recently accepted contract is proposed in the next round. This continues over many rounds. This technique allows the mediator to skip past local optima in the utility functions, especially earlier in the search process, in the pursuit of global optima.

Algorithm 5 Simulated_Annealing(): the mediator sums the numeric values mapped from the agents’ votes, sets the initial solution randomly, and at each round proposes a randomly selected successor of the current contract, accepting it based on the vote sum and the acceptance probability
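The following hypothetical Java sketch captures just the mediator's acceptance rule described above (a non-negative vote sum accepts immediately; otherwise the proposal is accepted with probability \(e^{\Delta U/T}\)). The vote encoding and the way \(\Delta U\) is obtained in practice are assumptions made for this illustration.

```java
import java.util.Random;

final class VoteBasedAnnealingSketch {
    /**
     * Decides whether the mediator adopts a proposed sub-contract, given the agents' vote
     * values (+2 accept, +1 weakly accept, -1 weakly reject, -2 reject), the utility change
     * deltaU between the contracts (negative for a worse contract, as described in the text),
     * and the current virtual temperature t.
     */
    static boolean acceptProposal(int[] votes, double deltaU, double t, Random rnd) {
        int voteSum = 0;
        for (int v : votes) voteSum += v;
        if (voteSum >= 0) return true;                  // accept on a non-negative vote sum
        if (t <= 0.0) return false;                     // no uphill jumps once fully cooled
        return rnd.nextDouble() < Math.exp(deltaU / t); // accept a worse contract probabilistically
    }
}
```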

6.2.4 Exaggerator Agents

Any voting scheme introduces the potential for strategic non-truthful voting by the agents, and the proposed method is no exception. For example, one of the agents may always vote truthfully, whereas another exaggerates so that its votes are always strong. It has been shown that this biases the negotiation outcomes to favor the exaggerator at the cost of reduced social welfare [36]. An enhancement of the negotiation protocol is necessary that prevents exaggerated votes and maximizes social welfare.

Simply limiting the number of strong votes by each agent can work well, but the limit must be chosen carefully. If the limit is very low, the benefit of the vote weight information is effectively lost and lower social welfare values are obtained. If the limit is set too high, all an exaggerator has to do is save its strong votes until the end of the negotiation, at which point it can drag the mediator toward making a series of proposals that are inequitably favorable to it. The experiments demonstrate that limiting the number of strong votes is effective for finding high-quality solutions.

6.3 Experiments

6.3.1 Experimental Setting

Several experiments were conducted to evaluate the proposed approach. In each experiment, 100 negotiations were run using the following parameters. The domain for the issue values was [0, 9]. Constraint-based utility functions were employed. Each agent had 10 unary constraints, 5 binary constraints, 5 ternary constraints, and so on (a unary constraint is related to one issue, a binary constraint is related to two issues, and so on). The maximum weight for a constraint was 100 \(\times \) (Number of issues).

Fig. 10 Issue interdependencies in the experiments: example interdependency graphs for the sparse connection and dense connection cases, together with plots of the number of issues versus the sum of the weights of the connections each issue has to other issues

Each agent’s issues were organized into ten small clusters with strong dependencies between the issues within each cluster. Two conditions were then run: Sparse Connections and Dense Connections. Figure 10 gives examples of inter-dependency graphs and the relationship between the number of issues and the sum of the connection weights between issues for these two cases. As these graphs show, the Sparse Connections case is closer to a scale-free distribution with power-law statistics, whereas the Dense Connections condition is closer to a random graph.

The following negotiation methods were compared:

  • “(A) Issue Grouping (True Voting):” SA is applied based on the agents’ votes, and negotiation is performed separately for each issue group. The resulting sub-agreements are combined to produce the final agreement. All agents make truthful votes.

  • “(B) Issue-Grouping (Exaggerator Agents):” SA is applied based on the agents’ votes with issue grouping. All the agents make exaggerated votes.

  • “(C) Issue-Grouping (Limitation):” This is the same as (B) except that a limit on strong votes is applied. The maximum number of strong votes is 250, which was the optimal limit in these experiments.

  • “(D) Without Issue-Grouping:” This is the method presented in [36], which applies SA based on the agents’ votes without generating issue groups.

In all these cases, the search began with a randomly generated contract, and the SA initial temperature was 50.0 and decreased linearly to 0 throughout the negotiation. In (D), the search process involved 500 iterations. In (A)–(C), the search process involved 50 iterations for each issue group. Therefore, all the cases used the same computation time and are thus directly comparable. In all cases, the number of edges removed from the issue inter-dependency graph when the agents were calculating their issue groups was six.

The centralized SA was applied to the sum of the individual agent’s utility functions to approximate the optimal social welfare for each negotiation test run. An exhaustive search was not a viable option because it becomes computationally intractable as the number of issues grows. The SA initial temperature was 50.0 and decreased linearly to 0 throughout 2,500 iterations. The initial contract for each SA run was randomly selected. A normalized optimality rate was calculated for each negotiation run, defined as (Social welfare achieved by each protocol)/(Optimal social welfare calculated by SA).

The code was implemented in Java 2 (1.6) and was run on a \({\text {Core}}^{TM}\) 2 Duo CPU with 2.0 GB of memory under Mac OS X 10.6.

Fig. 11 Comparison of optimality rate versus the number of issues in the sparse connection and dense connection cases

Fig. 12 Comparison of optimality rate versus the number of agents in the sparse connection and dense connection cases

6.3.2 Experimental Results

Figures 11 and 12 compare the optimality rates in the Sparse Connections and Dense Connections cases. (A) achieved a higher optimality rate than (D), which means that the issue-grouping method produces better results for the same amount of computational effort. The optimality rate of (A) decreased as the number of issues (and therefore the size of the search space) increased. (B) performed worse than (A) because the exaggerator agents reduced the social welfare in multi-agent situations. However, (C) outperformed (B); therefore, limiting the number of strong votes is effective for counteracting the reduction in social welfare caused by the exaggerator agents.

The optimality rates for all methods were almost unaffected by the number of agents, as Fig. 12 shows. The optimality rate for (A) is higher than that for (D) in the Sparse Connections case; this is also true in the Dense Connections case, but to a lesser degree. This is because the issue-grouping method can achieve high optimality if the number of ignored inter-dependencies is low, which is more likely to be true in the Sparse Connections case. Sparse issue inter-dependencies characterize many real-world negotiations.

Fig. 13 Number of edges to be progressively removed (clustering parameter) vs. QF for the central method and the decentralized method

A quality factor measure, QF = (Sum of internal weights of edges in each issue group)/(Sum of external weights of edges in each issue group), is also assessed to evaluate the quality of the issue groups, i.e., the extent to which issue dependencies occur only between issues in the same group rather than between issues in different groups. A higher quality factor should increase the advantage of the issue-grouping protocols because fewer dependencies are ignored when negotiation is done separately for each issue group. Figure 13 shows the quality factors when the number of agents is 3 and 20 as a function of the number of edges to be removed, which is the key parameter in the clustering algorithm; in this example, the number of issues is 50 in the Sparse Connections case. In the (a) Central Method, all the agents’ inter-dependency graphs are gathered in one central place, and then the issue groups are identified using the well-known Girvan-Newman algorithm [69]. In the (b) Decentralized Method, a breadth-first search is employed to combine the issue clusters submitted by each agent into a consolidated set of issue groups.

A comparison of (a) with (b) in Fig. 13 reveals that the decentralized method outperforms the central method. This is because, in the decentralized method, all the agents’ issues are included in the final issue grouping without a fixed clustering parameter. QF becomes smaller as the number of edges to be progressively removed grows larger, because the number of issue groups generated by each agent increases with this parameter. A rapid decrease sometimes occurs as the number of edges to be progressively removed increases; such points indicate good parameter values for decomposing the issue groups. In real life, the agents’ utilities reflect an adequate concept of issue groups, and agents can determine the optimal issue groups by analyzing their utility spaces.
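For clarity, a hypothetical Java sketch of the QF computation (the names and the edge representation are assumptions) is:

```java
import java.util.List;
import java.util.Set;

final class QualityFactorSketch {
    /** A weighted edge of the interdependency graph. */
    static final class Edge {
        final int issueA, issueB; final double weight;
        Edge(int a, int b, double w) { issueA = a; issueB = b; weight = w; }
    }

    /**
     * QF = (sum of weights of edges whose endpoints lie in the same issue group)
     *    / (sum of weights of edges that cross between different groups).
     */
    static double qualityFactor(List<Set<Integer>> issueGroups, List<Edge> edges) {
        double internal = 0.0, external = 0.0;
        for (Edge e : edges) {
            boolean sameGroup = false;
            for (Set<Integer> group : issueGroups) {
                if (group.contains(e.issueA) && group.contains(e.issueB)) { sameGroup = true; break; }
            }
            if (sameGroup) internal += e.weight; else external += e.weight;
        }
        return external == 0.0 ? Double.POSITIVE_INFINITY : internal / external;
    }
}
```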

7 Conclusion and Future Work

7.1 Conclusion

The work described in this chapter makes numerous essential contributions to the state of the art in automated negotiation. The contributions of this work can be summarized as follows.

Section 2::

A model of nonlinear multi-issue negotiation and a bidding-based negotiation protocol (basic bidding) were described for multiple-issue negotiation among agents with highly nonlinear utility functions. Applying constraints produces a bumpy and highly nonlinear utility function. In the basic bidding protocol, agents generate bids by sampling their utility functions to find local optima and then use constraint-based bids to describe regions with large utility values for that agent compactly. These techniques make bid generation computationally tractable even in large utility spaces. A mediator then finds a combination of bids that maximizes social welfare.

Section 3::

A threshold adjustment mechanism for multi-issue negotiations among agents with nonlinear utility functions was proposed. A negotiation with interdependent issues in which the agents’ utility functions are nonlinear was assumed. Many real-world negotiation problems are complex and involve multiple interdependent issues. The concept of the revealed area was proposed, which represents the amount of utility information an agent reveals. Moreover, the threshold adjustment mechanism reduces the amount of private information each agent reveals. Additionally, this mechanism could reduce the computational cost of finding a deal with high optimality. Experimental results demonstrated that the threshold adjustment mechanism could reduce the computational cost while providing sufficient optimality.

Section 4::

A Distributed Mediator Protocol (DMP) was proposed, which can reach agreements while completely concealing agents’ utility information and achieving high scalability with respect to the utility space. Moreover, the Hybrid Secure Protocol (HSP) was proposed, which combines the DMP and the Take it or Leave it (TOL) Protocol. Experimental results demonstrated that the HSP could reduce the required memory while maintaining high optimality.

Section 5::

It was shown that the Nash bargaining solution, although optimal for negotiations with linear utilities, can lead to sub-optimal outcomes when applied to nonlinear negotiations. The Secure and Fair Mediator Protocol (SFMP) was proposed. This negotiation protocol uses a combination of nonlinear optimization, secure information sharing, and an approximated fairness metric. It was demonstrated that it achieves higher social welfare values than a protocol based on searching for the Nash bargaining solution. Finally, it was shown that the SFMP outperforms previous efforts to enable multi-lateral negotiations in complex domains.

Section 6::

A new negotiation protocol based on grouping issues, which can find high-quality agreements in inter-dependent issue negotiation, was proposed. In this protocol, agents privately generate their own issue inter-dependency graphs, the mediator identifies issue groups according to these graphs, and multiple independent negotiations proceed for each issue sub-group. It was demonstrated that the proposed protocol has greater scalability than those in previous works, and the incentive compatibility issues were analyzed.

7.2 Future Work

Future work includes building protocols to find Pareto-optimal contracts more quickly, making them more scalable and increasing fairness performance. One potential approach to this problem is to focus the search efforts of the mediators more closely on the fair portion of the Pareto frontier.

Another possible direction for future work is to analyze the negotiation protocol theoretically. Investigating the incentive compatibility issues can ensure that the protocol cannot be gamed by agents seeking to gain disproportionate influence or sabotage the outcomes. Enhancing the negotiation protocol so that it incentivizes truthful bidding can preserve equity and maximize social welfare. In the bilateral case, this can be done using a type of Clarke tax [70], wherein each agent has a limited budget from which it has to pay other agents before the mediator accepts a contract that favors that agent but reduces the utility for others. This approach incentivizes agents to avoid exaggeration because exaggeration would cause them to spend their limited budget on contracts that do not strongly affect their true utility values.

In this chapter, cardinal utilities in constraint-based utility functions were considered; however, other utility functions, based on cardinal and ordinal utilities, are essential for applying this work to real-world settings [9].