Distributed neurodynamic approaches to nonsmooth optimization problems with inequality and set constraints

In this paper, neurodynamic approaches are proposed for solving nonsmooth distributed optimization problems under inequality and set constraints, that is, for finding the solution that minimizes the sum of local cost functions. A continuous-time neurodynamic approach is designed whose state solution exists globally and converges to an optimal solution of the corresponding distributed optimization problem. Then, a neurodynamic approach with an event-triggered mechanism is considered to save communication costs, and its convergence and Zeno-free property are proved. Moreover, to facilitate practical implementation, a discrete-time neurodynamic approach is proposed for nonsmooth distributed optimization problems under inequality and set constraints, and it is rigorously proved that the iterative sequence it generates converges to the optimal solution set of the distributed optimization problem. Finally, numerical examples are solved to demonstrate the effectiveness of the proposed neurodynamic approaches, which are further applied to an ill-conditioned Least Absolute Deviation problem and a load sharing optimization problem.


Introduction
Distributed optimization problems over multi-agent systems have received much attention; the main principle of solving them is to optimize a global cost function through the cooperation of interrelated agents [31,41,45]. The cost function of a distributed optimization problem is generally the sum of local cost functions, each of which is known only to an individual agent. In particular, to protect the privacy of communication, agents in the multi-agent system only share particular information with their neighbors. Distributed optimization problems arise in fields such as traffic balance [41,45] and collaborative control, for example the maintenance of formations among multiple vehicles [31]. Considering the limitations of communication bandwidth and communication range in many applications, it becomes necessary to study distributed neurodynamic approaches that require only partial information exchange between neighbors. In fact, problems in the engineering field can usually be abstracted into mathematical models and then solved using corresponding neurodynamic approaches, such as [2,8,11,13,27,35].
In recent years, various neurodynamic approaches for solving distributed optimization problems have been published (see [11,13-15,27-29,36,46]). By means of collaborative control between agents, these neurodynamic approaches can not only protect privacy but also avoid server overload. For unconstrained distributed optimization problems, the authors in [36] constructed a neurodynamic approach under the assumption that the optimal solution set is compact, while the authors in [11] proposed a neurodynamic approach based on a strongly connected directed communication graph to weaken this assumption. Since unconstrained optimization problems are rare in practice, scholars pay more attention to distributed optimization problems with constraints [13,28,29,46]. The authors in [46] constructed a distributed gradient neurodynamic approach for the distributed optimization problem over a multi-agent system in which each agent has its own private constraint function, while the authors in [29] proposed a continuous-time projection neurodynamic approach in which the agents share the same constraint set. It should be noted that the neurodynamic approach in [46] requires the local cost functions to be twice continuously differentiable. To relax this differentiability hypothesis, the authors in [27] proposed a subgradient neurodynamic approach for nonsmooth distributed optimization problems, which was later developed into a subgradient projection neurodynamic approach in [28] to solve distributed optimization problems with consensus constraints. Inspired by the works above, the authors in [13] proposed a subgradient neurodynamic approach for solving the nonsmooth distributed optimization problem. In addition, the authors in [23] presented a primal-dual projection neurodynamic approach for nonsmooth distributed optimization problems under local bounded constraints.
Because instantaneous communication is an idealization, the authors in [21] considered the time delay in information exchange and designed a subgradient projection neurodynamic approach. Furthermore, by combining primal-dual methods for finding saddle points with projection operators for meeting set constraints, a continuous-time neurodynamic approach for nonsmooth constrained convex optimization was designed in [49].
All neurodynamic approaches mentioned above require that communication in the multi-agent system occur continuously in time. Therefore, to avoid the high energy consumption caused by frequent communication, and in keeping with the fact that each node is usually equipped with a limited amount of energy, a large number of neurodynamic approaches with event-triggered mechanisms have emerged (see [7,16,25,33,51]). Among them, the authors in [51] proposed an event-triggered neurodynamic approach to solve distributed optimization problems with equality constraints when the local objective functions are quadratic, and the authors in [33] designed a neurodynamic approach for second-order multi-agent systems with event-triggered and time-triggered communication. Furthermore, to cover a more general case, the authors in [25] designed a gradient neurodynamic approach with an event-triggered mechanism to solve distributed convex problems under set constraints.
Whether on computers or in practical applications, discrete-time neurodynamic approaches are more practical. For solving distributed optimization problems with different constraints, a variety of discrete-time neurodynamic approaches have been proposed. With the help of properties of the projection operator, the authors in [27] proposed a discrete-time neurodynamic approach for constrained nonsmooth optimization problems based on the general framework of parallel and distributed computing proposed in [34]. On the basis of [27], the authors in [21] proposed a subgradient neurodynamic approach for the same nonsmooth optimization problem; however, the approach in [21] requires the constraint set to be compact. To weaken this assumption, the authors in [37] introduced a discrete-time neurodynamic approach under switching topologies for distributed optimization problems with strongly convex local cost functions. Recently, the authors in [24] presented a discrete-time neurodynamic approach with a fixed step size and analyzed its convergence rate. Later, the authors in [38] proposed a subgradient neurodynamic approach with various step sizes. The authors in [50] further extended smooth local cost functions to sums of smooth convex functions and nonsmooth L1-norm functions.
However, most of the above-mentioned neurodynamic approaches rely on rather strict assumptions: for example, the local cost functions are smooth, or the constraint set has a simple structure or is absent altogether. To overcome these limitations, we propose neurodynamic approaches to solve the nonsmooth distributed optimization problem under inequality and set constraints. The detailed contributions are as follows.
1. Unlike existing neurodynamic approaches in which the Laplacian matrices of the communication graph are used to enforce consensus constraints (see [13,17,46]), one of the highlights of this paper is using the exact penalty method to deal with consensus constraints, which reduces the dimension of the solution space.
2. The continuous-time neurodynamic approach proposed herein has a better convergence property than the approaches in [6,18], because it ensures that state solutions converge to an optimal point of the distributed optimization problem rather than merely to the optimal solution set. Furthermore, the proposed continuous-time neurodynamic approach can solve the nonsmooth distributed optimization problem under inequality and set constraints, and it can be extended to solve the distributed optimization problems in [11,12,22,42].
3. Compared with the neurodynamic approaches under continuous communication in [11,13,29,46], the event-triggered neurodynamic approach designed in this paper saves communication energy between nodes and reduces the number of controller updates, thereby avoiding unnecessary consumption of network resources and improving bandwidth utilization. Moreover, the event-triggered neurodynamic approach can solve distributed optimization problems under inequality and set constraints, and hence more general problems than those in [25,33,51].
4. A discrete-time variable step-size neurodynamic approach is designed for nonsmooth convex distributed optimization problems with inequality and set constraints. As far as we know, the existing discrete-time neurodynamic approaches in [32,45,50] apply only to differentiable local cost functions or affine inequality constraints, so the proposed discrete-time neurodynamic approach can solve more general problems.
In addition, discrete-time neurodynamic approaches are easier to implement and apply than continuous-time neurodynamic approaches.
This article is organized as follows. "Preliminaries" lists some notations and basic knowledge about graph theory, convex analysis, etc. In "Problem description and equivalent form", the distributed optimization problem with inequality and set constraints is equivalently transformed into a problem with set constraints. In "Main results", a continuous-time distributed neurodynamic approach and an event-triggered neurodynamic approach are discussed, and the convergence of the two neurodynamic approaches is proved, respectively; moreover, a discrete-time neurodynamic approach is presented and its convergence is analyzed. "Simulations and applications" illustrates the effectiveness and performance of the neurodynamic approaches through numerical examples. The last section summarizes the paper and discusses future research directions.

Preliminaries
First of all, here are some terminologies and symbols that will be used. R^n is the set of n-dimensional real vectors, R^{m×n} is the set of m × n real matrices, and N is the set of natural numbers. x^T and ||x|| = √(x^T x) denote the transpose and norm of x ∈ R^n, respectively. B(x, δ) denotes the open ball centered at x ∈ R^n with radius δ > 0, and col{x, y} = [x^T, y^T]^T for x ∈ R^n and y ∈ R^m. 1_n is the n-dimensional vector whose components are all ones, and 0_n is the n-dimensional vector whose components are all zeros. The sign ⊗ stands for the Kronecker product. For two sets A, B ⊆ R^n, int(A) and bd(A) denote the interior and the boundary of A, A + B = {x + y : x ∈ A, y ∈ B}, and A + x_0 = {x + x_0 : x ∈ A}.
Next, some basic definitions from graph theory are as follows. Let G = (V, E) be the communication topology of a multi-agent system consisting of N agents, where V = {1, 2, . . . , N} and E ⊆ V × V are the node set and edge set, respectively. If (i, j) ∈ E, node j is a neighbor of node i. A = (a_ij)_{N×N} is a weighted adjacency matrix of G if a_ij ≥ 0, with a_ij > 0 exactly when (i, j) ∈ E. If there exist distinct nodes i_1, . . . , i_k ∈ V such that L = {(i, i_1), (i_1, i_2), . . . , (i_k, j)} ⊆ E, then i and j are connected and L is a path between nodes i and j. Furthermore, the communication topology G is called undirected if its weighted adjacency matrix is symmetric, and an undirected graph G is called connected if there exists a path between any pair of nodes.
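As a small illustration of these graph-theoretic notions (a hypothetical helper, not part of the paper), the following Python sketch checks connectedness of an undirected graph from its weighted adjacency matrix by breadth-first search:

```python
from collections import deque

def is_connected(A):
    """Check connectivity of an undirected graph given its (symmetric)
    weighted adjacency matrix A, via breadth-first search from node 0."""
    n = len(A)
    seen = {0}
    queue = deque([0])
    while queue:
        i = queue.popleft()
        for j in range(n):
            # a_ij > 0 exactly when (i, j) is an edge
            if A[i][j] > 0 and j not in seen:
                seen.add(j)
                queue.append(j)
    return len(seen) == n

# A 4-node ring: undirected and connected
ring = [[0, 1, 0, 1],
        [1, 0, 1, 0],
        [0, 1, 0, 1],
        [1, 0, 1, 0]]
```

Connectedness of the ring follows because every pair of nodes is joined by a path, matching the definition above.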
Subsequently, several necessary definitions and propositions of convex analysis and projection operators are given.
Let C ⊆ R^n be a nonempty closed convex set; the projection operator onto C is the function P_C : R^n → C defined by P_C(x) = arg min_{y∈C} ||x − y||.

Proposition 2 [27] For the projection operator P_C, it follows that: 1) ⟨x − P_C(x), P_C(x) − y⟩ ≥ 0 for any x ∈ R^n and y ∈ C; 2) ||P_C(x) − P_C(y)|| ≤ ||x − y|| for any x, y ∈ R^n.

Definition 3 Suppose that C ⊆ R^n is a nonempty closed convex set. The normal cone of C at x ∈ C, denoted by N_C(x), is defined as N_C(x) = {v ∈ R^n : ⟨v, y − x⟩ ≤ 0 for all y ∈ C}.

Proposition 3 [10] Suppose that C, C_1, C_2 ⊆ R^n are nonempty closed convex sets.
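The projection operator and its basic properties can be illustrated numerically. The sketch below is an assumption-laden example (C is taken as a closed Euclidean ball, which is not a set used in the paper); it computes P_C and can be checked against the standard variational characterization ⟨x − P_C(x), y − P_C(x)⟩ ≤ 0 for all y ∈ C:

```python
import numpy as np

def project_ball(x, center, radius):
    """P_C(x) for the closed ball C = {y : ||y - center|| <= radius}:
    the unique point of C nearest to x; points already in C are fixed."""
    d = np.asarray(x, float) - center
    dist = np.linalg.norm(d)
    if dist <= radius:
        return np.asarray(x, float)
    # Scale the offset back onto the boundary of the ball
    return center + radius * d / dist

c = np.zeros(2)
p = project_ball(np.array([3.0, 4.0]), c, 1.0)  # lands on the boundary
q = project_ball(np.array([0.2, 0.1]), c, 1.0)  # interior point unchanged
```

Here p = (0.6, 0.8), the boundary point on the ray toward (3, 4), while q is returned unchanged, consistent with nonexpansiveness of P_C.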

Problem description and equivalent form
In this section, we consider the following constrained optimization problem:

min f(x) = Σ_{i=1}^N f_i(x)
s.t. ḡ(x) ≤ 0_m, x ∈ Ω,   (1)

where x ∈ R^n is the decision vector, each f_i : R^n → R is convex but not necessarily smooth, the stacked constraint function ḡ(x) = col{g_1(x), . . . , g_N(x)} has convex components, and Ω ⊆ R^n is a bounded closed convex set. Without loss of generality, the optimal solution set of optimization problem (1) is assumed to be nonempty.
Remark 1 Optimization problems have been discussed by [11][12][13]17,22,42,46] and so on, since they can be applied in various engineering and control areas. Actually, the optimization problem discussed here is under inequality and set constraints, so optimization problem (1) has a wider range of applications than problems discussed in [12,22] and [42].
Assumption 1 contains common preconditions that have been used in [26,48]. It is easy to see that the feasible region Ξ ∩ Ω of optimization problem (1) is a bounded set, which implies that there exists a positive number L such that

||ξ|| ≤ L, ||η|| ≤ L,   (2)

for any ξ ∈ ∂f(x), η ∈ ∂ḡ(x), x ∈ Ξ ∩ Ω. Besides, define the penalty function

D(x) = Σ_{l=1}^m max{0, ḡ_l(x)},

where ḡ_l(x) is the lth component of ḡ(x) and g_ij(x) means the jth component of g_i(x). Due to the convexity of g_i(x), it can be concluded that D(x) is convex. In particular, a closed form for ∂D(x) can be calculated [43]:

∂D(x) = Σ_{l∈I_+(x)} ∂ḡ_l(x) + Σ_{l∈I_0(x)} [0, 1] ∂ḡ_l(x),

where I = {1, 2, . . . , m}, I_0(x) = {l ∈ I : ḡ_l(x) = 0} and I_+(x) = {l ∈ I : ḡ_l(x) > 0}. Let ĝ > 0 be the constant given by (4). Inspired by the exact penalty method mentioned in [20] and [44], we can set up an equivalent transformation of optimization problem (1) as follows.

Theorem 1 Under Assumption 1, x* ∈ R^n is an optimal solution of optimization problem (1) if and only if x* is an optimal solution of the following optimization problem:

min_{x∈Ω} f(x) + σ D(x),   (5)

where the penalty parameter σ > LM/ĝ, and M, L, ĝ are from Assumption 1, (2) and (4), respectively.
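To make the exact-penalty construction concrete, here is a minimal Python sketch (an illustration, not the paper's implementation) assuming the standard exact-penalty form D(x) = Σ_l max{0, ḡ_l(x)}, together with the index sets I_0(x) and I_+(x) that appear in the closed form of ∂D(x):

```python
import numpy as np

def penalty_D(g_vals):
    """Exact-penalty value D(x) = sum_l max{0, g_l(x)} for the stacked
    constraints g(x) <= 0; D(x) = 0 exactly on the feasible region."""
    g = np.asarray(g_vals, float)
    return float(np.maximum(0.0, g).sum())

def index_sets(g_vals, tol=1e-12):
    """Active set I0(x) = {l : g_l(x) = 0} and violated set
    I+(x) = {l : g_l(x) > 0}, up to a numerical tolerance."""
    g = np.asarray(g_vals, float)
    I0 = [l for l in range(len(g)) if abs(g[l]) <= tol]
    Ip = [l for l in range(len(g)) if g[l] > tol]
    return I0, Ip
```

For constraint values (−1, 0, 2, 0.5), only the positive entries contribute to D, and the subdifferential formula mixes the subgradients of the active (zero) and violated (positive) components.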
Proof Let x * be an optimal solution of optimization problem (1), then due to Proposition 3, it derives that there exists Next, the proof will be divided into cases as follows.
It is also known by convexity that x* is an optimal solution of optimization problem (5).
Thus, x* is an optimal solution of optimization problem (1).

Now, we consider a multi-agent system composed of N agents, whose communication topology is G = (V, E) with V = {1, 2, . . . , N}. For i ∈ V, let x_i be the decision variable of agent i and N_i = {j ∈ V : (i, j) ∈ E} the set of neighbors of agent i; then the following lemma is essential to the equivalent transformation of optimization problem (5).

Assumption 2 The communication graph between agents is undirected and connected.
Lemma 1 [20] The optimization problem (5) can be written as the following distributed optimization problem with communication graph G:

Remark 2
The problem (12) is a typical distributed optimization problem, which aims to find the optimal solution to global cost function through cooperation between agents.
Finding the solution to distributed optimization problems has aroused great interest among scholars due to its privacy protection, intelligence, and flexibility; see, e.g., [13,17,46]. Those papers ensure that the consensus constraints hold by introducing the Laplacian matrices of communication graphs, whereas we equivalently transform the optimization problem with consensus constraints by the exact penalty method, as shown in the next theorem, which reduces the dimension of the solution space.
Under Assumption 1, the estimate (2) applies. Therefore, based on the following theorem, neurodynamic approaches for solving the distributed optimization problem are proposed in the next section.
Theorem 2 [39] Under Assumptions 1 and 2 as well as

Main results
In this section, a continuous-time neurodynamic approach, an event-triggered neurodynamic approach, and a discrete-time neurodynamic approach are proposed to solve the distributed optimization problem with inequality and set constraints.

Continuous-time neurodynamic approach
In this subsection, we propose a projection subgradient neurodynamic approach for the nonsmooth convex distributed optimization problem (13) as follows:

ẋ(t) ∈ −x(t) + P_Ω(x(t) − ∂h(x(t))), x(0) = x_0 ∈ Ω,   (14)

where h(x) is defined in (13). Noting the structure of ∂h(x), the neurodynamic approach for agent i ∈ V can be rewritten componentwise as (15).
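As an illustration only (not the paper's circuit implementation), the following Python sketch integrates projected dynamics of the assumed general form ẋ = −x + P_Ω(x − s(x)), s(x) a subgradient of h, with a forward-Euler scheme. The toy data are hypothetical: h(x) = |x − 1| on Ω = [0, 3], whose minimizer is x* = 1.

```python
import numpy as np

def euler_flow(subgrad, project, x0, dt=1e-2, steps=2000):
    """Forward-Euler discretization of dx/dt = -x + P_Omega(x - s(x))."""
    x = float(x0)
    for _ in range(steps):
        # One Euler step of the projected subgradient dynamics
        x += dt * (project(x - subgrad(x)) - x)
    return x

# Hypothetical toy problem: h(x) = |x - 1| on Omega = [0, 3]
sg = lambda x: float(np.sign(x - 1.0))      # a subgradient of |x - 1|
pj = lambda z: float(np.clip(z, 0.0, 3.0))  # projection onto [0, 3]
```

Starting from x_0 = 2.5 inside Ω, the Euler trajectory drifts to the nonsmooth minimizer and then chatters within O(dt) of it, mirroring the viability and convergence behavior proved for (14).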

Remark 3
In detail, the projection operator P_Ω in (15) ensures that the state solution stays in Ω, the subgradient term of the local cost drives the state solution toward an optimal solution, and the penalty term makes the state solutions reach consensus. To explain neurodynamic approach (15) more clearly, its circuit diagram is shown in Fig. 1.
In fact, the subgradients of f i and g i (i ∈ V) are represented by piecewise functions.
Actually, the combination of projection method and subgradient method is very common when solving distributed optimization problems, such as [24,30,48,49].
Proof According to Propositions 1 and 2 and Assumption 1, it can be concluded that h(x) is bounded from below on R^{Nn} and Lipschitz on any bounded subset of Ω. Thus, by applying Theorem 5.2 in [5], there exists a local solution x(t) of neurodynamic approach (14) on [0, T). By the definition of differential inclusion, there is γ(t) ∈ ∂h(x(t)) for a.e. t ∈ [0, T) realizing the dynamics. Inspired by the proof of Lemma 2.4 in [5], integrating both sides and using ∫_0^t e^s/(e^t − 1) ds = 1 together with x_0 ∈ Ω, it is easy to deduce that x(t) ∈ Ω.

Theorem 4 Under Assumptions 1 and 2, if σ > max{LM/ĝ, NL + NL(NL + 2)}, the state solution x(t) of neurodynamic approach (14) with initial value x(0) = x_0 exists globally and converges to an optimal solution of distributed optimization problem (13); that is, the limits of the agents' states are optimal solutions of optimization problem (1).
Proof Let x* = col{x_1*, x_2*, . . . , x_N*} be an optimal solution of distributed optimization problem (13), and let x_i(t) (i ∈ V) be the state solution of neurodynamic approach (14) from initial value x_i(0). Then, according to the definition of differential inclusion, there exist measurable selections of the subdifferentials such that (15) holds for a.e. t ∈ [0, T). Construct a function H(·) as follows. It can be seen from Proposition 2 that its derivative along the trajectories is nonpositive. Therefore, from x(t) ∈ Ω and (21), it can be obtained that 0 ≤ H(x(t)) ≤ H(x(0)) for t ≥ 0; then x(t) is bounded. According to Theorem 3, x(t) exists globally by the extension theorem of solutions in [1].

Define another function
Letting t → +∞, it follows that there exists a subsequence {t_k} with t_k → +∞ along which the relevant terms vanish. Due to 2) in Assumption 1 and the boundedness of {x(t_k)}, a limit point x̄ = col{x̄_1, x̄_2, . . . , x̄_N} exists and belongs to Ω, since Ω is closed. Next, we prove that x̄ is an optimal solution of problem (13). From (23) and the properties of P_Ω(·), the corresponding variational inequalities hold for all y_i ∈ Ω. Taking the limit and combining with (24), we obtain −ξ̄_i − ση̄_i − σ²ζ̄_i ∈ N_Ω(x̄_i) by Definition 3. Therefore, x̄ = col{x̄_1, x̄_2, . . . , x̄_N} is an optimal solution of distributed optimization problem (13).

Remark 4
The continuous-time neurodynamic approach proposed in this subsection can be used to solve nonsmooth distributed optimization problems with inequality and set constraints, and its state solution converges to an optimal solution. Thus, it has a better convergence property than the approaches in [6] and [18], in which the state solution only converges to the optimal solution set. In addition, the proposed continuous-time neurodynamic approach can be extended to solve the distributed optimization problems in [9,11,13,17,46]. See Table 1 for details.

Event-triggered neurodynamic approach
It must be pointed out that in the process of solving large-scale optimization problems, the information transmission of agents may consume a lot of energy. The addition of an event-triggered mechanism can greatly reduce energy consumption and save communication costs, as studied in [16,25,51]. Therefore, in this subsection, a distributed event-triggered neurodynamic approach is presented for solving distributed optimization problems, and the corresponding event-triggered condition is designed by the Lyapunov approach. Before introducing the event-triggered mechanism, the following theorem is needed.

Theorem 5 Assume that p(x) is a strongly convex function and x* is the optimal solution of the following optimization problem:

min p(x) s.t. ϕ(x) = 0, x ∈ Ω,   (26)

where Ω ⊆ R^n is a compact set. Let x^[ν] be the optimal solution of the penalized problem (27); then lim_{ν→+∞} x^[ν] = x*.

Proof According to the penalty method mentioned in [4], if x* is the optimal solution of the constrained optimization problem (26) and y^[ν] is the optimal solution of the corresponding unconstrained penalized problem for any ν ∈ N and α > 0, then lim_{ν→+∞} y^[ν] = x*. Since x^[ν] is the optimal solution of optimization problem (27), from the definitions of G^[ν](x), F^[ν](y) and x^[ν], we obtain the corresponding comparison inequalities. Thus lim_{ν→+∞} ||ϕ(x^[ν])||² = 0, which implies ϕ(x̄) = 0 for any limit point x̄ of {x^[ν]}. Letting ν → +∞, it follows that p(x̄) = p(x*). From the strong convexity of p(x), it is easy to get that lim_{ν→+∞} x^[ν] = x̄ = x*.
According to Theorem 5, we construct a new distributed optimization problem (29), where f_i(·), G_i(·) (i ∈ V) and Ω are from the distributed optimization problem (12). Here, we further assume that each f_i(·) is differentiable and strongly convex, so H^[ν](·) is a differentiable strongly convex function for any ν ∈ N. Thus, distributed optimization problem (29) has a unique optimal solution, denoted by x^[ν]. Now, a neurodynamic approach with an event-triggered mechanism is proposed as (30), where t_k^(i) ∈ [0, +∞) (k ∈ N) is the kth triggering instant of agent i, β_i is a positive parameter, and H_i^[ν](·) denotes the local component of H^[ν](·) associated with agent i.

Remark 5
In fact, an event-triggered algorithm consists of two parts: a neurodynamic algorithm and an event-triggering mechanism; our design combines ideas from the published literature on event-triggering mechanisms in [7,16,25,33,51]. Zeno behavior refers to the phenomenon in which the system triggers infinitely many events within a finite time. It is well known that Zeno behavior is undesirable for control implementation, since physical devices cannot sample infinitely fast. Therefore, it is essential to determine whether an event-triggered neurodynamic approach exhibits Zeno behavior.
Definition 4 [19] Under an event-triggered neurodynamic approach, agent i (i ∈ V) is said to be Zeno if lim_{k→+∞} t_k^(i) = T_0, where T_0 is a finite constant, and Zeno-free otherwise. Moreover, the event-triggered neurodynamic approach is said to be Zeno if some agent is Zeno, and Zeno-free otherwise.

Remark 6
A crucial reason for the popularity of event-triggered neurodynamic approaches is that they reduce unnecessary interaction and, at the same time, save limited network resources. More specifically, information interaction occurs only when the triggering condition is met. Inspired by [47], an event-triggering condition can be designed based on Lyapunov functions to ensure convergence of the neurodynamic approach on the one hand and to prevent Zeno behavior on the other.

Assumption 3 The gradient ∇H_i^[ν](·) is Lipschitz continuous on Ω with constant P_i^[ν] for each i ∈ V.
Therefore, if the initial value x_i(0) ∈ Ω, it can be obtained that x_i(t) ∈ Ω. Let x* = col{x_1*, x_2*, . . . , x_N*} and x^[ν] = col{x_1^[ν], x_2^[ν], . . . , x_N^[ν]} be the optimal solutions of distributed optimization problems (13) and (29), respectively, and let e_i(t) = x_i(t) − x_i(t_k^(i)) be the measurement error of agent i ∈ V. Then, we specify the event-triggered condition (31) for storing or updating the sampled data.
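The storing/updating logic of an event-triggered scheme can be sketched as follows. This is a generic illustration with a hypothetical static threshold on the measurement error ||e_i(t)||, not the Lyapunov-based condition (31), and a simple gradient flow toward the origin stands in for the neurodynamics:

```python
import numpy as np

def event_triggered_run(grad, x0, dt=1e-3, steps=4000, threshold=0.05):
    """Event-triggered gradient flow sketch: the broadcast state is
    refreshed (an 'event') only when the measurement error
    ||x(t) - x(t_k)|| exceeds a threshold; between events the last
    broadcast value is reused, saving communication."""
    x = np.asarray(x0, dtype=float)
    x_broadcast = x.copy()
    events = 0
    for _ in range(steps):
        if np.linalg.norm(x - x_broadcast) > threshold:
            x_broadcast = x.copy()  # triggering instant t_k
            events += 1
        # Dynamics driven by the (possibly stale) broadcast state
        x = x - dt * grad(x_broadcast)
    return x, events
```

With grad(z) = z (the gradient of the hypothetical cost ½||z||²), the state still decays toward the minimizer while the number of events stays far below the number of integration steps, which is the communication saving the mechanism is designed for.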

Theorem 6
If Assumptions 1-3 hold and x_0 ∈ Ω, then the state solution x(t) of event-triggered neurodynamic approach (30) converges to x^[ν] when the parameters β_i and P_i^[ν] satisfy β_i P_i^[ν] > 3 + √10 for each agent i ∈ V. Furthermore, the event-triggered neurodynamic approach (30) is Zeno-free.

Proof Consider a Lyapunov function
where H^[ν](·) is defined in (29). Since x^[ν] is the optimal solution of distributed optimization problem (29), the Lyapunov function is nonnegative. According to 1) in Proposition 2, and based on Assumption 3 and the Cauchy-Schwarz inequality, the derivative of the Lyapunov function along the trajectories is nonpositive whenever β_i P_i^[ν] > 3 + √10. Then, the Lyapunov stability theorem and the strong convexity of H^[ν](·) imply that x(t) converges to x^[ν], the optimal solution of (29). Next, since e_i(t_k^(i)) = 0, from event-triggering condition (31) we can get that the next event will not be triggered before q_i(t) = 0; thus, the inter-event times are bounded below by a positive constant. From Theorem 5, the following corollary can be drawn.

Corollary 1 The state solution x(t) of event-triggered neurodynamic approach (30) converges to the optimal solution x* of (13) when ν is large enough and the parameters β_i and P_i^[ν] satisfy β_i P_i^[ν] > 3 + √10 for each agent i ∈ V. Furthermore, the event-triggered neurodynamic approach (30) is Zeno-free.

Remark 7
The event-triggering mechanism is outstanding in alleviating bandwidth pressure and saving energy consumption, because agents do not need to communicate all the time, as required in [11,13,29,46], but instead communicate their local information intermittently. In addition, compared with the works in [25,33,51], the event-triggered neurodynamic approach (30) in this paper can solve distributed optimization problems under inequality and set constraints.

Discrete-time neurodynamic approach
Since solving a distributed optimization problem relies on information exchange between agents in the multi-agent system, and the communication of agents is essentially a discrete process in actual operation, studies on discrete-time neurodynamic approaches have also received widespread attention. In this subsection, the discrete-time neurodynamic approach in [3] is extended for solving the nonsmooth convex distributed optimization problem (13) under the assumptions mentioned in Sect. 4.1.
Approach I: Discrete-time neurodynamic approach.

Return to Step 2 with k replaced by k + 1.

Here, the step sizes α_k > 0 satisfy Σ_{k=1}^{+∞} α_k = +∞ and Σ_{k=1}^{+∞} α_k² < +∞, and h(x) is defined in distributed optimization problem (13). Let {x_k} and {τ_k} be the sequences generated by Approach I, and let O be the optimal solution set of distributed optimization problem (13). Since x_k ∈ Ω, it is obvious that if τ_k = 0, then x_k ∈ O.
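A minimal sketch of a projected subgradient iteration with diminishing, non-summable steps, in the spirit of Approach I (the toy problem h(x) = |x| on Ω = [−1, 1] is hypothetical, and the full Approach I, including the quantity τ_k, is not reproduced):

```python
import numpy as np

def approach_sketch(subgrad, project, x0, iters=20000):
    """Projected subgradient iteration with diminishing steps
    alpha_k = 1/(k+1): x_{k+1} = P_Omega(x_k - alpha_k * s_k)."""
    x = float(x0)
    for k in range(iters):
        # Steps satisfy sum(alpha_k) = inf and sum(alpha_k^2) < inf
        x = project(x - subgrad(x) / (k + 1.0))
    return x

# Hypothetical toy instance: h(x) = |x| on Omega = [-1, 1]; O = {0}
sg = lambda x: float(np.sign(x))             # a subgradient of |x|
pj = lambda z: float(np.clip(z, -1.0, 1.0))  # projection onto [-1, 1]
```

The iterates chatter around the nonsmooth minimizer with amplitude on the order of the current step size, so with diminishing steps the sequence settles into the optimal set, matching the cluster-point result proved below.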

Remark 8
Approach I can be viewed as the discretization of continuous-time neurodynamic approach (14) in the time dimension. Discrete-time neurodynamic approaches are well suited to generators equipped with embedded digital microprocessors, which is why they are easy to apply in practice. Besides, the step-size rule has been applied to other neurodynamic approaches in [28] and [38].

Proposition 4
Under Assumptions 1 and 2, ||τ_k|| ≤ α_k ||ς_k||.

Proof It is clear that the conclusion holds if τ_k = 0. Now, we suppose τ_k ≠ 0. In this case, consider the convex function Φ_{x_k}(·); then u_k ∈ ∂Φ_{x_k}(τ_k). Since τ_k is the minimum point of Φ_{x_k} over Ω − x_k, we have ⟨u_k, z − τ_k⟩ ≥ 0 for all z ∈ Ω − x_k. Then, x_k ∈ Ω implies that 0 ∈ Ω − x_k; taking z = 0 yields ⟨u_k, τ_k⟩ ≤ 0.
By substituting (34) into (35), we can derive the desired bound, and the result follows directly since τ_k ≠ 0.

Proposition 5
If Assumptions 1 and 2 hold, then the following estimate holds for any x ∈ Ω.

Proof From Proposition 4, together with (35) and (34), we have u_k = τ_k + α_k ς_k. Combining the above inequalities and using the convexity of h(x) through the subgradient ς_k, the conclusion holds.
Lemma 2 If Assumptions 1 and 2 hold, then {x k } is bounded.
Proof According to Proposition 5, for x* ∈ O we can bound ||x_{k+1} − x*||² in terms of ||x_k − x*||². As a result, {x_k} is bounded.

Proof From Proposition 5, the estimate holds for any k. Since {x_k} is bounded, {ς_k} is also bounded; without loss of generality, suppose ||ς_k|| ≤ K for k ∈ N, and define γ_k accordingly. Letting m → +∞, it follows that there exists a subsequence {γ_{i_k}} such that lim_{k→+∞} γ_{i_k} ≤ 0. Otherwise, there would be ρ > 0 and k̄ ≥ 0 such that γ_k ≥ ρ for k ≥ k̄, which contradicts Σ_{k=k̄}^{+∞} α_k = +∞. Therefore, {x_{i_k}} converges to a point, denoted by x*. Since Ω is a closed set, x* ∈ Ω.
Next, we prove x* ∈ O. If x* ∉ O, then there exists x̂ ∈ Ω such that h(x̂) < h(x*). It has been proved above that lim_{k→+∞} γ_{i_k} ≤ 0, which yields h(x̂) ≥ h(x*), a contradiction.

Theorem 8
Under Assumptions 1 and 2, all cluster points of {x k } are optimal solutions to distributed optimization problem (13).
Proof According to the proof of Theorem 7, it is sufficient to prove that all cluster points of {x_k} belong to O. By Theorem 7, there is a subsequence {γ_{i_k}} such that lim_{k→+∞} γ_{i_k} ≤ 0. Suppose, to the contrary, that there exist δ > 0 and another subsequence {γ_{l_k}} such that γ_{l_k} ≥ δ. Consider the subsequence {γ_{j_k}}, where j_k (k ∈ N) is defined as follows; obviously, γ_{j_k} is well defined. By (45) and (47), the corresponding inequality can be derived. Because lim_{k→+∞} γ_{i_k} ≤ 0, the set {k : γ_k ≤ 0} is infinite. Since Σ_{k=0}^{+∞} α_k γ_k < +∞, there is k̄ > 0 such that the tail estimate holds for k ≥ k̄. Furthermore, define S̄ as below; obviously, S̄ is finite. Based on (49), we obtain a bound that contradicts (48). To sum up, all cluster points of {x_k} belong to O.
Corollary 2 Assume {x_k} is generated by Approach I; then {x_k} converges to the optimal solution set of distributed optimization problem (13) and, correspondingly, to the optimal solution set of optimization problem (1).

Remark 9
The variable step-size neurodynamic approach designed as Approach I can solve the nonsmooth distributed optimization problem with convex inequality and set constraints. Compared with the existing discrete-time neurodynamic approaches in [32,45] and [50], which can only solve smooth distributed optimization problems with equality constraints or affine inequality constraints, Approach I has the ability to solve the nonsmooth distributed optimization problem with inequality and set constraints. Additionally, Approach I is convergent without the assumption that local cost functions are strongly convex.

Simulations and applications
In this section, the continuous-time neurodynamic approach and the discrete-time neurodynamic approach proposed in this paper are used to solve a numerical example. Moreover, an ill-conditioned Least Absolute Deviation problem and a load sharing problem are considered and solved, verifying the feasibility of the proposed neurodynamic approaches.

Numerical simulations
Example 1 Consider a system of 20 agents interacting over an undirected ring communication graph to collaboratively solve the optimization problem (51), where i = 1, 2, . . . , 20 and x = col{x_1, x_2, . . . , x_20} ∈ R^40. Obviously, problem (51) is a nonsmooth convex distributed optimization problem under inequality constraint sets Ξ_i ⊆ R².

(I) Continuous-time neurodynamic approach (14) for (51). It is easy to verify that each inequality constraint set Ξ_i is bounded and that there is x̄_i = (1, 0)^T ∈ int(Ξ_i) ∩ Ω, so Assumption 1 is satisfied. The parameters are chosen as M = 10 and L = 300. Applying continuous-time neurodynamic approach (14) from an initial point, the state solution converges to an optimal solution x* = [0, 0]^T of optimization problem (1), as shown in Fig. 2a, and the trend of the global cost function is shown in Fig. 2b. Therefore, the effectiveness of neurodynamic approach (14) is confirmed.
(II) Approach I for (51). Applying Approach I with the same initial value as in (I), take σ = 121000 and α_k = 1/(k + 5000). After 5000 iterations, the convergence of the iteration sequence and the value of the global objective function f(x) can be seen in Fig. 3, and the result is the same as that in (I).
(III) A higher-dimensional simulation for (51). Now we consider a multi-agent system with N = 100 agents interacting over the undirected communication graph shown in Fig. 4 to collaboratively solve optimization problem (51). The 100 agents are divided into five floors, in which the 20 agents on each floor are connected in a ring, and adjacent floors are connected through the agents with coordinates (x, y, z) = (1, 0, z); therefore, this communication graph is undirected and connected. Applying continuous-time neurodynamic approach (14) and Approach I, respectively, Fig. 5 shows the convergence of the two algorithms and illustrates their effectiveness for higher-dimensional distributed optimization problems. In Example 1, the dimension of the solution obtained by neurodynamic approach (14) is 40. The approaches proposed in [40] and [52] can also be implemented to solve problem (51), but the dimensions of their solutions are as high as 100 and 140, respectively. In addition, the approach in [51] is only suitable for distributed optimization problems with affine inequalities, so it cannot solve problem (51); moreover, when solving optimization problems of the same dimension, the solution dimension generated by the approach in [51] is up to 160, four times the solution dimension obtained by (14). This phenomenon becomes more pronounced for problems under larger-dimensional constraints. Thus, neurodynamic approach (14) can greatly reduce the amount of computation and the CPU consumption.

Least absolute deviation problem
Example 2 The system in [50] and Approach I proposed in this paper, both of which are able to solve nonsmooth distributed convex optimization problems, are used to solve the following ill-conditioned Least Absolute Deviation problem (52), where the data matrix D and vector c are as given in [50]. As mentioned in [50], the condition number of the matrix is about 200, so it is difficult to solve problem (52) using traditional optimization algorithms. Here, we consider a multi-agent network consisting of five nodes, each of which is assigned an objective function involving D and c. The communication links between agents can be seen in Fig. 6. Then, starting from the same initial point, both algorithms produce iteration sequences that converge to the optimal solution (2, 1, −2)^T, and the results are given in Fig. 7. It is clear from Fig. 7 that one advantage of the algorithm in this paper is its much faster convergence.
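Since the paper's matrix D and vector c are not reproduced above, the following sketch minimizes the Least Absolute Deviation objective ||Dx − c||_1 for a small illustrative, well-conditioned system whose exact minimizer is (2, 1, −2), matching the reported optimum; a plain subgradient method with diminishing step sizes stands in for Approach I, whose actual update rule is defined in the paper:

```python
def lad_subgradient(D, c, x0, iters=20000):
    """Minimize sum_i |d_i^T x - c_i| by the subgradient method.
    A subgradient is sum_i sign(d_i^T x - c_i) * d_i; the diminishing
    step 1/k guarantees convergence for this nonsmooth convex objective."""
    x = list(x0)
    for k in range(1, iters + 1):
        g = [0.0] * len(x)
        for row, ci in zip(D, c):
            r = sum(a * b for a, b in zip(row, x)) - ci
            s = (r > 0) - (r < 0)          # sign of the residual
            for j, a in enumerate(row):
                g[j] += s * a
        step = 1.0 / k                      # diminishing step size
        x = [xj - step * gj for xj, gj in zip(x, g)]
    return x

# Hypothetical data with exact LAD solution x* = (2, 1, -2)
D = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1], [1, -1, 0]]
c = [2, 1, -2, 1, 1]
x = lad_subgradient(D, c, [0.0, 0.0, 0.0])
print(x)  # close to the optimum (2, 1, -2)
```

For the genuinely ill-conditioned D of the paper (condition number about 200), such a naive method slows down markedly, which is precisely the difficulty the compared algorithms are designed to overcome.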

Load sharing problem
In the field of power systems, the load sharing problem is frequently considered. We want to find the optimal generation allocation that shares the load under the given constraints. Following the work of [46], the load sharing problem can be abstracted into the mathematical model (53), where p^load ∈ R^N is the vector of constant load demands at the buses, f_i is the local cost function at bus i, the generation p_i at bus i is constrained by its lower and upper capacity bounds, and the power flow ν_j in line j is constrained by its lower and upper flow bounds. Notice that p^load = [p_1^load, p_2^load, ..., p_N^load]^T and ν = [ν_1, ν_2, ..., ν_d]^T, and that the upper and lower bounds of the constraints are given constants.
Equivalently, problem (53) can be reformulated as the following distributed optimization problem (54). Example 3 To solve the load sharing problem (54), we consider a five-bus, five-line system whose bus interactions are shown in Fig. 6, with the simulation results presented in Fig. 8. Obviously, the solution satisfies the given constraints and minimizes the cost of load sharing.
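To illustrate the structure of the load sharing model, the sketch below solves a simplified instance of (53) with hypothetical quadratic costs f_i(p_i) = a_i p_i² + b_i p_i, capacity bounds, and no line-flow limits, using classical lambda iteration (dual bisection on the marginal price) as a centralized baseline rather than the proposed neurodynamic approach; all numerical data are assumptions, not the paper's:

```python
def dispatch(a, b, pmin, pmax, demand, tol=1e-10):
    """min sum_i a_i p_i^2 + b_i p_i  s.t.  sum_i p_i = demand,
    pmin_i <= p_i <= pmax_i.  At a common marginal price lam, each bus
    generates p_i(lam) = clip((lam - b_i) / (2 a_i), pmin_i, pmax_i);
    total generation is nondecreasing in lam, so we bisect on lam."""
    def p_of(lam):
        return [min(max((lam - bi) / (2.0 * ai), lo), hi)
                for ai, bi, lo, hi in zip(a, b, pmin, pmax)]
    lo, hi = 0.0, 50.0  # price bracket chosen to cover the data below
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sum(p_of(mid)) < demand:
            lo = mid
        else:
            hi = mid
    return p_of(0.5 * (lo + hi))

# Five buses sharing a total load of 100 (all numbers hypothetical)
a = [0.10, 0.12, 0.09, 0.11, 0.10]
b = [2.0, 1.5, 3.0, 2.5, 2.0]
p = dispatch(a, b, [0.0] * 5, [30.0] * 5, 100.0)
print(p)  # a feasible allocation summing to the total demand
```

The equal-marginal-price condition recovered here is exactly the optimality condition that the distributed reformulation (54) enforces through local computation and neighbor-to-neighbor communication instead of a central coordinator.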

Conclusion
In this paper, we proposed three neurodynamic approaches for solving distributed optimization problems with inequality and set constraints. First, a continuous-time neurodynamic approach was proposed without using the Laplacian matrix of the communication topology, and the state solution of the neurodynamic approach was proved to converge to an optimal solution of the nonsmooth distributed optimization problem under several mild assumptions. Then, an event-triggered neurodynamic approach was designed to reduce the communication burden, and it was verified that Zeno behavior does not occur. Furthermore, we proposed a discrete-time neurodynamic approach and proved that the iteration sequence converges to the optimal solution set of the nonsmooth distributed optimization problem. Finally, numerical examples were presented to demonstrate the effectiveness and advantages of the neurodynamic approaches. From an application point of view, we plan to extend the neurodynamic approaches to distributed optimization problems on directed or time-varying communication graphs in future work.

Declarations
Conflict of interest On behalf of all authors, the corresponding author states that there is no conflict of interest. We declare that we have no financial and personal relationships with other people or organizations that can inappropriately influence our work, and there is no professional or other personal interest of any nature or kind in any product, service and/or company that could be construed as influencing the position presented in, or the review of, the manuscript entitled "Distributed Neurodynamic Approaches to Nonsmooth Optimization Problems with Inequality and Set Constraints". This research is supported by the National Natural Science Foundation of China (62176073, 11871178).
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.