Dinh and Jeyakumar have written a well-organized survey paper on the celebrated Farkas’ lemma and its generalizations in mathematical optimization; as the reference list shows, most of it is based on their own research, or joint work with their collaborators, over the last three decades. The paper is easy to read, even without previous specialized knowledge of the area. Very likely, this paper will be quite useful for researchers at all levels.

Farkas’ lemma is essentially an example of a theorem of the alternative, that is, a theorem stating that exactly one of two systems has a solution. The systems appearing in its original version are defined by finitely many linear (in)equalities. The last thirty years have seen a great number of extensions of Farkas’ lemma, covering a wide range of systems including conic-linear and conic-sublinear systems, convex inequality systems, conic-convex systems, and classes of nonconvex systems such as difference of convex (DC) systems, composite convex systems, and quadratic systems. The most recent extensions cover inequality systems involving uncertain vectors, matrices, and polynomials. In this line of research, as well as in establishing zero duality gap and strong duality in Lagrangian duality theory, a closed cone condition plays a crucial role. Recent literature on the closedness of the image of a closed convex cone under a continuous linear mapping can be found in (Pataki 2007; Borwein and Moors 2010a, b; Hu and Wang 2011).
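For concreteness, we recall one standard formulation of the original lemma (stated here in generic notation, not taken verbatim from the survey): given a matrix \(A \in \mathbb{R}^{m \times n}\) and a vector \(b \in \mathbb{R}^m\), exactly one of the two systems
\[
\text{(I)} \quad Ax = b, \ x \ge 0 \qquad\text{and}\qquad \text{(II)} \quad A^{\mathsf T} y \ge 0, \ b^{\mathsf T} y < 0
\]
has a solution; equivalently, the implication \(A^{\mathsf T} y \ge 0 \Rightarrow b^{\mathsf T} y \ge 0\) holds if and only if \(b\) belongs to the convex cone \(\{Ax : x \ge 0\}\). The extensions discussed in the survey replace the nonnegative orthant, the linear map, and the finitely many linear (in)equalities by far more general objects.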

Section 2 of this survey is devoted to the generalization of Farkas’ lemma from systems of finitely many linear (in)equalities to systems involving arbitrary convex cones and continuous linear mappings between spaces of arbitrary dimensions, and to uncertain linear systems with some particular structures. In Section 3, Farkas’ lemma is extended to sublinear systems, including conical sublinear systems, separable sublinear systems, conical difference sublinear systems, and difference sublinear convex systems. Section 4 is devoted to extensions of Farkas’ lemma to cone-convex systems, convex infinite systems, convex uncertain systems, convex polynomial systems, and sublinear-convex systems. Extensions of Farkas’ lemma to systems involving composite convex functions and composite nonconvex systems are presented in Section 5; applications of such extensions can be found in composite convex optimization. Section 6 is devoted to extensions of Farkas’ lemma to nonconvex systems, including difference of convex systems, positively homogeneous systems, and nonconvex quadratic systems. In the last section, sequential forms of generalized Farkas’ lemma, free of any qualification conditions, are presented in several versions.

The book “Mathematical Programming and Control Theory” (Craven 1978) by Professor Bruce D. Craven is one of the earliest references on the topic of alternative theorems, where the result was established for a cone-convex system under an interior-point assumption. Early extensions of the theorem of the alternative concentrated on generalized convex systems. The first commenter recalls that in the early 1990s, during his stay at the University of New South Wales as a postgraduate student, Professor Jeyakumar continued to work on Farkas’ lemma, together with Professor Glover and their team, and established many significant extensions to difference sublinear systems, difference convex functions, systems with pointwise minima of sublinear functions, etc.

It is known that the elementary Farkas lemma (see the Introduction of the article) plays a key role in establishing the first-order necessary optimality condition for nonlinear constrained optimization problems via a constraint qualification, while this optimality condition can also be derived using the classical \(l_1\) exact penalty function and nonsmooth analysis tools, without using the Farkas lemma. Recently, the Farkas lemma has been applied to derive the first-order necessary optimality condition for twice-differentiable nonlinear constrained optimization problems by means of an \(l_p\) \((0 \le p \le 1)\) exact penalty function. In doing so, it was shown that the inner product of the objective gradient with every feasible direction of the linearized constraint set is nonnegative, by estimating the Dini upper directional derivative of the non-Lipschitz penalty term via a second-order Taylor expansion (see Yang and Meng 2007; Meng and Yang 2010).
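To recall the standard argument alluded to here (in generic notation): for the problem of minimizing \(f\) subject to \(g_i(x) \le 0\), \(i = 1, \dots, m\), with differentiable data, a local minimizer \(x^*\) satisfying a constraint qualification obeys \(\nabla f(x^*)^{\mathsf T} d \ge 0\) for every direction \(d\) with \(\nabla g_i(x^*)^{\mathsf T} d \le 0\), \(i \in I(x^*)\), where \(I(x^*)\) is the active index set. Applying Farkas’ lemma to this implication produces multipliers \(\lambda_i \ge 0\), \(i \in I(x^*)\), with
\[
\nabla f(x^*) + \sum_{i \in I(x^*)} \lambda_i \nabla g_i(x^*) = 0,
\]
which is the Karush–Kuhn–Tucker condition.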

It is worth mentioning that the Farkas lemma in a non-homogeneous form (Mangasarian 1969; Still and Streng 1996) can be very helpful in establishing second-order necessary conditions for nonlinear programming problems, and in verifying the nonemptiness of a semi-closed polyhedron (Fang et al. 2012), defined as the intersection of finitely many half-spaces (not necessarily closed).
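For the reader’s convenience, one common non-homogeneous form reads as follows (again in generic notation; the precise versions used in the cited references may differ): if the linear system \(a_i^{\mathsf T} x \le b_i\), \(i = 1, \dots, m\), is consistent, then the implication
\[
a_i^{\mathsf T} x \le b_i \ (i = 1, \dots, m) \ \Longrightarrow \ c^{\mathsf T} x \le d
\]
holds if and only if there exist \(\lambda_1, \dots, \lambda_m \ge 0\) such that \(c = \sum_{i=1}^m \lambda_i a_i\) and \(\sum_{i=1}^m \lambda_i b_i \le d\).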