On Maximizing Sums of Non-monotone Submodular and Linear Functions

We study the problem of Regularized Unconstrained Submodular Maximization (RegularizedUSM) as defined by Bodek and Feldman [BF22]. In this problem, you are given a non-monotone non-negative submodular function $f:2^{\mathcal N}\to \mathbb R_{\ge 0}$ and a linear function $\ell:2^{\mathcal N}\to \mathbb R$ over the same ground set $\mathcal N$, and the objective is to output a set $T\subseteq \mathcal N$ approximately maximizing the sum $f(T)+\ell(T)$. Specifically, an algorithm is said to provide an $(\alpha,\beta)$-approximation for RegularizedUSM if it outputs a set $T$ such that $\mathbb E[f(T)+\ell(T)]\ge \max_{S\subseteq \mathcal N}[\alpha \cdot f(S)+\beta\cdot \ell(S)]$. We also study the setting where $S$ and $T$ are subject to a matroid constraint, which we refer to as Regularized Constrained Submodular Maximization (RegularizedCSM). For both RegularizedUSM and RegularizedCSM, we provide improved $(\alpha,\beta)$-approximation algorithms for the cases of non-positive $\ell$, non-negative $\ell$, and unconstrained $\ell$. In particular, for the case of unconstrained $\ell$, we are the first to provide nontrivial $(\alpha,\beta)$-approximations for RegularizedCSM, and the $\alpha$ we obtain for RegularizedUSM is superior to that of [BF22] for all $\beta\in (0,1)$. In addition to approximation algorithms, we provide improved inapproximability results for all of the aforementioned cases. In particular, we show that the $\alpha$ our algorithm obtains for RegularizedCSM with unconstrained $\ell$ is tight for $\beta\ge \frac{e}{e+1}$. We also show 0.478-inapproximability for maximizing a submodular function where $S$ and $T$ are subject to a cardinality constraint, improving the long-standing 0.491-inapproximability result due to Gharan and Vondrak [GV10].


Introduction
Submodularity. Submodularity is a property satisfied by many fundamental set functions, including coverage functions, matroid rank functions, and directed cut functions. Optimization of submodular set functions has found a wealth of applications in machine learning, including the spread of influence in social networks [KKT03], sensor placement [KSG08], information gathering [KG11], document summarization [LB11; Wei+13; GGV15], image segmentation [JB11], and multi-object tracking [She+18], among others (see [KG14] for a survey).

Submodular Maximization. Many problems involving maximization of non-negative submodular functions can be classified as either unconstrained or constrained, which we refer to as USM and CSM, respectively. For USM, the objective is to return any set in the domain of the function approximately maximizing the function, while for CSM, the returned set must additionally satisfy a matroid independence constraint (or "matroid constraint" for short). The simplest nontrivial example of a matroid constraint is a cardinality constraint, which means that an upper bound is given on the allowed size of the returned set.
In general, it is impossible to approximate the maxima of instances of USM or CSM to arbitrary accuracy in polynomial time, so we focus both on finding algorithms that return a set with expected value at least $\alpha$ times that of the optimum, known as $\alpha$-approximation algorithms, and on proving that no such polynomial-time algorithms can exist, known as $\alpha$-inapproximability results. Now we briefly review past results for both USM and CSM. A $(1-e^{-1})$-approximation for monotone CSM was achieved by Nemhauser et al. [NWF78] using a greedy algorithm for the special case of a cardinality constraint and later generalized by Calinescu et al. [Cal+11] to a matroid constraint using a continuous greedy algorithm. On the other hand, a 0.5-approximation for non-monotone USM was provided by Buchbinder et al. [Buc+12] using a randomized double greedy algorithm, while the best known approximation factor for non-monotone CSM is 0.385 due to Buchbinder and Feldman [BF16] using a local search followed by an aided measured continuous greedy.
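As a concrete illustration of the cardinality-constrained greedy of [NWF78], consider the following minimal Python sketch. It is not taken from any of the cited papers; the coverage function and element names are invented purely for the example.

```python
def greedy_cardinality(f, ground_set, k):
    """Greedy of Nemhauser et al. [NWF78]: repeatedly add the element with
    the largest marginal gain. For monotone submodular f, the returned set
    has value at least (1 - 1/e) times the optimum over sets of size <= k."""
    S = set()
    for _ in range(k):
        candidates = ground_set - S
        if not candidates:
            break
        u = max(candidates, key=lambda v: f(S | {v}) - f(S))
        if f(S | {u}) - f(S) <= 0:  # cannot happen for monotone f, but be safe
            break
        S.add(u)
    return S

# Toy monotone submodular function: coverage of invented sets.
covers = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c"}, 4: {"d"}}
def coverage(S):
    return len(set().union(*(covers[u] for u in S))) if S else 0

S = greedy_cardinality(coverage, set(covers), k=2)
```

On this toy instance any greedy tie-breaking covers three items, which here equals the optimum for $k=2$; in general the greedy only guarantees a $(1-e^{-1})$ fraction.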
The first two approximation factors are tight; $(1-e^{-1}+\epsilon)$-inapproximability and $(0.5+\epsilon)$-inapproximability for any $\epsilon>0$ were shown by Nemhauser and Wolsey [NW78] and Feige et al. [FMV11], respectively, using ad hoc methods. On the other hand, the best known inapproximability factor for non-monotone CSM is 0.478 due to Gharan and Vondrak [GV10] using the symmetry gap technique of [Von11]. This technique has the advantage of being able to succinctly reprove the inapproximability results of [NW78; FMV11].

Submodular + Linear Maximization. In this work we consider approximation algorithms for maximizing the sum of a non-negative non-monotone submodular function $f$ and a linear function $\ell$. The function $g=f+\ell$ is still submodular, though not necessarily non-negative. Here, the linear term $\ell$ has several potential interpretations. For example, by setting $\ell$ to be non-positive, $\ell$ serves as a regularizer or soft constraint that favors smaller sets [Har+19].
Sviridenko et al. [SVW17] were the first to study algorithms for $f+\ell$ sums in the case of monotone $f$, in order to provide improved approximation algorithms for monotone CSM with bounded curvature. Here, the curvature $c\in[0,1]$ of a non-negative monotone submodular function $f$ is roughly a measure of how far $f$ is from linear.
They provide a $(1-c/e-\epsilon)$-approximation algorithm and a complementary $(1-c/e+\epsilon)$-inapproximability result. The idea of the algorithm is to decompose $f$ into a sum $f'+\ell'$ of a submodular part and a linear part, and show that an approximation factor of $1-e^{-1}$ can be achieved with respect to $f'$ and an approximation factor of 1 with respect to $\ell'$ simultaneously. Formally, if $\mathcal I$ is the family of independent sets of a matroid, the algorithm computes a set $T\in\mathcal I$ that satisfies $\mathbb E[f(T)+\ell(T)]\ge (1-e^{-1})\cdot f(OPT)+\ell(OPT)-\epsilon$ by first "guessing" the value of $\ell(OPT)$, and then running continuous greedy. The algorithm also works when the sign of $\ell$ is unconstrained. Feldman subsequently removed the need for the guessing step and the dependence on $\ell(OPT)$ by introducing a distorted objective [Fel18]. Many faster algorithms for the case of monotone $f$ have since been developed [Har+19; Kaz+21; NET21]. However, only very recently has the case of non-monotone $f$ been considered. Lu et al. [LYG21] were the first to do so using a distorted measured continuous greedy, showing how to compute $T\in\mathcal I$ such that $\mathbb E[f(T)+\ell(T)]\ge \max_{S\in\mathcal I}[(e^{-1}-\epsilon)f(S)+\ell(S)]$, but only when $\ell$ is non-positive. Bodek and Feldman [BF22] were the first to consider the case where $f$ is non-monotone and the sign of $\ell$ is unconstrained. They define and study the problem of Regularized Unconstrained Submodular Maximization (RegularizedUSM):

Definition 1.1 (RegularizedUSM). Given a (not necessarily monotone) non-negative submodular function $f:2^{\mathcal N}\to\mathbb R_{\ge 0}$ and a linear function $\ell:2^{\mathcal N}\to\mathbb R$ over the same ground set $\mathcal N$, an algorithm is said to provide an $(\alpha,\beta)$-approximation for RegularizedUSM if it outputs a set $T\subseteq\mathcal N$ such that $\mathbb E[f(T)+\ell(T)]\ge \max_{S\subseteq\mathcal N}[\alpha\cdot f(S)+\beta\cdot \ell(S)]$.

Our Contributions
In this work, we present improved approximability and inapproximability results for RegularizedUSM as well as the setting where $S$ and $T$ are subject to a matroid constraint, which we refer to as Regularized Constrained Submodular Maximization (RegularizedCSM):

Definition 2.1 (RegularizedCSM). Given a (not necessarily monotone) non-negative submodular function $f:2^{\mathcal N}\to\mathbb R_{\ge 0}$ and a linear function $\ell:2^{\mathcal N}\to\mathbb R$ over the same ground set $\mathcal N$, as well as a matroid with family of independent sets denoted by $\mathcal I$, an algorithm is said to provide an $(\alpha,\beta)$-approximation for RegularizedCSM if it outputs a set $T\in\mathcal I$ such that $\mathbb E[f(T)+\ell(T)]\ge \max_{S\in\mathcal I}[\alpha\cdot f(S)+\beta\cdot \ell(S)]$.

In particular, we are the first to present $(\alpha,\beta)$-approximation algorithms for RegularizedCSM when $\ell$ is not non-positive, and the $\alpha$ we obtain for RegularizedUSM is superior to that of [BF22] for all $\beta\in(0,1)$. To show approximability, the main techniques we use are the measured continuous greedy introduced by Feldman et al. [FNS11] and used by [BF16; LYG21], the distorted objective introduced by Feldman [Fel18] and used by [LYG21], as well as the "guessing step" of [SVW17]. To show inapproximability, the main technique we use is the symmetry gap of [Von11], and most of our symmetry gap constructions are based on those of [GV10].
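For small ground sets, the $(\alpha,\beta)$ guarantee of Definitions 1.1 and 2.1 can be checked directly by brute force. The Python sketch below only illustrates the definition, not any algorithm from this paper; the toy functions $f$ and $\ell$ are invented for the example.

```python
from itertools import chain, combinations

def powerset(ground):
    items = list(ground)
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def is_alpha_beta_approx(f, ell, T, ground, alpha, beta):
    """Check f(T) + ell(T) >= max_S [alpha * f(S) + beta * ell(S)] by
    enumerating all S (exponential time; illustration only). For a
    randomized algorithm the left-hand side would be an expectation."""
    target = max(alpha * f(set(S)) + beta * ell(set(S)) for S in powerset(ground))
    return f(T) + ell(T) >= target

# Invented instance: f is 2 on any non-empty set (submodular), ell penalizes size.
f = lambda S: 2.0 if S else 0.0
ell = lambda S: -0.5 * len(S)
ok = is_alpha_beta_approx(f, ell, {1}, {1, 2}, alpha=1.0, beta=1.0)
```

Here the set $\{1\}$ attains the regularized optimum, so the check succeeds, while the empty set does not.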
Organization of the Paper. We present the definitions and notation used throughout this paper in Section 3. Sections 4 to 8 form the bulk of our paper and are summarized below. We conclude with a discussion of open problems in Section 9.

Section 4: Inapproximability of Maximization with Cardinality Constraint
We first consider CSM without a regularizer. Gharan and Vondrak [GV10] proved 0.491-inapproximability of CSM in the special case where the matroid constraint is a cardinality constraint. We improve the inapproximability factor to 0.478 in Theorem 4.1 by modifying a construction from the same paper [GV10, Theorem E.2] that uses the symmetry gap technique of [Von11].

Section 5: Non-positive $\ell$
The results of this section are summarized in Figure 1. In Section 5.1, we present improved $(\alpha(\beta),\beta)$-approximations for RegularizedUSM for all $\beta\ge 0$ and RegularizedCSM for all $\beta\in[0,1]$. Previously, the best known result for both RegularizedUSM and RegularizedCSM was $\alpha(\beta)=\beta e^{-\beta}-\epsilon$ due to Lu et al. [LYG21]. This function achieves its maximum value at $\alpha(1)=e^{-1}-\epsilon>0.367$. We improve the approximation factor for RegularizedCSM to $\alpha(1)>0.385$, matching the best known approximation factor for CSM without a regularizer due to Buchbinder and Feldman [BF16]. Additionally, we show that larger values of $\alpha(\beta)$ are achievable for RegularizedUSM when $\beta>1$. The idea is to combine the "guessing step" of Sviridenko et al. [SVW17] with a generalization of the aided measured continuous greedy algorithm of Buchbinder and Feldman [BF16]. Following the convention of [BF22], the $x$ and $y$ axes of Figure 1 represent the coefficients of $\ell$ and $f$, respectively. We use blue for approximation algorithms and red for inapproximability results, and the shaded area represents the gap between the best known approximation algorithms and inapproximability results. Observe that Theorem 5.6 unifies the two inapproximability theorems from [BF22], and $(0.5, 2\ln 2-\epsilon)$-inapproximability is due to Theorem 5.9. For RegularizedCSM, the results are the same for $\beta\le 1$.
A natural follow-up question is whether there is a $(0.5,\beta)$-approximation algorithm for RegularizedUSM with non-positive $\ell$ for some $\beta>0$. Although it is unclear whether this is the case for general $f$, we use linear programming to show this result when $f$ is an undirected or directed cut function (Theorems 5.4 and 5.5).
In Section 5.2, we use the symmetry gap technique to demonstrate improved inapproximability for RegularizedUSM with non-positive $\ell$. The previous best inapproximability results were [BF22, Theorem 1.1] near $\beta=0$ and [BF22, Theorem 1.3] near $\beta=1$. Our result, which generalizes the construction from Section 4, beats or matches both of these theorems for all $\beta$.

Section 6: Non-negative $\ell$, RegularizedUSM
The results of this subsection and the next are summarized in Figures 2 and 3.
We note that Theorem 5.1 can be modified to obtain guarantees for RegularizedUSM with non-negative $\ell$ (see Section 6.2). But first, we take a slight detour and reanalyze the guarantee for this task provided by the randomized double greedy algorithm of [Buc+12] (RandomizedDG), which achieves the best-known $(\alpha(\beta),\beta)$-approximations near $\beta=3/4$. We also reanalyze the guarantee of the deterministic variant of double greedy from the same paper (DeterministicDG).
• Improved analysis of RandomizedDG (Theorem 6.2): we show that RandomizedDG simultaneously achieves a parameterized family of $(\alpha,\beta)$-approximation guarantees. Observe that for both DeterministicDG and RandomizedDG, increasing the parameter improves the dependence of the approximation on $\ell$ but worsens the dependence on $f$, and setting the parameter to 1 recovers the guarantees of [BF22]. We also provide examples showing that neither DeterministicDG nor RandomizedDG achieves $(\alpha,\beta)$-approximations better than those of Theorems 6.1 and 6.2 (Theorems 6.3 and 6.4, respectively).
In Section 6.2 we provide improved approximation algorithms for non-negative $\ell$ near $\beta=1$ by combining the results of Sections 5.1 and 6.1:

Theorem 6.5. An $(\alpha(\beta),\beta)$-approximation algorithm for RegularizedUSM with non-negative $\ell$ exists for any $(\alpha(\beta),\beta)$ in Table 3. In particular, the $\alpha(\beta)$ obtained for $\beta\ge 0.85$ is superior to that of Theorem 6.2 alone, and $\alpha(1)>0.385$, matching the approximation factor of Theorem 5.1.

Section 7: Non-negative $\ell$, RegularizedCSM

Note that $\alpha(0.385)>0.385$ matches the (trivial) result of directly applying the algorithm of [BF16] to $f+\ell$, which is non-negative submodular when $\ell$ is non-negative. In Section 7.2, we prove a complementary inapproximability result showing that our algorithm is tight for $\beta\ge e^{-1}$.

Section 8: Unconstrained $\ell$
The results of this section are summarized in Figure 5. In Section 8.1 we modify and reanalyze the distorted measured continuous greedy of [LYG21] to achieve a better approximation factor for RegularizedUSM than [BF22, Theorem 1.2] for all $\beta\in(0,1)$:

Theorem 8.1. For all $t\ge 0$, there is an $(\alpha(t)-\epsilon,\beta(t)-\epsilon)$-approximation algorithm for RegularizedUSM, for explicit functions $\alpha(t)$ and $\beta(t)$ of the stopping time $t$. This algorithm achieves the same approximation guarantee for RegularizedCSM when $t\le 1$.
Note that unlike [BF22, Theorem 1.2], our algorithm also applies to RegularizedCSM, and is tight for RegularizedCSM when $\beta\ge \frac{e}{e+1}$. To the best of our knowledge, this is the first algorithm to achieve any $(\alpha,\beta)$-approximation for RegularizedCSM when the sign of $\ell$ is unconstrained. We then demonstrate that our algorithm is not tight for $\beta<\frac{e}{e+1}$; in particular, by combining the methods for non-positive and non-negative $\ell$ (Sections 5 and 7), we achieve a slightly greater value of $\alpha$ for $\beta=0.7$. Note that Theorem 8.1 only guarantees a $(0.277, 0.7)$-approximation, attained when $t\approx 0.925$.

Theorem 8.3. There is a $(0.280, 0.7)$-approximation algorithm for RegularizedCSM.
In Section 8.2 we extend the symmetry gap construction for non-positive $\ell$ from Theorem 5.6 in order to obtain stronger inapproximability results for unconstrained $\ell$. We first show that a natural generalization of Theorem 5.6 proves $(\alpha(\beta),\beta)$-inapproximability with $\alpha(1)<0.440$, and then provide a different construction that shows $(0.408, 1)$-inapproximability (Theorems 8.4 and 8.5).

Preliminaries
We use much the same notation as [BF22, Section 2].
A set function $f$ is said to be monotone if $f(S)\le f(T)$ for every two sets $S\subseteq T\subseteq\mathcal N$, and it is said to be linear if there exist values $\{\ell_u\in\mathbb R \mid u\in\mathcal N\}$ such that $\ell(S)=\sum_{u\in S}\ell_u$ for every set $S\subseteq\mathcal N$. When considering the sum of a non-negative submodular function $f$ and a linear function $\ell$ whose sign is unconstrained, define $\mathcal N^+\triangleq\{u\mid u\in\mathcal N \text{ and } \ell_u>0\}$ and $\mathcal N^-\triangleq\mathcal N\setminus\mathcal N^+$. In other words, $\mathcal N^+$ contains the elements of the ground set with positive sign in $\ell$ and $\mathcal N^-$ contains all the rest. We additionally define $\ell^+(S)\triangleq\ell(S\cap\mathcal N^+)$ and $\ell^-(S)\triangleq\ell(S\cap\mathcal N^-)$ to be the components of $\ell$ with positive and negative sign, respectively.

Multilinear Extensions. All vectors of reals are in bold (e.g., $\mathbf x$). Given two vectors $\mathbf x,\mathbf y\in[0,1]^{\mathcal N}$, we define $\mathbf x\vee\mathbf y$, $\mathbf x\wedge\mathbf y$, and $\mathbf x\bullet\mathbf y$ to be the coordinate-wise maximum, minimum, and multiplication, respectively, of $\mathbf x$ and $\mathbf y$. We also define $\mathbf x\setminus\mathbf y\triangleq\mathbf x-\mathbf x\wedge\mathbf y$. Given a set function $f:2^{\mathcal N}\to\mathbb R$, its multilinear extension is the function $F(\mathbf x)\triangleq\mathbb E[f(\mathrm R(\mathbf x))]$, where $\mathrm R(\mathbf x)$ is a random subset of $\mathcal N$ including every element $u\in\mathcal N$ with probability $x_u$, independently. One can verify that $F$ is a multilinear function of its arguments as well as an extension of $f$ in the sense that $F(\mathbf 1_S)=f(S)$ for every set $S\subseteq\mathcal N$. Here, $\mathbf 1_S$ is the vector with value 1 at each $u\in S$ and 0 at each $u\in\mathcal N\setminus S$, and is known as the characteristic vector of the set $S$.
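The multilinear extension can be estimated to arbitrary accuracy by sampling $\mathrm R(\mathbf x)$. The following Python sketch (with an invented linear test function) illustrates both the sampling definition and the extension property $F(\mathbf 1_S)=f(S)$:

```python
import random

def multilinear_extension(f, ground, x, samples=20000, seed=0):
    """Monte Carlo estimate of F(x) = E[f(R(x))], where R(x) contains each
    element u independently with probability x[u]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        R = {u for u in ground if rng.random() < x[u]}
        total += f(R)
    return total / samples

# For the linear function f(S) = |S|, F(x) is exactly sum_u x_u = 1.6 here.
ground = {1, 2, 3}
est = multilinear_extension(lambda S: len(S), ground, {1: 0.2, 2: 0.5, 3: 0.9})
```

At an integral point $\mathbf x=\mathbf 1_S$ every sample equals $S$, so the estimate is exact.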
Value Oracles. We make the standard assumption that an algorithm for $f+\ell$ sums accesses $f$ only through a value oracle; that is, given a set $S\subseteq\mathcal N$, the oracle returns $f(S)$ in polynomial time. The linear function $\ell$, on the other hand, is provided to the algorithm directly.

Matroid Polytopes.
A matroid $\mathcal M$ may be specified by a pair $(\mathcal N,\mathcal I)$ consisting of a ground set and a family of independent sets. The matroid polytope $\mathcal P$ corresponding to $\mathcal M$ is defined to be $\mathrm{conv}(\{\mathbf 1_S\mid S\in\mathcal I\})$, where $\mathrm{conv}$ denotes the convex hull. By construction, $\mathcal P$ is guaranteed to be down-closed; that is, $\mathbf 0\le\mathbf x\le\mathbf y$ and $\mathbf y\in\mathcal P$ imply $\mathbf x\in\mathcal P$. We also make the standard assumption that $\mathcal P$ is solvable; that is, linear functions can be maximized over $\mathcal P$ in polynomial time. For CSM and RegularizedCSM, we let $OPT$ denote any set such that $OPT\in\mathcal I$ (equivalently, $\mathbf 1_{OPT}\in\mathcal P$), while for USM and RegularizedUSM, we let $OPT$ denote any subset of $\mathcal N$. For example, in the context of CSM, $OPT$ is typically a maximizer of $f$ over $\mathcal I$.

Miscellaneous. We let $\epsilon$ denote any positive real. Many of our algorithms are "almost" $(\alpha,\beta)$-approximations in the sense that they provide an $(\alpha-\epsilon,\beta)$-approximation in $\mathrm{poly}(n,1/\epsilon)$ time for any $\epsilon>0$. Similarly, some of our results show $(\alpha+\epsilon,\beta)$-inapproximability for any $\epsilon>0$.
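In the simplest case of a cardinality constraint, the solvability assumption is easy to verify directly: the matroid polytope is $\{\mathbf x\in[0,1]^{\mathcal N} : \sum_u x_u\le k\}$, and maximizing a linear function over it just means taking the $k$ largest strictly positive weights. The short Python sketch below (function name and weights invented) demonstrates this; it is only an illustration of the definition, not part of any algorithm in the paper.

```python
def maximize_linear_over_cardinality_polytope(weights, k):
    """Maximize <w, x> over conv{1_S : |S| <= k}
    = {x in [0,1]^N : sum_u x_u <= k}: an optimal vertex sets x_u = 1
    for the (at most k) largest strictly positive weights."""
    x = {u: 0.0 for u in weights}
    for u in sorted(weights, key=weights.get, reverse=True)[:k]:
        if weights[u] > 0:
            x[u] = 1.0
    return x

x = maximize_linear_over_cardinality_polytope({"a": 3.0, "b": -1.0, "c": 2.0, "d": 1.0}, k=2)
```

Because the polytope is down-closed, negative-weight coordinates are set to 0 rather than forced into the solution.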
Prior Work.A more comprehensive overview than Section 1 of all relevant prior approximation algorithms, together with their corresponding inapproximability results, is deferred to Appendix A.1.

Inapproximability of Maximization with Cardinality Constraint
In this section, we prove Theorem 4.1:

Theorem 4.1. There exist instances of the problem $\max\{f(S) : S\subseteq\mathcal N \text{ and } |S|\le k\}$ such that a 0.478-approximation would require exponentially many value queries.
The formal definition of refinement can be found in [Von11]. The important thing to note is that the refined family $\tilde{\mathcal F}$ satisfies the same properties as $\mathcal F$; in particular, refinement preserves cardinality and matroid independence constraints. Before proving Theorem 4.1, we start with a related lemma.

Lemma 4.5 (Inapproximability of Cardinality Constraint on Subset of Domain)
Let $Z$ be some subset of the ground set. There exist instances of the problem $\max\{f(S) : S\subseteq\mathcal N \text{ and } |S\cap Z|\le k\}$ such that a 0.478-approximation would require exponentially many value queries.
Proof. It suffices to provide $f$ and $\mathcal F$ satisfying the definitions of Lemma 4.4 with symmetry gap less than 0.478. The construction is identical to that of [GV10, Theorem E.2], except that we drop the constraint $|S\cap\{a,b\}|\le 1$ from $\mathcal F$. Recall that [GV10, Theorem E.2] defines the submodular function $f$ as the sum of the weighted cut functions of two directed hyperedges and an undirected edge (see [GV10, Figure 4] for an illustration). Specifically, the weighted cut function on the directed hyperedge $(\{u_1,u_2,\dots,u_k\},a)$ contributes its weight when the hyperedge is cut with $a\notin S$, and 0 otherwise; the weighted cut function on the second directed hyperedge $(\{v_1,v_2,\dots,v_k\},b)$ is defined in the same way; and the weighted cut function on the undirected edge $(a,b)$ contributes its weight if $|S\cap\{a,b\}|=1$ and 0 otherwise. As in [GV10, Lemma 5.4], we let $\mathcal G$ be the group of permutations generated by $\{\sigma_1,\sigma_2\}$, where $\sigma_1$ swaps the two hyperedges and $\sigma_2$ rotates the tail vertices of the first hyperedge. It is easy to check that $(f,\mathcal F)$ are strongly symmetric with respect to both $\sigma_1$ and $\sigma_2$. Writing the symmetrization of the multilinear extension in terms of a function $\hat f(x,y)$ of two variables, the maximum of the symmetrized objective over all symmetric points can be computed in the limit $k\to\infty$. The third equality in this computation holds since $\hat f(x,y)\le\hat f(1-x,y)$ for $x\in(1/2,1]$ (thus, adding the constraint $x\le 1/2$ has no effect), while the inequality holds due to the proof of [GV10, Theorem E.2]. So the symmetry gap is less than 0.478, as desired. Now, all we need to do to show Theorem 4.1 is convert the cardinality constraint on $S\cap Z$ in Lemma 4.5 into a cardinality constraint on all of $S$.
Proof of Theorem 4.1. Again, it suffices to provide $f$ and $\mathcal F$ satisfying the definitions of Lemma 4.4 with symmetry gap less than 0.478. We start with the construction from Lemma 4.5, replace each of the elements $a$ and $b$ with $m$ copies $a_1,\dots,a_m$ and $b_1,\dots,b_m$, and increase the cardinality bound by one. The goal is to show that the symmetry gap of $f$ with respect to $\mathcal F$ remains less than 0.478 as $m\to\infty$. Specifically, we may redefine $f$ so that it treats the copies of $a$ (and of $b$) symmetrically. Importantly, $f$ remains non-negative, submodular, and strongly symmetric, with the new symmetrization defined for an appropriate choice of the permutation group. It can be verified that the symmetrized objective can be written in terms of the same function of two variables $\hat f(x,y)$ from Lemma 4.5: Equation (4.2) is satisfied because the expectation of the product of two independent variables equals the product of their expectations, and Equation (4.3) holds as $m\to\infty$. The computation then concludes as in the proof of Lemma 4.5, where the approximate equality holds for sufficiently large $m$.

Non-Positive $\ell$
The results of this section are summarized in Figure 1.

Approximation Algorithms
In this subsection we provide improved approximations for general $f$ (Theorem 5.1) as well as for the special case where $f$ is a cut function (Theorems 5.4 and 5.5).
We start with a special case of Theorem 5.1.

Lemma 5.2.
There is a $(0.385, 1)$-approximation algorithm for RegularizedCSM when $\ell$ is non-positive.
Proof. The idea is to combine the "guessing step" of Sviridenko et al. [SVW17] with the algorithm of [BF16]. If we know the value of $\ell(OPT)$, we can run [BF16] with the linear part constrained accordingly. For at least one of polynomially many guessed values of $\ell(OPT)$ ("guesses"), the guess is accurate to within a factor of $1+\epsilon$. Combining these guarantees shows that the returned set is a $(0.385+\epsilon, 1+\epsilon)$-approximation, which in turn implies a $(0.385, 1)$-approximation since we may assume $f(OPT)+\ell(OPT)\ge 0$.

Before proving Theorem 5.1, we start by briefly reviewing the main algorithm from [BF16] when executed on a solvable down-closed polytope $\mathcal P$. First, it uses a local search to generate $\mathbf x\in\mathcal P$ such that both of the inequalities (5.1) and (5.2) hold with high probability. Finally, assuming $\mathcal P$ is the matroid polytope corresponding to a family of independent sets $\mathcal I$, the algorithm uses pipage rounding to convert the fractional solutions to integral solutions in $\mathcal I$ whose expected value under $f$ is at least that of the corresponding fractional solution, and returns the integral solution with the larger value of $f$. To obtain improved approximation bounds, we need the following generalization of Equation (5.3):

Lemma 5.3 (Generalization of Aided Measured Continuous Greedy). If we run Aided Measured Continuous Greedy
given a fractional solution $\mathbf z$ and a polytope $\mathcal P$ for a total of $t_s$ time, where $t_s\ge 1$, it will generate $\mathbf y\in\mathcal P$ satisfying a guarantee that matches Equation (5.3) term by term when $t_s=1$.
Proof Sketch. By [BF16], proving the conclusion for integral sets implies the conclusion for fractional solutions, so it suffices to prove the integral case, Equation (5.4). The idea of the original aided measured continuous greedy is to run measured continuous greedy for some initial amount of time only on the elements outside the given set, and then for the remaining time with all elements of $\mathcal N$. Working out what happens when we run it for a total of $t_s$ time instead of 1 is just a matter of going through the equations from [BF16, Section 4] and making a few minor changes. The remainder of the proof is deferred to Appendix A.3.
Proof of Theorem 5.1. Our algorithm for RegularizedUSM is as follows:

1. As in Lemma 5.2, first guess the value of $\ell(OPT)$ to within a factor of $1+\epsilon$, and then constrain the feasible region accordingly.

2. Generate $\mathbf x$ using the local search procedure on $(F,\mathcal P)$ described by [BF16].
For improved approximations, larger sets can be chosen (but we found the benefit of doing so to be negligible).
4. Round $\mathbf x$ and all fractional solutions found in the previous steps to valid integral solutions. Note that by replacing $\mathbf y$ with $\mathrm R(\mathbf y)$, the value of $F+L$ is preserved in expectation.
5. Return the solution from step 4 with the maximum value, or the empty set if none of these solutions has positive expected value. Let $V$ be the expected value of this solution.
• The remaining vertices of the hull correspond to running Lemma 5.3 on $\mathbf x$ for all pairs of time parameters in the chosen set.
For the case of RegularizedCSM, the reasoning is almost the same, but to ensure that all points returned by Lemma 5.3 lie within $\mathcal P$, we only include pairs with total time at most 1 in step 3, and pipage rounding with respect to the original $\mathcal P$ (not $\mathcal P\cap\{\mathbf x : L(\mathbf x)\ge (1+\epsilon)\ell(OPT)\}$, which is not necessarily a matroid polytope) must be used for step 4. The results turn out to be identical to those displayed in Figure 1 for $\beta\le 1$.
Next, we state better approximation results for undirected and directed cut functions, respectively. The proofs, which use linear programming, are deferred to Appendix A.3. We note that linear programming was previously used to provide a 0.5-approximation for MAX-DICUT by Trevisan [Tre98] and later by Halperin and Zwick [HZ01].
Theorem 5.4. There is a $(0.5, 1)$-approximation algorithm for RegularizedCSM when $\ell$ has arbitrary sign and $f$ is the cut function of a weighted undirected graph $G=(V,E,w)$; that is, $f(S)=\sum_{\{u,v\}\in E : |S\cap\{u,v\}|=1}w(\{u,v\})$ for all $S\subseteq V$, where each edge weight is non-negative.
Note that while our above result for undirected cut functions applies to RegularizedCSM, our subsequent result for directed cut functions only applies to RegularizedUSM.
Theorem 5.5. There is a $(0.5, 1)$-approximation algorithm for RegularizedUSM when $\ell$ has arbitrary sign and $f$ is the cut function of a weighted directed graph $G=(V,E,w)$; that is, $f(S)=\sum_{(u,v)\in E : u\in S,\, v\notin S}w((u,v))$ for all $S\subseteq V$, where each edge weight is non-negative.

Inapproximability
In this subsection, we prove Theorem 5.6; recall its guarantees from Figure 1. Before proving Theorem 5.6, we state a generalization of the symmetry gap technique to $f+\ell$ sums that we use for Theorem 5.6 and the rest of our inapproximability results.

Definition 5.7. We say that $\max_{S\in\mathcal F}[f(S)+\ell(S)]$ is strongly symmetric with respect to a group of permutations $\mathcal G$ if $\ell(S)=\ell(\sigma(S))$ for all $\sigma\in\mathcal G$ and $(f,\mathcal F)$ are strongly symmetric with respect to $\mathcal G$ as defined in Definition 4.3.

Lemma 5.8 (Inapproximability of $(\alpha,\beta)$-Approximations). Let $\max_{S\in\mathcal F}[f(S)+\ell(S)]$ be an instance of non-negative submodular maximization, strongly symmetric with respect to a group of permutations $\mathcal G$. For any two constants $\alpha,\beta\ge 0$, if the symmetrized optimum is smaller than $\max_{S\in\mathcal F}[\alpha\cdot f(S)+\beta\cdot\ell(S)]$, then no polynomial-time algorithm for RegularizedCSM can guarantee an $(\alpha,\beta)$-approximation. The same inapproximability holds for RegularizedUSM by setting $\mathcal F=2^{\mathcal N}$.
Proof Sketch. [BF22, Theorem 3.1] shows this lemma for the case of RegularizedUSM. The proof for RegularizedCSM is similar, so it is omitted.
The idea behind the proof of Theorem 5.6 is to generalize the symmetry gap construction of [BF22, Theorem 1.3], which in turn is a modification of the 0.478-inapproximability result of [GV10] used in Section 4.
Proof of Theorem 5.6. Set $f$ to be the same function defined in Lemma 4.5, and apply Lemma 5.8 with the group generated by $\sigma_1$. For a fixed $\beta$, we can show $(\alpha,\beta)$-inapproximability using this method if it is possible to choose the parameters of the construction and $\ell$ such that the symmetrized optimum is smaller than the regularized optimum; our goal is thus to minimize the left-hand side of the resulting inequality, Equation (5.5). [BF22, Theorem 1.3] sets the $\ell$-values of the head vertices to zero and then chooses the remaining parameters to minimize this quantity. However, choosing these $\ell$-values to be negative rather than zero gives superior bounds for small $\beta$. We can approximate the optimal choice by brute force over a range of parameters. For $\beta\in\{0.8, 0.9, 1.0\}$, it is optimal to set the $\ell$-values of the head vertices to zero, and our guarantee is the same as that of [BF22, Theorem 1.3]. Our results for $\beta\in\{0.6, 0.7\}$ are stronger than those of [BF22, Theorem 1.3] even though they also set these values to zero, because that theorem only considers parameter values in $[-0.5, 0.5]$.
Next, we consider the limit of Theorem 5.6 as $\alpha(\beta)\to 0.5$. This is not a new result in the sense that [BF22, Theorem 1.3] can already prove it when its parameters are chosen appropriately, but we nevertheless believe that there is value in explicitly stating it.

Theorem 5.9. For any $\epsilon>0$, there are instances of RegularizedUSM with non-positive $\ell$ such that $(0.5, 2\ln 2-\epsilon\approx 1.386)$ is inapproximable.

Proof. To find the maximum $\beta$ such that we can show $(0.5,\beta)$-inapproximability using the construction of Theorem 5.6, our goal is to choose a parameter $x\in(0,0.5)$ and a negative $\ell$-value such that the right-hand side of Equation (5.6) is maximized. Rewriting half the expression within the maximum yields Equation (5.7). Next, we claim that for any $t^*>0$, it is possible to choose the $\ell$-value such that the numerator of Equation (5.7) reaches its minimum at $t=t^*$. Defining the relevant function $h(t)$, it suffices to check that $h$ is decreasing at $t=0$ and convex (concave up) for $t\ge 0$; both of these properties follow from the assumption $x\in(0,0.5)$.

Approximations with Double Greedy
In this subsection, we show improved approximation guarantees for DeterministicDG and RandomizedDG in Theorems 6.1 and 6.2, and then show that both of these results are tight in Theorems 6.3 and 6.4. The results of this subsection are summarized in Figure 2. First, we briefly review the behavior of the original DeterministicDG and RandomizedDG of [Buc+12] when executed on a non-negative submodular function $g$, as well as their approximation factors.
• In RandomizedDG, the first event (adding the current element $u$ to $X$) occurs with probability proportional to $a\triangleq\max(g(X\cup\{u\})-g(X), 0)$, while the second event (removing $u$ from $Y$) occurs with probability proportional to $b\triangleq\max(g(Y\setminus\{u\})-g(Y), 0)$. In the edge case where $a=b=0$, it does not matter which event occurs.
Finally, the algorithm returns $X=Y$ (the two sets coincide after all elements have been processed).
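The double greedy just reviewed can be sketched in a few lines of Python. This is a schematic rendering of RandomizedDG as described above, not the authors' code; the toy cut function is invented for illustration.

```python
import random

def randomized_double_greedy(g, ground, seed=0):
    """RandomizedDG of [Buc+12] (sketch): maintain X subset of Y; for each
    element, either keep it (add to X) or discard it (remove from Y), with
    probabilities proportional to the clamped marginal gains a and b.
    In expectation the returned set is a 0.5-approximation for USM."""
    rng = random.Random(seed)
    X, Y = set(), set(ground)
    for u in ground:
        a = max(g(X | {u}) - g(X), 0.0)  # gain of keeping u
        b = max(g(Y - {u}) - g(Y), 0.0)  # gain of discarding u
        if a + b == 0 or rng.random() < a / (a + b):  # tie: either choice is fine
            X.add(u)
        else:
            Y.remove(u)
    return X  # X == Y at this point

# Cut function of the single undirected edge {1, 2}: maximum value 1.
edge_cut = lambda S: 1.0 if len(S & {1, 2}) == 1 else 0.0
T = randomized_double_greedy(edge_cut, [1, 2])
```

On this two-element instance, whichever branch is taken for the first element, the clamped marginals force the second decision, so the returned set always cuts the edge.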
Since $X\subseteq Y$, the left-hand side of this inequality is at most $g(Y)$ by submodularity. On the other hand, the right-hand side of this inequality is greater than $g(Y)$ by assumption.
We note that a lemma similar to Theorem 6.2 was previously used by [Buc+14] for maximizing a submodular function subject to a cardinality constraint.
Proof. We claim that the modified version of Equation (6.3) given in Equation (6.5) holds for any positive value of the parameter. As in the proof of Theorem 6.1, it is easy to check that Equation (6.5) implies the conclusion, so it remains to show Equation (6.5). In the edge case $a=b=0$, Equation (6.1) implies that both marginals vanish, so the inequality reduces to $0\le 0$. Otherwise, recall that the original proof of double greedy lower bounds the left-hand side of Equation (6.5); we correspondingly lower bound twice the right-hand side, where the last step follows from the AM-GM inequality as in the original proof.
Next, we prove that DeterministicDG and RandomizedDG do no better than the bounds we just showed. Recall that [BF22, Theorem 1.4] proved that the original DeterministicDG is an $(\alpha,\beta)$-approximation algorithm whenever $\alpha\le\frac13$ and $\alpha+\beta\le 1$. To show that this analysis is tight, it suffices to check that whenever $\alpha>\frac13$ or $\alpha+\beta>1$, there are instances where DeterministicDG does not achieve the desired approximation factor. The former inequality holds by [Buc+12, Theorem II.3], while the latter holds by applying the following theorem with $r=1$:

Theorem 6.3. For any $r\ge 1$ and $\epsilon>0$, there are instances of RegularizedUSM with non-negative $\ell$ where the variant of DeterministicDG described in the proof of Theorem 6.1 does not achieve an $(\alpha,\beta)$-approximation for any $(\alpha,\beta)$ above the line connecting $(0,1)$ and a second point determined by $r$.

Additional Approximation Algorithms
In this subsection we prove Theorem 6.5. The results of this subsection and the next are summarized in Figure 3.

Theorem 6.5. An $(\alpha(\beta),\beta)$-approximation algorithm for RegularizedUSM with non-negative $\ell$ exists for any $(\alpha(\beta),\beta)$ in Table 3. In particular, the $\alpha(\beta)$ obtained for $\beta\ge 0.85$ is superior to that of Theorem 6.2 alone, and $\alpha(1)>0.385$, matching the approximation factor of Theorem 5.1.
Adding $\ell(T)$ to both sides, we conclude that an algorithm returning $T$ would achieve a $(0.385, 1)$-approximation, as desired.
For $\beta$ close to one, we can obtain better $(\alpha,\beta)$-approximations than Theorem 6.2 alone provides by combining double greedy with the following corollary of Lemma 6.6:

Corollary 6.7. An $(\alpha,\beta)$-approximation algorithm for RegularizedUSM for the case of non-positive $\ell$ may be used to return a set $T\subseteq\mathcal N$ satisfying a corresponding guarantee for the case of non-negative $\ell$.
Now we can prove Theorem 6.5 by combining Corollary 6.7 with Theorem 6.2.
Proof of Theorem 6.5. Our algorithm returns the best of the solutions returned by the following two algorithms:

1. Double greedy on $f+\ell$.

2. Corollary 6.7 applied with Theorem 5.1 for $(\alpha(\beta'),\beta')$ with $\beta'\in\{1+0.01i \mid i\in\mathbb Z \text{ and } 0\le i\le 30\}$.

As with Theorem 5.1, for a fixed $\beta$ we can lower bound the $\alpha(\beta)$ guaranteed by the algorithm above by the solution to a linear program, after choosing the set of guarantees appropriately. Let $V$ denote the expected value of the returned solution. Any point $(c_1, c_2, c_3)$ within the convex hull satisfies the corresponding inequality, and the conditions $c_2+c_3\ge\beta$ and $c_3\ge 0$ ensure that $V\ge c_1\cdot f(S)+\beta\cdot\ell(S)$.

Inapproximability
In this subsection, we prove Theorems 6.8 and 6.9.
Proof Sketch. We start by showing $(0.478, 1)$-inapproximability, which is easier. It suffices to show the following generalization of Lemma 6.6: any $(\alpha, 1)$-approximation algorithm for the RegularizedUSM instance $(f(\mathcal N\setminus\cdot), -\ell(\cdot))$ immediately implies an $(\alpha, 1)$-approximation algorithm for $(f(\cdot), \ell(\cdot))$. Letting $\mathcal N\setminus T$ be the set returned by the former approximation algorithm, the claim follows. Note that when $\ell$ is set to be non-negative, this means that any $(\alpha, 1)$-approximation algorithm for non-positive $\ell$ implies an $(\alpha, 1)$-approximation algorithm for non-negative $\ell$. Similarly, by setting $\ell$ to be non-positive, we get the implication in the opposite direction. This also means that $(\alpha, 1)$-inapproximability results for one sign of $\ell$ can be converted to corresponding inapproximability results for the other sign of $\ell$. Thus, the $(0.478, 1)$-inapproximability result for non-positive $\ell$ implies the same inapproximability result for non-negative $\ell$.
The slightly stronger result of $(0.478, 1 - \varepsilon)$-inapproximability for some $\varepsilon > 0$ follows from modifying the symmetry gap construction of Theorem 5.6. Let $(f_-, \ell_-)$ be the $f$ and $\ell$ defined in Theorem 5.6 for $\beta = 1$, and build the new instance from them. For the parameter sufficiently large, this instance shows $(\alpha, 1)$-inapproximability for some $\alpha < 0.478$. Furthermore, if we fix the parameter to be constant, then the desired result follows; specifically, we can choose $\varepsilon > 0$ such that the claimed bound holds.

Next, we provide an inapproximability result for $\alpha = 0.5$ by fixing the parameter to $2$ in the construction for Theorem 6.8.

Theorem 6.9. For any $\varepsilon > 0$, there are instances of RegularizedUSM with non-negative $\ell$ such that a $(0.5,\, 2\sqrt{2}/3 + \varepsilon)$-approximation is impossible, where $2\sqrt{2}/3 \approx 0.943$.
Now fix the parameter to $2$ and $\alpha = 0.5$, and define the instance accordingly. Then the minimum $\beta$ such that we can show $(\alpha, \beta + \varepsilon)$-inapproximability using this technique is given by the resulting maximization problem.

Non-Negative $\ell$: RegularizedCSM
The results of this section are summarized in Figure 4.
The next lemma combines Lemma 7.2 with the aided measured continuous greedy used by [BF16].

Lemma 7.5 (Guarantee of Distorted Aided Measured Continuous Greedy). Let $\ell$ be unconstrained. If we run Distorted Aided Measured Continuous Greedy, given a fractional solution $z$ and a polytope $\mathcal P$, for a total time $t_f$, where
Note that the terms depending on $\ell$ are precisely the same as those in Lemma 5.3.
Proof. As with Lemma 5.3, we only present an informal proof assuming direct oracle access to the multilinear extension and giving the algorithm in the form of a continuous-time algorithm. The techniques mentioned in [BF16] and [LYG21] can be used to formalize this at the cost of introducing the $o(1)$ term. Let $\Phi(x(t)) \triangleq e^{t-1} F(x(t)) + L(x(t))$ be the value of the distorted objective at time $t$. Then $\frac{d}{dt}\Phi(x(t))$ can be lower bounded by a sum of three terms (Equation (7.1)). Here, the first and third terms of the summation correspond directly to those of the original aided measured continuous greedy, while the second comes from observing that $x_u(t) \le 1 - e^{-t}$ for $u \in \mathcal N \setminus Z$ and $x_u(t) \le 1 - e^{-\max(t - t_s, 0)}$ for $u \in Z$.
To lower bound $\Phi(x(1))$, we can integrate Equation (7.1) from $t = 0$ to $t = 1$. As expected, the dependence on $\ell$ turns out to be the same as in Lemma 5.3.
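Concretely, assuming the distorted objective takes the standard form $\Phi(x(t)) = e^{t-1} F(x(t)) + L(x(t))$ from [Fel18] (our reading; the original definition is garbled) and $x(0) = \mathbf 0$, the integration step has the following shape:

```latex
\begin{align*}
\frac{d}{dt}\Phi(x(t))
  &= e^{t-1} F(x(t))
   + e^{t-1} \Big\langle \nabla F(x(t)), \tfrac{dx}{dt} \Big\rangle
   + \Big\langle \ell, \tfrac{dx}{dt} \Big\rangle, \\
F(x(1)) + L(x(1))
  &= \Phi(x(0)) + \int_0^1 \frac{d}{dt}\Phi(x(t))\,dt
  \;\ge\; \int_0^1 \frac{d}{dt}\Phi(x(t))\,dt,
\end{align*}
```

where the last inequality uses $\Phi(x(0)) = e^{-1} f(\varnothing) \ge 0$; hence any pointwise lower bound on $\frac{d}{dt}\Phi$ integrates directly to a lower bound on $F(x(1)) + L(x(1))$.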
Proof of Theorem 7.1.The algorithm is similar to that of Theorem 5.1.
2. Generate $z$ using the local search procedure described by [BF16, Lemma 3.1] on $(F + L, \mathcal P)$. This finds $z \in \mathcal P$ satisfying the two local-optimality inequalities. Note that unlike Theorem 5.1, there is no guessing step.
4. Round the solution from step 1 and all fractional solutions found in steps 2 and 3 to valid integral solutions using pipage rounding, which preserves the value of $F + L$ in expectation.
5. Return the solution from step 4 with the maximum value, or the empty set if none of these solutions has positive expected value. Let $M$ be the expected value of this solution.
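Pipage rounding itself is involved; as a simpler stand-in that illustrates the property steps 4 and 5 rely on (an integral output whose marginals, and hence any linear objective, are preserved in expectation), here is a marginal-preserving dependent rounding for a cardinality constraint. It is our illustration, not the paper's rounding procedure:

```python
import random

def dependent_round(x, tol=1e-9):
    """Marginal-preserving dependent rounding for a cardinality constraint --
    a simple stand-in for pipage rounding.  The output is integral, E[x_u] is
    unchanged for every u (so any linear objective L is preserved in
    expectation), and each pairwise step preserves sum(x) exactly."""
    x = dict(x)
    while True:
        frac = [u for u, v in x.items() if tol < v < 1 - tol]
        if len(frac) < 2:
            for u in frac:  # at most one fractional coordinate remains
                x[u] = 1.0 if random.random() < x[u] else 0.0
            return {u for u, v in x.items() if v > 0.5}
        i, j = frac[0], frac[1]
        d1 = min(1 - x[i], x[j])  # move x_i up by d1, x_j down by d1
        d2 = min(x[i], 1 - x[j])  # move x_i down by d2, x_j up by d2
        # these probabilities keep E[x_i] and E[x_j] unchanged
        if random.random() < d2 / (d1 + d2):
            x[i] += d1; x[j] -= d1
        else:
            x[i] -= d2; x[j] += d2
```

For instance, rounding $\{a: 0.5,\, b: 0.5,\, c: 1.0\}$ always yields a set of size two containing $c$, with $a$ and $b$ each chosen with probability $\tfrac12$.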

Inapproximability
In this subsection, we prove Theorem 7.6, which can be used to show that Theorem 7.1 is tight for $\beta \ge (e - 1)/e$.
We then discuss whether the construction used in Theorem 7.6 could potentially be extended to RegularizedUSM.
Proof. Let $\alpha \triangleq 1 - \beta + \varepsilon$. By Lemma 5.8, it suffices to construct a submodular function $f$ satisfying Equation (7.4), where $\mathcal P$ is the matroid polytope corresponding to a matroid $\mathcal M = (\mathcal N, \mathcal I)$. We use the same $f$ that [Von11] uses for proving the inapproximability of maximization over matroid bases. Specifically, we consider the Maximum Directed Cut problem on $n$ disjoint arcs $(u_i, v_i)$; that is, $f(S) \triangleq \sum_{i=1}^n [u_i \in S \text{ and } v_i \notin S]$. Its multilinear extension is $F(x) = \sum_{i=1}^n x_{u_i}(1 - x_{v_i})$. We define the independent sets of the matroid to be precisely the subsets of $\mathcal N$ that contain at most one element from $u_1, \ldots, u_n$ and at most $n - 1$ elements from $v_1, \ldots, v_n$, resulting in the corresponding matroid independence polytope. Finally, we define $\ell$ appropriately. Then the RHS of Equation (7.4) is bounded below by the value of an asymmetric solution, while the LHS of Equation (7.4) corresponds to the value of the best symmetrized solution $\bar x$, which is $\bar x_{u_i} = \frac{1}{n}$, $\bar x_{v_i} = \frac{n-1}{n}$. For $n$ sufficiently large, the claimed inapproximability follows. In fact, the bound of Theorem 7.6 is (nearly) tight for $\beta$ close to one.

As the $f$ used by Lemma 5.8 to prove Theorem 7.6 is just a directed cut function, it is natural to ask whether directed cut functions can be used with Lemma 5.8 to show improved inapproximability for RegularizedUSM. We build on Theorem 5.5 to show that doing so is impossible.
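For concreteness, the hard instance's directed cut function and its product-form multilinear extension can be written out directly (the tuple encoding of elements is ours):

```python
def f_disjoint_arcs(S, n):
    """Max-DiCut on n disjoint arcs (u_i, v_i), the hard instance from
    [Von11]: f(S) counts arcs whose tail is in S and whose head is not.
    Elements are encoded as ('u', i) and ('v', i)."""
    return sum(1 for i in range(n)
               if ('u', i) in S and ('v', i) not in S)

def F_disjoint_arcs(x_u, x_v):
    """Its multilinear extension factorizes over the arcs:
    F(x) = sum_i x_{u_i} * (1 - x_{v_i})."""
    return sum(xu * (1.0 - xv) for xu, xv in zip(x_u, x_v))
```

Consistent with the proof sketch, the feasible set $\{u_1, v_2, \ldots, v_n\}$ already attains $f = 1$, while the symmetrized point $\bar x_{u_i} = \frac1n$, $\bar x_{v_i} = \frac{n-1}{n}$ evaluates to $F(\bar x) = n \cdot \frac1n \cdot \frac1n = \frac1n$, which vanishes as $n$ grows.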
Theorem 7.8. When $\ell$ is unconstrained, setting $f$ to be a directed cut function in Lemma 5.8 cannot be used to show $(0.5, 1)$-inapproximability for RegularizedUSM.
The proof is deferred to Appendix A.3.

Unconstrained $\ell$
The results of this section are summarized in Figure 5.
Recall the guarantee of Bodek and Feldman [BF22, Theorem 1.2]. Now we show that the desired approximation factor is achieved. Disregard the factors of $o(1)$ in Lemma 7.2; they can always be accounted for later at the cost of introducing the factor of $\varepsilon$. Add the appropriate multiple of the inequality of Lemma 7.4 to the inequality from Lemma 7.2. Then divide both sides by the resulting normalizing factor and return the set out of the two candidates that gives a higher value of $f + \ell$, giving the desired result after accounting for $\varepsilon$.

Next we show that Theorem 8.1 is tight near $\beta = 1$.
• The remaining vertices correspond to Lemma 7.5.
We conclude by noting that an analogue of Corollary 7.7 (Tight RegularizedCSM Near $\beta = 1$ for $\ell \ge 0$) holds for unconstrained $\ell$, though for a smaller range of $\beta$.

Inapproximability
In this subsection we prove Theorems 8.4 and 8.5. Note that Theorem 7.6 cannot possibly apply to RegularizedUSM because Theorem 8.1 achieves $(1 - \beta + \varepsilon, \beta)$-approximations for $\beta$ close to one. Unfortunately, we are unable to prove $(1, \beta)$-inapproximability of RegularizedUSM, but we modify Theorem 5.6 to show stronger inapproximability for unconstrained $\ell$ than for non-negative or non-positive $\ell$.

A.1 Prior Work
We outline the general idea for all continuous greedy algorithms because our results build on them.

Monotone (Constrained):
It is well-known that a simple greedy algorithm achieves a $1 - 1/e$-approximation for maximizing monotone submodular functions subject to a cardinality constraint [NWF78], and that this approximation factor is optimal [NW78]. Calinescu et al. [Cal+11] introduced the continuous greedy algorithm, which achieves a $1 - 1/e$-approximation for maximizing the multilinear extension $F$ of a monotone submodular function over a solvable down-closed polytope $\mathcal P$. The idea is to continuously evolve a fractional solution $x(t)$ from "time" $t = 0$ to $t = 1$ in the direction $v(t) \in \mathcal P$ maximizing $\langle v(t), \nabla F(x(t)) \rangle$. This continuous process can be discretized into a polynomial number of steps at the cost of a negligible loss in the approximation factor. If $\mathcal P$ is the matroid polytope corresponding to a matroid $\mathcal M = (\mathcal N, \mathcal I)$, then pipage rounding may be used to round the fractional solution $x(1)$ to an independent set $S \in \mathcal I$ such that $\mathbb E[f(S)] \ge F(x(1))$ [Von11].
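The continuous process above can be sketched in a few lines. This is a toy, brute-force version (ours, not the paper's implementation): it evaluates the multilinear extension exactly by enumerating subsets, which is only feasible for tiny ground sets (the real algorithm samples), and it uses a cardinality constraint as the simplest matroid polytope. It also assumes $f$ is monotone, so all partial derivatives are non-negative.

```python
from itertools import combinations

def multilinear(f, ground, x):
    """Exact multilinear extension F(x) = E[f(R)], where R contains each u
    independently with probability x[u].  Exponential time: toy version."""
    total = 0.0
    for r in range(len(ground) + 1):
        for S in combinations(ground, r):
            p = 1.0
            for u in ground:
                p *= x[u] if u in S else 1.0 - x[u]
            total += p * f(frozenset(S))
    return total

def continuous_greedy(f, ground, k, steps=100):
    """Discretized continuous greedy [Cal+11] for a cardinality constraint of
    rank k.  For this polytope, the direction v maximizing <v, grad F(x)> is
    the indicator of the top-k gradient coordinates (f assumed monotone)."""
    x = {u: 0.0 for u in ground}
    dt = 1.0 / steps
    for _ in range(steps):
        grad = {}
        for u in ground:
            hi, lo = dict(x), dict(x)
            hi[u], lo[u] = 1.0, 0.0
            # dF/dx_u = F(x with x_u=1) - F(x with x_u=0) by multilinearity
            grad[u] = multilinear(f, ground, hi) - multilinear(f, ground, lo)
        for u in sorted(ground, key=lambda w: -grad[w])[:k]:
            x[u] = min(1.0, x[u] + dt)
    return x
```

On a modular $f$ the gradient is constant, so the $k$ heaviest coordinates are driven to 1 and rounding is unnecessary.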
Non-monotone (Unconstrained): Feige et al. [FMV11] showed that no polynomial-time algorithm may provide a $(0.5 + \varepsilon)$-approximation for maximizing a non-monotone submodular function. Buchbinder et al. [Buc+12] later discovered a randomized double greedy algorithm that achieves a 0.5-approximation in expectation. The idea is to iterate through the elements of the ground set in arbitrary order and, for each one, choose whether or not to include it in the returned set with a probability depending on its marginal gains.
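The double greedy loop can be sketched directly (our own minimal version; `f` is any non-negative submodular set function given as a value oracle):

```python
import random

def double_greedy(f, ground):
    """Randomized double greedy of Buchbinder et al. [Buc+12]: a
    0.5-approximation in expectation for unconstrained non-negative
    submodular maximization.  `f` takes a frozenset."""
    X, Y = set(), set(ground)  # invariant: X is a subset of Y; they meet at the end
    for u in ground:
        a = f(frozenset(X | {u})) - f(frozenset(X))  # gain of adding u to X
        b = f(frozenset(Y - {u})) - f(frozenset(Y))  # gain of dropping u from Y
        a, b = max(a, 0.0), max(b, 0.0)
        # include u with probability a / (a + b); include if both gains are 0
        if a + b == 0.0 or random.random() < a / (a + b):
            X.add(u)
        else:
            Y.remove(u)
    return X
```

Randomness only matters when both marginals are strictly positive; on a single-arc directed cut instance the run is deterministic and optimal.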
Non-monotone (Constrained): Feldman et al. [FNS11] showed a $1/e > 0.367$-approximation for maximizing the multilinear extension of a non-monotone submodular function over a solvable down-closed polytope using a measured continuous greedy. The idea is to continuously evolve a fractional solution $x(t)$ with a step damped coordinate-wise by $1 - x_u(t)$, which limits how quickly any coordinate approaches one. As with the original continuous greedy, the fractional solution $x(1)$ can be rounded to an integral solution when $\mathcal P$ is a matroid polytope. Additionally, when $f$ is monotone, measured continuous greedy provides the same guarantee as [Cal+11].
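The only change from the original continuous greedy step is the coordinate-wise damping factor; a minimal sketch (ours) of a single discretized step:

```python
def measured_step(x, v, dt):
    """One discretized step of measured continuous greedy [FNS11].  Identical
    to the plain continuous greedy step except for the (1 - x_u) damping
    factor, which is what keeps every coordinate bounded by 1 - e^{-t}."""
    return {u: x[u] + dt * v[u] * (1.0 - x[u]) for u in x}
```

Starting from $x = 0$ and pushing with $v = \mathbf 1$ for total time 1 approaches the coordinate cap $1 - e^{-1} \approx 0.632$ rather than 1.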
The approximation factor was later improved by Buchbinder and Feldman [BF16] to 0.385. The idea is to first run local search on the multilinear extension to find a "locally optimal" fractional solution $z \in \mathcal P$, round $z$ to a set $Z$, and then run a measured continuous greedy "aided" by $Z$. Either $Z$ will be a 0.385-approximation in expectation, or the set returned by the aided measured continuous greedy will be. The aided measured continuous greedy consists of running measured continuous greedy from $t = 0$ to $t = t_s$ on $\mathcal N \setminus Z$, followed by running measured continuous greedy from $t = t_s$ to $t = 1$ on the entire ground set $\mathcal N$, where $t_s = 0.372$. The optimal value of $t_s$ was determined by solving a non-convex optimization problem.
On the inapproximability side, Gharan and Vondrak [GV10] showed that no polynomial-time algorithm may achieve a 0.478-approximation for maximizing a non-negative submodular function subject to a matroid independence constraint, or a 0.491-approximation for maximizing a non-negative submodular function subject to a cardinality constraint, using the symmetry gap framework of Vondrak [Von11]. The symmetry gap framework may also be used to succinctly reprove the optimality of the $1 - 1/e$ and $\frac{1}{2}$ approximation factors for monotone and non-monotone maximization, respectively, which were previously proved by ad hoc methods. The idea is that given a maximization problem with a symmetry gap of $\gamma \in (0, 1)$, we can construct a family of pairs of functions that require exponentially many value oracle queries to distinguish but whose maxima differ by a factor of $\gamma$. This in turn shows the inapproximability of a $(\gamma + \varepsilon)$-approximation.
Feldman [Fel18] later combined continuous greedy with the notion of a distorted objective that initially places higher weight on the linear term and increases the weight on the submodular term over time. This distorted continuous greedy achieves the same approximation factor as [SVW17] without the need for the guessing step. The idea is to continuously evolve a fractional solution $x(t)$ from $t = 0$ to $t = 1$ while tracking the distorted objective $\Phi(t) \triangleq e^{t-1} F(x(t)) + L(x(t))$. Using the symmetry gap technique [Von11], Bodek and Feldman [BF22, Theorem 1.1] proved that no $(1 - e^{-\beta} + \varepsilon, \beta)$-approximation algorithm for RegularizedUSM exists for any $\beta \ge 0$, even when $\ell$ is constrained to be non-positive (see Figure 1 for an illustration). This matches the guarantee of distorted continuous greedy, which achieves a $(1 - e^{-\beta} - \varepsilon, \beta)$-approximation for RegularizedCSM whenever $\beta \in [0, 1]$. When $\ell$ is constrained to be non-positive, Lu et al. [LYG21] achieve an analogous approximation for RegularizedCSM for any $\beta \ge 0$ using distorted measured continuous greedy (described next). For the remainder of this section, $f$ is not necessarily monotone. Distorted measured continuous greedy evolves $x(t)$ with the measured update while tracking $\Phi(t) = e^{t-1} F(x(t)) + L(x(t))$ as in distorted continuous greedy above. Setting the stopping time to $\beta$ gives the desired approximation factor. Note that when $\beta = 0$, the guarantee of distorted measured continuous greedy becomes the same as that of measured continuous greedy, and, as noted in the previous paragraph, its guarantee becomes the same as Feldman's distorted continuous greedy when $f$ is monotone.

A.3 Proof of Theorem 7.8
Let $x^*$ be a solution attaining the optimal value for Equation (A.1), which can be found using any LP solver (e.g., the ellipsoid method). Before defining $\hat F$, we examine the symmetrization operator $x \mapsto \bar x$. Recall that symmetrization is defined with respect to a permutation group $\mathcal G$. Partition the ground set into subsets $\mathcal N = \mathcal N_1 \uplus \mathcal N_2 \uplus \cdots \uplus \mathcal N_m$, where $\uplus$ denotes the disjoint union of two sets, and define the function $g: \mathcal N \to \{1, 2, \ldots, m\}$ to be the mapping from every element of the ground set to the index of the subset that contains it. This mapping satisfies the property that $g(a) = g(b)$ if and only if there exists a permutation $\sigma \in \mathcal G$ such that $\sigma(a) = b$. Observe that $\bar x_a = \operatorname{avg}_{g(a)}(x)$; that is, the value at $a$ in $\bar x$ is just the average of the values in $x$ of all elements in the same subset as $a$. Next, we define $\hat F$ in terms of $\operatorname{avg}_1(x), \operatorname{avg}_2(x), \ldots, \operatorname{avg}_m(x)$, which guarantees that property 1 is satisfied. For all $1 \le i, j \le m$, define $w_{ij} \ge 0$ as the sum of the weights of the edges directed from $\mathcal N_i$ to $\mathcal N_j$ (where $i$ can equal $j$). Then $\hat F(x) \triangleq \sum_{i=1}^m \sum_{j=1}^m w_{ij} \min(\operatorname{avg}_i(x), 1 - \operatorname{avg}_j(x))$.
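The definition of $\hat F$ in terms of part averages can be computed directly; a minimal sketch (our own encoding: `part[i]` lists the elements of $\mathcal N_i$, `w[i][j]` is the total edge weight from $\mathcal N_i$ to $\mathcal N_j$):

```python
def F_hat(x, part, w):
    """F_hat(x) = sum_{i,j} w[i][j] * min(avg_i(x), 1 - avg_j(x)),
    where avg_i(x) is the mean of x over the elements of part i.  By
    construction it depends on x only through the part averages, which is
    exactly property 1 (invariance under the symmetry group)."""
    avg = [sum(x[u] for u in p) / len(p) for p in part]
    m = len(part)
    return sum(w[i][j] * min(avg[i], 1.0 - avg[j])
               for i in range(m) for j in range(m))
```

With singleton parts and a single unit-weight edge, $\hat F$ reduces to $\min(x_a, 1 - x_b)$, an upper bound on the probability that the edge is cut.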
It remains to show that properties 2 and 3 are satisfied.
Property 2: Since every subset $\mathcal N_i$ is symmetric, the proportion of edges from $\mathcal N_i$ to $\mathcal N_j$ that are cut by $S$ is bounded above by the proportion of elements of $\mathcal N_i$ contained in $S$ (which is precisely $\operatorname{avg}_i(S)$) as well as by the proportion of elements of $\mathcal N_j$ not contained in $S$ (which is precisely $1 - \operatorname{avg}_j(S)$). This implies that $\hat F(S) \ge f(S)$ for all $S$.

Figure 1:
Figure 1: Graphical presentation of results for RegularizedUSM with a non-positive linear function (Section 5). Following the convention of [BF22], the axes represent the coefficients $\beta$ and $\alpha$ of $\ell$ and $f$, respectively. We use blue for approximation algorithms and red for inapproximability results, and the shaded area represents the gap between the best known approximation algorithms and inapproximability results. Observe that Theorem 5.6 unifies the two inapproximability theorems from [BF22]. The $(0.5, 2\ln 2 - \varepsilon)$-inapproximability is due to Theorem 5.9. For RegularizedCSM, the results are the same for $\beta \le 1$.

Figure 5:
Figure 5: Graphical presentation of results with an unconstrained linear function (Section 8).

Corollary 7.7 (Tight RegularizedCSM Near $\beta = 1$ for $\ell \ge 0$). For all $1 - 1/e \le \beta < 1$, there is a $(1 - \beta - \varepsilon, \beta)$-approximation algorithm for RegularizedCSM with non-negative $\ell$, nearly matching the bound of Theorem 7.6.

Proof. The better of Corollary 7.3 and Lemma 7.4 is an $(\alpha, \beta)$-approximation for all $(\alpha, \beta)$ lying above the segment connecting their two guarantees.

Bodek and Feldman obtained their guarantee using a local search technique; Theorem 8.1 improves on this approximation factor for all $\beta \in (0, 1)$ and also provides guarantees for RegularizedCSM.

Proof. Our algorithm simply returns the better of the solutions returned by the following two algorithms:
1. The set returned by running Lemma 7.2 (Distorted Measured Continuous Greedy) for time $t_s = \beta$
2. The set returned by Lemma 7.4 (Trivial Approximation)

Proof.
The algorithm is Theorem 7.1 augmented with the guessing step from Theorem 5.1. That is, we start by guessing the value of $\ell^-(OPT)$ to within a factor of $1 + \varepsilon$ and replacing $\mathcal P$ with $\mathcal P \cap \{x : L^-(x) \ge (1 + \varepsilon)\,\ell^-(OPT)\}$ as in Theorem 5.1, and then run Theorem 7.1.
Here $\Phi(t) \triangleq e^{t-1} F(x(t)) + L(x(t))$ is the distorted objective at time $t$. For $\beta = 1$, this gives a $(1 - 1/e - \varepsilon, 1)$-approximation, eliminating the $\varepsilon$ in the linear term that appears in the bound of [SVW17] due to the guessing step.

Proof of Theorem 5.4.
Consider the following linear program:

Table 5: Gaps Between Current Approximability and Inapproximability

Table 5 summarizes some of the best known inapproximability results and their corresponding approximation guarantees. The gaps between approximability and inapproximability are particularly large in the second and fourth rows, corresponding to RegularizedUSM for $\ell \le 0$ and unconstrained $\ell$, respectively. All inapproximability results use the symmetry gap technique; are there any other inapproximability techniques potentially worth considering?

[SVW17] Maxim Sviridenko, Jan Vondrák, and Justin Ward. "Optimal approximation for submodular and supermodular optimization with bounded curvature". In: Mathematics of Operations Research 42.4 (2017), pp. 1197-1218.