Computing Majority by Constant Depth Majority Circuits with Low Fan-in Gates


Abstract

We study the following computational problem: for which values of k can the majority of n bits, \(\mathrm{MAJ}_n\), be computed by a depth-two formula in which every gate computes a majority function of at most k bits? The corresponding computational model is denoted by \(\mathrm{MAJ}_k \circ \mathrm{MAJ}_k\). We observe that the minimum value of k for which there exists a \(\mathrm{MAJ}_k \circ \mathrm{MAJ}_k\) circuit that has high correlation with the majority of n bits is equal to \({\Theta}(n^{1/2})\). We then show that for a randomized \(\mathrm{MAJ}_k \circ \mathrm{MAJ}_k\) circuit computing the majority of n input bits with high probability for every input, the minimum value of k is equal to \(n^{2/3 + o(1)}\). We also show a worst-case lower bound: if a \(\mathrm{MAJ}_k \circ \mathrm{MAJ}_k\) circuit computes the majority of n bits correctly on all inputs, then \(k \geq n^{13/19 + o(1)}\).
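
To make the model concrete, here is a minimal sketch (not the paper's construction) of how a depth-two MAJ_k ∘ MAJ_k circuit, specified explicitly by its wiring, is evaluated. The tie-breaking rule for gates with an even number of inputs is an assumption made only for this example.

```python
# A minimal sketch (not the paper's construction): evaluating a depth-two
# MAJ_k o MAJ_k circuit described explicitly by its wiring.  Each inner gate
# is given by the list of (at most k) input positions it reads, and the outer
# gate takes the majority of the inner gates' outputs.  Ties are resolved
# toward 1 here; this tie convention is an assumption of the example.
from typing import List, Sequence

def maj(bits: Sequence[int]) -> int:
    """Majority of a sequence of 0/1 values (ties go to 1)."""
    return 1 if 2 * sum(bits) >= len(bits) else 0

def eval_maj_maj(wiring: List[List[int]], x: Sequence[int]) -> int:
    """Evaluate a MAJ_k o MAJ_k circuit: wiring[i] lists the inputs of inner gate i."""
    inner_outputs = [maj([x[j] for j in gate]) for gate in wiring]
    return maj(inner_outputs)

# Tiny example: n = 9, k = 3, inner gates read disjoint blocks of size 3.
x = [1, 0, 1, 0, 0, 1, 1, 1, 0]
wiring = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
print(eval_maj_maj(wiring, x), maj(x))  # block-wise majority vs. the true MAJ_9
```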


References

  1. Allender, E., Koucký, M.: Amplifying lower bounds by means of self-reducibility. J. ACM 57(3), 14:1–14:36 (2010)

  2. Amano, K., Yoshida, M.: Depth two (n − 2)-majority circuits for n-majority. Preprint (2017)

  3. Bruno, B.: Personal communication (2017)

  4. Chen, X., Oliveira, I.C., Servedio, R.A.: Addition is exponentially harder than counting for shallow monotone circuits. Electronic Colloquium on Computational Complexity (ECCC) 22, 123 (2015)

  5. Dubhashi, D.P., Panconesi, A.: Concentration of Measure for the Analysis of Randomized Algorithms. Cambridge University Press, Cambridge (2009)

  6. Engels, C., Garg, M., Makino, K., Rao, A.: On expressing majority as a majority of majorities. Electronic Colloquium on Computational Complexity (ECCC) 24, 174 (2017)

  7. Feller, W.: An Introduction to Probability Theory and Its Applications, vol. 1. Wiley, New York (1968)

  8. Goldmann, M., Håstad, J., Razborov, A.A.: Majority gates vs. general weighted threshold gates. Comput. Complex. 2, 277–300 (1992)

  9. Goldreich, O.: Valiant’s polynomial-size monotone formula for majority, 2001. Available at http://www.wisdom.weizmann.ac.il/oded/PDF/mono-maj.pdf

  10. Hofmeister, T.: The power of negative thinking in constructing threshold circuits for addition. In: Proceedings of the Seventh Annual Structure in Complexity Theory Conference, Boston, Massachusetts, USA, June 22-25, 1992, pp 20–26 (1992)

  11. Hush, D., Scovel, C.: Concentration of the hypergeometric distribution. Stat. Probab. Lett. 75(2), 127–132 (2005)

  12. Jukna, S.: Extremal Combinatorics - With Applications in Computer Science. Texts in Theoretical Computer Science. An EATCS Series. Springer, Berlin (2011)

  13. Jukna, S.: Boolean Function Complexity - Advances and Frontiers, volume 27 of Algorithms and Combinatorics. Springer, Berlin (2012)

  14. Jukna, S., Razborov, A.A., Savický, P., Wegener, I.: On P versus NP ∩ co-NP for decision trees and read-once branching programs. Comput. Complex. 8(4), 357–370 (1999)

  15. Kamp, J., Zuckerman, D.: Deterministic extractors for bit-fixing sources and exposure-resilient cryptography. SIAM J. Comput. 36(5), 1231–1247 (2007)

  16. Kane, D.M., Williams, R.: Super-linear gate and super-quadratic wire lower bounds for depth-two and depth-three threshold circuits. In: Wichs, D., Mansour, Y. (eds.) Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2016, Cambridge, MA, USA, June 18-21, 2016, pp 633-643 (2016)

  17. Kombarov, Y.A.: On depth two circuits for the majority function. In: Proceedings of Problems in theoretical cybernetics, pp 129–132. Max Press (2017)

  18. Kulikov, A.S., Podolskii, V.V.: Computing majority by constant depth majority circuits with low fan-in gates. In: Vollmer, H., Vallée, B. (eds.) 34th Symposium on Theoretical Aspects of Computer Science, STACS 2017, March 8-11, 2017, Hannover, Germany, volume 66 of LIPIcs. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2017)

  19. Magniez, F., Nayak, A., Santha, M., Sherman, J., Tardos, G., Xiao, D.: Improved bounds for the randomized decision tree complexity of recursive majority. Random Struct. Algorithms 48(3), 612–638 (2016)

  20. Minsky, M., Papert, S.: Perceptrons - An Introduction to Computational Geometry. MIT Press, Cambridge (1987)

  21. Mossel, E., O’Donnell, R.: On the noise sensitivity of monotone functions. Random Struct. Algorithms 23(3), 333–350 (2003)

  22. O’Donnell, R.: Analysis of Boolean Functions. Cambridge University Press, Cambridge (2014)

  23. Posobin, G.: Computing majority with low-fan-in majority queries. CoRR, arXiv:1711.10176 (2017)

  24. Sedgewick, R., Flajolet, P.: An Introduction to the Analysis of Algorithms. Addison-Wesley-Longman, Reading (1996)

  25. Serfling, R.J.: Probability inequalities for the sum in sampling without replacement. Ann. Stat. 2(1), 39–48 (1974)

  26. Siu, K.-Y., Bruck, J.: On the power of threshold circuits with small weights. SIAM J. Discrete Math. 4(3), 423–435 (1991)

  27. Valiant, L.G.: Short monotone formulae for the majority function. J. Algorithms 5(3), 363–366 (1984)


Acknowledgments

We would like to thank the participants of the Low-Depth Complexity Workshop (St. Petersburg, Russia, May 21–25, 2016) for many helpful discussions.

Author information

Corresponding author

Correspondence to Alexander S. Kulikov.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the Topical Collection on Special Issue on Theoretical Aspects of Computer Science (STACS 2017)

A preliminary version of this paper [18] appeared in the proceedings of the 34th International Symposium on Theoretical Aspects of Computer Science (STACS 2017). The research presented in Section 4 was supported by Russian Science Foundation (project 16-11-10123). The research presented in Section 5 was partially supported by grant MK-5379.2018.1 and by the Russian Academic Excellence Project ‘5-100’.

Appendix: Proofs of Probabilistic Lemmas

Proof of Lemma 3

To bound the probability we will use Stirling's approximation; the following simple form will be enough:

$$n! \sim \left( \frac{n}{e}\right)^{n} \sqrt{n}. $$

It is well known (see e.g. [7]) that the probability in (2) is maximal for l = pm, so it is enough to upper bound the probability for this l.

We have

$$\operatorname{Prob}\left[Y = pm\right] = \binom{m}{pm} p^{pm}(1-p)^{(1-p)m} = \frac{m!}{(pm)!\,((1-p)m)!}\, p^{pm}(1-p)^{(1-p)m} . $$

Let us first consider the binomial coefficient separately:

$$\begin{array}{@{}rcl@{}} \frac{m!}{(pm)!\,((1-p)m)!} &\sim& \frac{\left( \frac{m}{e}\right)^{m} \sqrt{m}}{\left( \frac{pm}{e}\right)^{pm} \sqrt{pm}\left( \frac{(1-p)m}{e}\right)^{(1-p)m} \sqrt{(1-p)m}}\\ & =& \frac{1}{p^{pm}(1-p)^{(1-p)m}} \cdot\frac{1}{\sqrt{p(1-p)}\sqrt{m}}. \end{array} $$

Thus,

$$\operatorname{Prob}\left[Y = pm\right] = O\left( \frac{1}{\sqrt{m}}\right) . $$
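
As a quick numerical sanity check of this bound (an illustration, not part of the proof), one can evaluate the peak probability exactly and watch Prob[Y = pm]·√m stay bounded; the choice p = 1/3 and the values of m below are arbitrary.

```python
# Sanity check for Lemma 3 (illustration only): for Y ~ Bin(m, p), the point
# probability at l = pm is O(1/sqrt(m)), so Prob[Y = pm] * sqrt(m) should stay
# bounded as m grows.  Exact arithmetic via fractions avoids overflow.
from fractions import Fraction
from math import comb, sqrt

def binom_pmf(m: int, p: Fraction, l: int) -> Fraction:
    return comb(m, l) * p**l * (1 - p)**(m - l)

p = Fraction(1, 3)
for m in (300, 3000, 30000):
    l = round(p * m)                      # the (near-)modal value l = pm
    peak = binom_pmf(m, p, l)
    print(m, float(peak) * sqrt(m))       # stays roughly constant
```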

Proof of Lemma 4

By Lemma 3 there is a constant α such that

$$ \operatorname{Prob}\left[\left|Y-\frac m2\right| < \alpha \sqrt{m}\right] < \frac{1}{100}. $$
(15)

Denote

$$\begin{array}{@{}rcl@{}} A &=& \operatorname{Prob}\left[Y \leq \frac{m}{2} - \alpha \sqrt{m} \right],\\ B &=& \operatorname{Prob}\left[\frac{m}{2}- \alpha \sqrt{m} < Y < \frac{m}{2} \right],\\ C &=& \operatorname{Prob}\left[\frac{m}{2} < Y < \frac{m}{2} + \alpha \sqrt{m}\right],\\ D &=&\operatorname{Prob}\left[Y \geq \frac{m}{2} + \alpha \sqrt{m} \right]. \end{array} $$

Clearly,

$$A+B+C+D = 1$$

and (15) can be rewritten as

$$B+C < \frac{1}{100}. $$

Let us denote X = B + C. Then A + D = 1 − X and \(1-X \geq \frac {99}{100}\).

To prove the lemma we need to show that \(C+D \geq \frac 12 + {\Omega }\left (\varepsilon \sqrt {m}\right )\).

Consider \(k = \frac m2 + t\) for some t > 0. Note that \(\operatorname{Prob}\left[Y = k\right] \geq \operatorname{Prob}\left[Y = m-k\right]\), so

$$C \geq B. $$

Now consider the case of \(t \geq \alpha \sqrt {m}\). We have

$$\operatorname{Prob}\left[Y = k\right] = \binom{m}{k}\left( \frac 12 + \varepsilon\right)^{k} \left( \frac 12 - \varepsilon\right)^{m-k} . $$

On the other hand

$$\operatorname{Prob}\left[Y = m-k\right] = \binom{m}{k}\left( \frac 12 + \varepsilon\right)^{m-k} \left( \frac 12 - \varepsilon\right)^{k}. $$

Then we have

$$\begin{array}{@{}rcl@{}} \frac{\operatorname{Prob}\left[Y = k\right]}{\operatorname{Prob}\left[Y = m-k\right]} &=& \frac{(\frac 12 + \varepsilon)^{2k-m}}{(\frac 12 - \varepsilon)^{2k-m}} =\left( \frac{\frac 12 + \varepsilon}{\frac 12 - \varepsilon} \right)^{2t}\\ &\geq&\left( 1 + \frac{2\varepsilon}{\frac 12 - \varepsilon} \right)^{2 \alpha \sqrt{m}}\geq 1 + c \varepsilon \sqrt{m}, \end{array} $$

for some positive constant c = c(α). The last inequality in this sequence is Bernoulli’s inequality.

Since this inequality holds for all \(t \geq \alpha \sqrt {m}\), we have

$$D \geq A \cdot (1 + c \varepsilon \sqrt{m}) . $$

Thus we have

$$1- X = A+D \leq \frac{D}{1+c \varepsilon \sqrt{m}} + D. $$

We can rewrite

$$D \geq \frac{1+c \varepsilon \sqrt{m}}{2+c \varepsilon \sqrt{m}} (1-X) = \frac 12 (1-X) + \frac{c \varepsilon \sqrt{m}}{4 + 2c \varepsilon \sqrt{m}} (1-X). $$

Since \(\varepsilon \sqrt {m}\) is positive and upper-bounded by 1, we have

$$\frac{c \varepsilon \sqrt{m}}{4 + 2c \varepsilon \sqrt{m}} = {\Theta}(\varepsilon \sqrt{m}). $$

Since C ≥ B, we have \(C \geq \frac X2\), and it follows that

$$C+D \geq \frac{X}{2} + \frac{1}{2}(1-X) + {\Omega}(\varepsilon \sqrt{m}) (1-X) = \frac{1}{2} + {\Omega}(\varepsilon \sqrt{m}) (1-X) \geq \frac{1}{2} + {\Omega}\left( \varepsilon \sqrt{m}\right), $$

where in the last inequality we use that \(1-X \geq \frac {99}{100}\). □
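
The following numerical sketch (an illustration, not part of the proof) is consistent with the bound just established: for Y ∼ Bin(m, 1/2 + ε) with ε√m held fixed, the advantage Prob[Y > m/2] − 1/2 remains proportional to ε√m. The parameter values below are arbitrary.

```python
# Sanity check for Lemma 4 (illustration only): for Y ~ Bin(m, 1/2 + eps),
# Prob[Y > m/2] - 1/2 should be Omega(eps * sqrt(m)).  Here eps * sqrt(m) is
# held fixed at 1/2, so the last printed ratio should stay bounded away from 0.
from math import lgamma, log, exp, sqrt

def log_binom_pmf(m: int, p: float, l: int) -> float:
    return (lgamma(m + 1) - lgamma(l + 1) - lgamma(m - l + 1)
            + l * log(p) + (m - l) * log(1 - p))

def prob_above_half(m: int, p: float) -> float:
    return sum(exp(log_binom_pmf(m, p, l)) for l in range(m // 2 + 1, m + 1))

for m in (400, 1600, 6400):
    eps = 0.5 / sqrt(m)
    advantage = prob_above_half(m, 0.5 + eps) - 0.5
    print(m, advantage, advantage / (eps * sqrt(m)))
```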

Proof of Lemma 5

One simple way to prove this lemma is by directly comparing the two probabilities.

In the setting with a random S′ the probability that |T ∩ S′| = l is

$$\operatorname{Prob}\left[|T\cap S^{\prime}|=l\right] = \frac{ \binom{k}{l}\binom{m-k}{t-l}}{\binom{m}{t}}. $$

In the setting with a random T the probability of the same event is

$$\operatorname{Prob}\left[|T\cap S^{\prime}|=l\right] = \frac{ \binom{t}{l}\binom{m-t}{k-l}}{\binom{m}{k}}. $$

It is straightforward to see that these probabilities are equal. □
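
Since this is a purely combinatorial identity, it is easy to confirm numerically; the sketch below (with arbitrary small parameters) checks the two displayed expressions against each other in exact arithmetic.

```python
# Check of the identity behind Lemma 5 (illustration only): the two displayed
# expressions for Prob[|T ∩ S'| = l] agree for every admissible value of l.
from fractions import Fraction
from math import comb

def expr_t(m, k, t, l):   # the expression with denominator C(m, t)
    return Fraction(comb(k, l) * comb(m - k, t - l), comb(m, t))

def expr_k(m, k, t, l):   # the expression with denominator C(m, k)
    return Fraction(comb(t, l) * comb(m - t, k - l), comb(m, k))

m, k, t = 20, 7, 9
assert all(expr_t(m, k, t, l) == expr_k(m, k, t, l) for l in range(min(k, t) + 1))
print("the two expressions coincide for all l =", list(range(min(k, t) + 1)))
```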

Proof of Lemma 6

The probability under consideration is equal to

$$\operatorname{Prob}\left[|T\cap S^{\prime}|=l\right] = \frac{ \binom{k}{l}\binom{m-k}{t-l}}{\binom{m}{t}}. $$

It is convenient to introduce the notation \(c = \frac tm\). Note that then ε < c < 1 − ε. The probability above can then be rewritten as

$$\operatorname{Prob}\left[|T\cap S^{\prime}|=l\right] = \frac{ \binom{k}{l}\binom{m-k}{cm-l}}{\binom{m}{cm}}. $$

It is not hard to see that the maximum is achieved for l equal to ck (the probability is increasing for l < ck as a function of l and is decreasing for l > ck).

So we need to upper bound

$$ \frac{\binom{k}{ck}\binom{m-k}{c(m-k)}}{\binom{m}{cm}} = \frac{\frac{k!}{(ck)!\,((1-c)k)!}\cdot\frac{(m-k)!}{(c(m-k))!\,((1-c)(m-k))!}}{\frac{m!}{(cm)!\,((1-c)m)!}}. $$
(16)

To bound this probability we will again use Stirling's approximation:

$$n! \sim \left( \frac{n}{e}\right)^{n} \sqrt{n}. $$

As in the proof of Lemma 3, let us first consider the binomial coefficients separately:

$$\begin{array}{@{}rcl@{}} \frac{m!}{(cm)!\,((1-c)m)!} &\sim& \frac{\left( \frac{m}{e}\right)^{m} \sqrt{m}}{\left( \frac{cm}{e}\right)^{cm} \sqrt{cm}\left( \frac{(1-c)m}{e}\right)^{(1-c)m} \sqrt{(1-c)m}}\\ & =& \frac{1}{(c^{c}(1-c)^{1-c})^{m}} \cdot\frac{1}{\sqrt{c(1-c)}\sqrt{m}} \\ & =& d^{m} \cdot \frac{1}{\sqrt{c(1-c)}\sqrt{m}}, \end{array} $$

where by d we denote \(\frac {1}{c^{c}(1-c)^{1-c}}\).

Now for (16) we have

$$\frac{d^{k} \cdot \frac{1}{\sqrt{c(1-c)}\sqrt{k}}\cdot d^{m-k} \cdot \frac{1}{\sqrt{c(1-c)}\sqrt{m-k}}}{d^{m} \cdot \frac{1}{\sqrt{c(1-c)}\sqrt{m}}} = \frac{\sqrt{m}}{\sqrt{c(1-c)}\sqrt{k}\sqrt{m-k}} \sim \frac{1}{\sqrt{k}}, $$

where the last equivalence follows since \(\sqrt {m-k} = {\Theta }(\sqrt {m})\).

So, we have shown the first part of the lemma and the second part for l = ck. To ensure the second part for \(|l - ck| < d\) we can compare the probabilities for l and l + 1:

$$\begin{array}{@{}rcl@{}} \frac{ \binom{k}{l}\binom{m-k}{cm-l}}{\binom{m}{cm}} = \frac{ \binom{k}{l + 1}\binom{m-k}{cm-l-1}}{\binom{m}{cm}}\cdot \frac{l + 1}{k-l}\cdot \frac{m-k-(cm-l)+ 1}{cm-l}. \end{array} $$

Note that if \(|l - ck| < d\), the probabilities differ by a constant factor. Thus the asymptotics of the probability are the same for all l satisfying \(|l - ck| < d\). This finishes the proof of the lemma. □
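
A numerical illustration of the bound just proved (not part of the proof; the choices c = 2/5 and k = m/4 are arbitrary): the point probability at l = ck, multiplied by √k, stays roughly constant as m grows.

```python
# Sanity check for Lemma 6 (illustration only): the hypergeometric point
# probability at l = ck is Theta(1/sqrt(k)), so multiplying it by sqrt(k)
# should give values that stay roughly constant as m grows.
from fractions import Fraction
from math import comb, sqrt

def hyp_pmf(m, k, t, l):
    return Fraction(comb(k, l) * comb(m - k, t - l), comb(m, t))

c = Fraction(2, 5)
for m in (200, 800, 3200):
    k, t = m // 4, int(c * m)          # c = t/m is bounded away from 0 and 1
    l = round(c * k)
    print(k, float(hyp_pmf(m, k, t, l)) * sqrt(k))
```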

Proof of Lemma 7

We introduce the same notation as in the previous proof: \(c = \frac tm\). The probability is bounded by

$$\begin{array}{@{}rcl@{}} \frac{ {\sum}_{r \in A}\binom{m-k}{cm-|r|}}{\binom{m}{cm}} &=& \sum\limits_{r \in A}\left( \frac{1}{\binom{k}{|r|}}\frac{ \binom{k}{|r|}\binom{m-k}{cm-|r|}}{\binom{m}{cm}} \right)\\ &\leq& \max_{|r|} \left( \frac{ \binom{k}{|r|}\binom{m-k}{cm-|r|}}{\binom{m}{cm}} \right) \sum\limits_{r \in A}\frac{1}{\binom{k}{|r|}} \leq \max_{|r|} \left( \frac{ \binom{k}{|r|}\binom{m-k}{cm-|r|}}{\binom{m}{cm}} \right), \end{array} $$

where the last inequality is the LYM inequality (see, e.g., [12], Theorem 8.6).

Now we can bound the probability by the same argument as in Lemma 6. □
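
The following small sketch (an illustration only) demonstrates the LYM inequality used in the last step: for an antichain A of subsets of a k-element set, the sum of 1/C(k, |r|) over r ∈ A never exceeds 1.

```python
# Illustration of the LYM inequality used above (sketch only): for an
# antichain A of subsets of a k-element set, sum over r in A of 1/C(k, |r|)
# is at most 1.  The antichain here is built greedily from random subsets.
from fractions import Fraction
from math import comb
from random import randrange, sample, seed

seed(0)
k = 8
antichain = []
for _ in range(500):
    r = frozenset(sample(range(k), randrange(1, k)))
    if all(not (r <= s or s <= r) for s in antichain):
        antichain.append(r)

lym_sum = sum(Fraction(1, comb(k, len(r))) for r in antichain)
print(len(antichain), lym_sum, lym_sum <= 1)
```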

Proof of Lemma 9

The lemma can be shown by a simple direct calculation:

$$\begin{array}{@{}rcl@{}} \operatorname{Prob}\{|T \cap S^{\prime}| \ge l\} &\le& \frac{\binom{k}{l}\binom{m-l}{t-l}}{\binom{m}{t}}\\ &\le& k^{l} \cdot \frac{t}{m} \cdot \frac{t-1}{m-1} \cdot \dotsm \cdot \frac{t-l + 1}{m-l + 1} \le k^{l} \cdot \left( \frac{t}{m}\right)^{l} = \left( \frac{kt}{m}\right)^{l}, \end{array} $$

where in the first inequality, in the numerator, we pick l elements from the set of size k and then pick the remaining t − l elements from all remaining m − l elements, and in the second inequality we use the simple bound \(\binom {k}{l}\leq k^{l}\). □
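
A numerical illustration of the resulting bound (not part of the proof; the parameters are arbitrary): the exact upper tail of the overlap compared with (kt/m)^l.

```python
# Sanity check for Lemma 9 (illustration only): the upper tail of the overlap
# |T ∩ S'| is bounded by (kt/m)^l.  Both sides are computed exactly below.
from fractions import Fraction
from math import comb

def overlap_tail(m, k, t, l):
    """Exact Prob[|T ∩ S'| >= l] for the hypergeometric overlap of a
    t-subset and a k-subset of an m-element set."""
    num = sum(comb(k, j) * comb(m - k, t - j) for j in range(l, min(k, t) + 1))
    return Fraction(num, comb(m, t))

m, k, t = 1000, 20, 30
for l in range(1, 6):
    bound = Fraction(k * t, m) ** l
    print(l, float(overlap_tail(m, k, t, l)), float(bound))
```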

Proof of Lemma 10

By Lemma 6, there is a constant α such that

$$\operatorname{Prob}\left[\left|Y-\frac k2\right| < \alpha \sqrt{k}\right] < \frac{1}{100}. $$

Denote

$$\begin{array}{@{}rcl@{}} A &=& \operatorname{Prob}\left[Y \leq \frac{k}{2} - \alpha \sqrt{k} \right],\\ B &=& \operatorname{Prob}\left[\frac{k}{2}- \alpha \sqrt{k} < Y < \frac{k}{2} \right],\\ C &=& \operatorname{Prob}\left[\frac{k}{2} < Y < \frac{k}{2} + \alpha \sqrt{k}\right],\\ D &=&\operatorname{Prob}\left[Y \geq \frac{k}{2} + \alpha \sqrt{k} \right]. \end{array} $$

Clearly,

$$A+B+C+D = 1$$

and the inequality above can be rewritten as

$$B+C < \frac{1}{100}. $$

Let us denote X = B + C. Then A + D = 1 − X and \(1-X \geq \frac {99}{100}\).

To prove the lemma we need to show that \(C+D \geq \frac 12 + {\Omega }\left (\varepsilon \sqrt {k}\right )\).

Consider \(l = \frac k2 + r\) for some r > 0. Note that \(\operatorname{Prob}\left[Y = l\right] \geq \operatorname{Prob}\left[Y = k-l\right]\), so

$$C \geq B. $$

Consider any \(r \geq \alpha \sqrt {k}\) and denote \(l = \frac k2 + r\). Then

$$\operatorname{Prob}\left[Y = l\right] = \frac{\binom{k}{l}\binom{m-k}{t-l}}{\binom{m}{t}} . $$

On the other hand

$$\operatorname{Prob}\left[Y = k-l\right] = \frac{\binom{k}{k-l}\binom{m-k}{t-(k-l)}}{\binom{m}{t}} . $$

Then we have

$$\frac{\operatorname{Prob}\left[Y = l\right]}{\operatorname{Prob}\left[Y = k-l\right]} = \frac{\binom{m-k}{t-l}}{\binom{m-k}{t-k+l}}= \frac{\binom{m-k}{\frac m2 +\varepsilon m -\frac k2 - r}}{\binom{m-k}{\frac m2 +\varepsilon m -\frac k2 + r}} . $$

Denote N = mk. Then

$$\begin{array}{@{}rcl@{}} \frac{\operatorname{Prob}\left[Y = l\right]}{\operatorname{Prob}\left[Y = k-l\right]} &=& \frac{\binom{N}{\frac N2 +\varepsilon m - r}}{\binom{N}{\frac N2 +\varepsilon m + r}}= \frac{(\frac N2 +\varepsilon m + r)!(\frac N2 -\varepsilon m - r)!}{(\frac N2 +\varepsilon m - r)!(\frac N2 -\varepsilon m + r)!}\\ &=&\frac{(\frac N2 +\varepsilon m - r + 1)(\frac N2 +\varepsilon m - r + 2)\ldots(\frac N2 + \varepsilon m + r)}{(\frac N2 -\varepsilon m - r + 1)(\frac N2 -\varepsilon m - r + 2)\ldots(\frac N2 - \varepsilon m + r)}\\ &\geq&\left( \frac{\frac N2 +\varepsilon m + r}{\frac N2 -\varepsilon m + r}\right)^{2r}= \left( 1 + \frac{2\varepsilon m}{\frac N2 -\varepsilon m + r}\right)^{2r} \\ &\geq&1 + \frac{4 r\varepsilon m}{\frac N2 -\varepsilon m + r} \geq 1 + c\varepsilon r \geq 1 + c^{\prime}\varepsilon \sqrt{k}, \end{array} $$

for some positive constants c and c′.

Since this inequality holds for all \(r \geq \alpha \sqrt {k}\), we have

$$D \geq A \cdot (1 + c^{\prime} \varepsilon \sqrt{k}) . $$

Thus we have

$$1- X = A+D \leq \frac{D}{1+c^{\prime} \varepsilon \sqrt{k}} + D. $$

We can rewrite

$$D \geq \frac{1+c^{\prime} \varepsilon \sqrt{k}}{2+c^{\prime} \varepsilon \sqrt{k}} (1-X) = \frac 12 (1-X) + \frac{c^{\prime} \varepsilon \sqrt{k}}{4 + 2c^{\prime} \varepsilon \sqrt{k}} (1-X). $$

Since \(\varepsilon \sqrt {k}\) is positive and upper-bounded by 1, we have

$$\frac{c^{\prime} \varepsilon \sqrt{k}}{4 + 2c^{\prime} \varepsilon \sqrt{k}} = {\Theta}(\varepsilon \sqrt{k}). $$

Since C ≥ B, we have \(C \geq \frac X2\), and it follows that

$$C+D \geq \frac{X}{2} + \frac{1}{2}(1-X) + {\Omega}(\varepsilon \sqrt{k}) (1-X) = \frac{1}{2} + {\Omega}(\varepsilon \sqrt{k}) (1-X) \geq \frac{1}{2} + {\Omega}\left( \varepsilon \sqrt{k}\right), $$

where in the last inequality we use that \(1-X \geq \frac {99}{100}\). □
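
Finally, a numerical sketch consistent with Lemma 10 (an illustration only, with arbitrary parameters): when a (1/2 + ε)-fraction of the m elements are ones and ε√k is held fixed, the probability that the overlap of a random k-subset with the ones exceeds k/2 stays at least 1/2 plus a quantity proportional to ε√k.

```python
# Sanity check for Lemma 10 (illustration only): when a fraction 1/2 + eps of
# the m elements are "ones", the overlap Y of a random k-subset with the ones
# exceeds k/2 with probability at least 1/2 + Omega(eps * sqrt(k)).  Here
# eps * sqrt(k) is held fixed, so the last printed ratio should stay bounded
# away from 0.
from fractions import Fraction
from math import comb, sqrt

def prob_overlap_above_half(m, k, t):
    """Exact Prob[Y > k/2] for the hypergeometric overlap Y of a k-subset and a t-subset."""
    num = sum(comb(t, j) * comb(m - t, k - j) for j in range(k // 2 + 1, k + 1))
    return Fraction(num, comb(m, k))

for m in (2000, 8000):
    k = m // 8
    eps = 0.5 / sqrt(k)
    t = round((0.5 + eps) * m)
    advantage = float(prob_overlap_above_half(m, k, t)) - 0.5
    print(k, advantage, advantage / (eps * sqrt(k)))
```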


Cite this article

Kulikov, A.S., Podolskii, V.V. Computing Majority by Constant Depth Majority Circuits with Low Fan-in Gates. Theory Comput Syst 63, 956–986 (2019). https://doi.org/10.1007/s00224-018-9900-3
