Abstract
Several problems in operations research, such as the assembly line crew scheduling problem and the k-partitioning problem, can be cast as the problem of finding the intra-column rearrangement (permutation) of a matrix such that the row sums show minimum variability. A necessary condition for optimality of the rearranged matrix is that, for every block containing one or more columns, its row sums are oppositely ordered to the row sums of the remaining columns. We propose the block rearrangement algorithm with variance equalization (BRAVE) as a suitable method to achieve this situation. It uses a carefully motivated heuristic—based on an idea of variance equalization—to find optimal blocks of columns and rearranges them. When applied to the number partitioning problem, we show that BRAVE outperforms the well-known greedy algorithm and the Karmarkar–Karp differencing algorithm.
Notes
Rearranging refers to the act of permuting (swapping) elements.
In Bernard et al. (2017) it is shown that for certain types of matrices this approach yields, after d − 1 steps (rearrangements), the situation in which every column is oppositely ordered to the sum of all others. This makes it possible to obtain fast approximations for the minimum variance of a sum of (scaled) Bernoulli distributions.
In Graham (1966), the so-called parallel machine scheduling problem is solved by iteratively assigning subsequent jobs to the machine whose current completion time is minimum.
The use of block rearrangements is consistent with the notion of \(\varSigma \)-countermonotonic matrices, as developed in Section 3.4 of Puccetti and Wang (2015); see in particular Definition 3.8 and Theorem 3.8 therein.
See also Section 2.3 in Bernard et al. (2015).
Vectors \(W,Z\in \mathbb {R}^n\) are said to be oppositely ordered if \( (w_j-w_i)(z_j-z_i)\le 0\) for all \(1\le i < j\le n\).
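This pairwise condition can be checked directly from the definition; the following is a minimal sketch (the function name `oppositely_ordered` is ours, chosen for illustration):

```python
from itertools import combinations

def oppositely_ordered(w, z):
    """Check whether vectors w and z are oppositely ordered:
    (w_j - w_i) * (z_j - z_i) <= 0 for all pairs i < j."""
    return all((w[j] - w[i]) * (z[j] - z[i]) <= 0
               for i, j in combinations(range(len(w)), 2))

# A vector sorted ascending is oppositely ordered to one sorted descending.
print(oppositely_ordered([1, 2, 3], [5, 4, 0]))  # True
print(oppositely_ordered([1, 2, 3], [5, 0, 4]))  # False (pair j=3, i=2 fails)
```

The \(O(n^2)\) pairwise check is used here for transparency; in practice one can equivalently verify that sorting one vector ascending sorts the other descending.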
It is a direct consequence of the fact that unless W and Z have the same distribution, \(W\preccurlyeq _{\mathrm {cx}}Z\) implies \({E}[f(W)]<{E}[f(Z)]\) for any strictly convex function f; See Theorem 3.A43 of Shaked and Shanthikumar (2007).
Note that this shuffling step already leads to a substantial reduction in the variability among the row sums, compared to a sorted (co-monotonic) arrangement.
Sorting of the columns is done to render our paper more transparent with respect to reproducibility of the numerical results and also to enable a more fair comparison between block rearrangement algorithms and the RA of Puccetti and Rüschendorf (2012) in the case of heterogeneous columns. In fact, the RA goes through the columns sequentially (starting with the first column and so on). If the first few columns show little variability, rearranging them would not have much effect on the variability of row sums. For the block rearrangement algorithms, this issue does not exist, as they do not apply rearrangements sequentially starting with the first column.
To verify this condition, one needs to consider \((2^{d}-2)/2=2^{d-1}-1\) different partitions of the matrix into two blocks. The division by two is because, for each partition \(\delta \), the corresponding \(1-\delta \) does not need to be considered. Furthermore, the cases in which all elements of \(\delta \) are zero (resp. one) do not require consideration.
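The count \(2^{d-1}-1\) can be verified by direct enumeration; a minimal sketch (the function name `two_block_partitions` is ours):

```python
from itertools import product

def two_block_partitions(d):
    """Enumerate the 2**(d-1) - 1 non-trivial partitions of d columns
    into two blocks, encoded as binary vectors delta. Fixing column 0 in
    the first block avoids counting both delta and 1 - delta; requiring
    at least one zero excludes the trivial all-in-one-block case."""
    for rest in product([0, 1], repeat=d - 1):
        delta = (1,) + rest
        if any(b == 0 for b in delta):
            yield delta

d = 4
parts = list(two_block_partitions(d))
print(len(parts))  # 2**(d-1) - 1 = 7
```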
Recall that \({E}[S]=\frac{1}{n}\sum _{j=1}^{n}s_{j}\) is invariant under intra-column rearrangements.
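This invariance follows because intra-column rearrangements leave each column's multiset of entries, and hence the grand total, unchanged; a small sketch demonstrating it (the matrix and helper here are illustrative, not from the paper):

```python
import random

random.seed(0)
# Columns stored as lists; 4 columns, n = 5 rows.
X = [[random.randint(1, 9) for _ in range(5)] for _ in range(4)]

def mean_row_sum(cols):
    """Mean of the row sums, i.e. the grand total divided by n."""
    n = len(cols[0])
    return sum(sum(c) for c in cols) / n

before = mean_row_sum(X)
for c in X:
    random.shuffle(c)  # an arbitrary intra-column rearrangement
after = mean_row_sum(X)
print(before == after)  # True: the mean row sum is invariant
```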
The greedy algorithm has a running time of \(O(d\log d)\) and is flexible enough to deal with real numbers. Most partitioning algorithms are designed to work with integers or positive real numbers. See Korf (1998) for a review of alternative solutions.
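The greedy heuristic for k-partitioning can be sketched as follows: sort the numbers in decreasing order and always assign the next one to the subset with the currently smallest sum. This is a standard formulation (the function name `greedy_partition` is ours); a heap gives the stated \(O(d\log d)\) running time:

```python
import heapq

def greedy_partition(numbers, k):
    """Greedy heuristic for k-partitioning: process the numbers in
    decreasing order, always adding the next one to the subset with the
    currently smallest sum. The heap holds (subset sum, subset index)."""
    heap = [(0, i) for i in range(k)]
    subsets = [[] for _ in range(k)]
    for x in sorted(numbers, reverse=True):
        s, i = heapq.heappop(heap)
        subsets[i].append(x)
        heapq.heappush(heap, (s + x, i))
    return subsets

print(greedy_partition([8, 7, 6, 5, 4], 2))  # e.g. [[8, 5, 4], [7, 6]]
```

Note that the heuristic is not exact: on the instance above it returns subset sums 17 and 13, whereas the optimal split (8 + 7 versus 6 + 5 + 4) achieves 15 and 15.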
In fact, from Proposition A.1 in Bernard et al. (2017) it follows that the greedy algorithm yields an arrangement in which each column is oppositely ordered to the sum of the other columns and is thus—in the particular context of the partition problem—as good as the RA.
The results for the BRAVE algorithms are based on the greedy algorithm for finding the blocks. Using the KK algorithm, we obtained slightly better results, but the computation times were longer. We also tested the Block RA1 algorithm of Bernard and McLeish (2016) with block selection as in (2). These simulations showed that block selection based on maximal Spearman correlation of the row sums leads, in a few iterations, to a rearranged matrix with maximal Spearman correlation close to −1, but the improvement in the objective function V is relatively low compared to the RA, BRAVE and BRAVE \(+\) RA algorithms. Moreover, the Spearman correlation approach requires sorting the partial row sums for all considered partitions at every iteration, and is time-consuming. For the sake of brevity, we do not report these results.
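For reference, the two-way Karmarkar–Karp (KK) differencing heuristic mentioned above repeatedly replaces the two largest remaining numbers by their difference, which amounts to committing them to opposite subsets; a minimal sketch (the function name `kk_difference` is ours):

```python
import heapq

def kk_difference(numbers):
    """Karmarkar-Karp differencing heuristic for two-way partitioning:
    repeatedly replace the two largest numbers by their difference.
    The last remaining value is the achieved partition difference."""
    heap = [-x for x in numbers]  # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)
        b = -heapq.heappop(heap)
        heapq.heappush(heap, -(a - b))
    return -heap[0]

print(kk_difference([8, 7, 6, 5, 4]))  # 2, although the optimum is 0
```

Like the greedy heuristic, KK is not exact: on the instance above it achieves a difference of 2, while the split 8 + 7 versus 6 + 5 + 4 achieves 0.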
This is to be expected. The columns in this experiment are similar, and on average BRA chooses blocks of equal size, so that the variances of their row sums are typically close to each other.
To save space we do not report the running times for the Pareto and the Bernoulli–Beta set-ups, since the results are qualitatively similar. This is expected, since the nature of the randomized matrix entries has little impact on the computer time spent on the sorting and summation operations performed in the rearrangement algorithms.
References
Alvim AC, Ribeiro CC (2004) A hybrid bin-packing heuristic to multiprocessor scheduling. In: International workshop on experimental and efficient algorithms. Springer, Berlin, pp 1–13
Bernard C, McLeish D (2016) Algorithms for finding copulas minimizing convex functions of sums. Asia Pac J Oper Res 33(5):1650040. doi:10.1142/S0217595916500408
Bernard C, Rüschendorf L, Vanduffel S (2015) Value-at-risk bounds with variance constraints. J Risk Insur. doi:10.1111/jori.12108
Bernard C, Rüschendorf L, Vanduffel S, Yao J (2017) How robust is the value-at-risk of credit risk portfolios? Eur J Financ 23(6):507–534
Boland PJ, Proschan F (1988) Multivariate arrangement increasing functions with applications in probability and statistics. J Multivar Anal 25(2):286–298
Chopra S, Rao MR (1993) The partition problem. Math Program 59(1–3):87–115
Coffman E, Yannakakis M (1984) Permuting elements within columns of a matrix in order to minimize maximum row sum. Math Oper Res 9(3):384–390
Day PW (1972) Rearrangement inequalities. Can J Math 24(5):930–943
Dell’Amico M, Martello S (1995) Optimal scheduling of tasks on identical parallel processors. ORSA J Comput 7(2):191–200
Dell’Amico M, Martello S (2005) A note on exact algorithms for the identical parallel machine scheduling problem. Eur J Oper Res 160(2):576–578
Dell’Amico M, Iori M, Martello S, Monaci M (2008) Heuristic and exact algorithms for the identical parallel machine scheduling problem. INFORMS J Comput 20(3):333–344
Embrechts P, Puccetti G, Rüschendorf L (2013) Model uncertainty and VaR aggregation. J Bank Financ 37(8):2750–2764
Frangioni A, Necciari E, Scutella MG (2004) A multi-exchange neighborhood for minimum makespan parallel machine scheduling problems. J Comb Optim 8(2):195–220
Gent IP, Walsh T (1998) Analysis of heuristics for number partitioning. Comput Intell 14(3):430–451
Graham RL (1966) Bounds for certain multiprocessing anomalies. Bell Syst Tech J 45(9):1563–1581
Hayes B (2002) Computing science: the easiest hard problem. Am Sci 90(2):113–117
Hsu W-L (1984) Approximation algorithms for the assembly line crew scheduling problem. Math Oper Res 9(3):376–383
Jakobsons E, Wang R (2016) Negative dependence in matrix arrangement problems. http://ssrn.com/abstract=2756934
Karmarkar N, Karp RM (1982) The differencing method of set partitioning. Technical Report UCB/CSD 82/113, Computer Science Division, University of California, Berkeley
Korf RE (1998) A complete anytime algorithm for number partitioning. Artif Intell 106(2):181–203
Marshall AW, Olkin I, Arnold BC (2011) Inequalities: theory of majorization and its applications, 2nd edn. Springer, New York
Mertens S (1998) Phase transition in the number partitioning problem. Phys Rev Lett 81(20):4281
Mokotoff E (2004) An exact algorithm for the identical parallel machine scheduling problem. Eur J Oper Res 152(3):758–769
Puccetti G, Rüschendorf L (2012) Computation of sharp bounds on the distribution of a function of dependent risks. J Comput Appl Math 236(7):1833–1840
Puccetti G, Wang R (2015) Extremal dependence concepts. Stat Sci 30(4):485–517
Rüschendorf L (2013) Mathematical risk analysis. Springer Series in Operations Research and Financial Engineering. Springer, Heidelberg
Shaked M, Shanthikumar JG (2007) Stochastic orders. Springer, Berlin
Wang B, Wang R (2011) The complete mixability and convex minimization problems with monotone marginal densities. J Multivar Anal 102(10):1344–1360
Wang B, Wang R (2016) Joint mixability. Math Oper Res 41(3):808–826
Ethics declarations
Conflict of interest
The authors declare that there are no conflicts of interest.
Boudt, K., Jakobsons, E. & Vanduffel, S. Block rearranging elements within matrix columns to minimize the variability of the row sums. 4OR-Q J Oper Res 16, 31–50 (2018). https://doi.org/10.1007/s10288-017-0344-4
Keywords
- Assembly line crew scheduling
- Greedy algorithm
- Rearrangements
- k-Partitioning
- Karmarkar–Karp differencing algorithm