MOARLE: Matrix Operation Accelerator Based on Run-Length Encoding

  • Masafumi Oyamada
  • Jianquan Liu
  • Kazuyo Narita
  • Takuya Araki
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8709)


Matrix computation is a key technology in data processing tasks such as data mining, machine learning, and information retrieval. Matrices have been growing in size with the expansion of computational resources and the spread of big data, and huge matrices consume large amounts of memory and computation time. Reducing both is therefore a key challenge in the data processing area. We develop MOARLE, a novel matrix computation framework that saves memory space and computation time. In contrast to conventional matrix computation methods that target sparse matrices, MOARLE can efficiently handle both sparse and dense matrices. Our experimental results show that MOARLE reduces memory usage to 2% of the original and improves computational performance by a factor of 124x.
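The framework's core idea is run-length encoding (RLE): consecutive repeated values in a matrix row collapse into (value, run-length) pairs, and operations can consume runs directly instead of individual elements. The sketch below illustrates this in Python; the function names and data layout are illustrative assumptions, not MOARLE's actual API.

```python
# Minimal sketch of RLE-based row compression and an operation performed
# directly on the compressed form (illustrative; not MOARLE's actual API).

def rle_encode(row):
    """Compress a sequence of values into (value, run_length) pairs."""
    runs = []
    for v in row:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([v, 1])       # start a new run
    return [(v, n) for v, n in runs]

def rle_dot(runs_a, runs_b):
    """Dot product of two equal-length RLE-encoded rows, consuming
    overlapping runs without decompressing either operand."""
    total = 0.0
    ia = ib = 0
    va, la = runs_a[ia]
    vb, lb = runs_b[ib]
    while True:
        step = min(la, lb)
        total += va * vb * step       # one multiply covers `step` elements
        la -= step
        lb -= step
        if la == 0:
            ia += 1
            if ia == len(runs_a):
                break
            va, la = runs_a[ia]
        if lb == 0:
            ib += 1
            if ib == len(runs_b):
                break
            vb, lb = runs_b[ib]
    return total

a = [0, 0, 0, 2, 2, 2, 2, 1]
b = [3, 3, 3, 3, 0, 0, 1, 1]
print(rle_encode(a))                  # [(0, 3), (2, 4), (1, 1)]
print(rle_dot(rle_encode(a), rle_encode(b)))  # 9.0, same as the dense dot product
```

Long runs (common in sparse matrices, but also in dense matrices with repeated values) are processed in a single step, which is one plausible source of the memory and speed gains the abstract reports.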


Keywords: Matrix compression, Run-length encoding, Similarity search, Euclidean distance





Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Masafumi Oyamada (1)
  • Jianquan Liu (1)
  • Kazuyo Narita (1)
  • Takuya Araki (1)
  1. Green Platform Research Labs., NEC Corp., Kawasaki, Japan
