Upstream, Downstream or Competitor? Detecting Company Relations for Commercial Activities

  • Yi-Pei Chen
  • Ting-Lun Hsu
  • Wen-Kai Chung
  • Shih-Chieh Dai
  • Lun-Wei Ku
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11589)


Due to the intricate business networks in industry and the high cost of supervision, financial institutions usually supervise only the core enterprises in a supply chain rather than all corporations, which weakens their effectiveness and efficiency as capital supervisors and credit-risk transformers. Furthermore, banks require these corporations to report information about themselves, which undermines the objectivity of the source information and increases the banks' supervision cost. We therefore formulate a company relation detection task, aiming to expose more information about companies to investors and banks by learning a system from publicly available datasets. We treat this task as a classification problem; our system predicts the relation between any two companies by learning from both structured and unstructured data. To the best of our knowledge, this is the first application of deep learning techniques to this task. Our system achieves an F1 score of 0.769.
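The abstract frames the task as classifying the relation between an ordered pair of companies from learned representations of structured and unstructured data. The paper's actual architecture is not given here, so the following is only a minimal illustrative sketch: the company names, embedding dimension, random embeddings, and the single softmax layer are all assumptions standing in for the real learned model.

```python
import numpy as np

# The three relation classes named in the paper's title.
RELATIONS = ["upstream", "downstream", "competitor"]

rng = np.random.default_rng(0)

# Hypothetical company embeddings; in the real system these would be
# learned from structured data (e.g., a multi-relational knowledge graph)
# and unstructured text features.
EMB_DIM = 8
company_emb = {
    "CompanyA": rng.normal(size=EMB_DIM),
    "CompanyB": rng.normal(size=EMB_DIM),
}

# A single linear + softmax layer over the concatenated pair embedding,
# standing in for the (unspecified) deep model.
W = rng.normal(size=(len(RELATIONS), 2 * EMB_DIM))
b = np.zeros(len(RELATIONS))

def predict_relation(head: str, tail: str) -> str:
    """Score each relation for the ordered pair (head, tail); return argmax."""
    x = np.concatenate([company_emb[head], company_emb[tail]])
    logits = W @ x + b
    # Numerically stable softmax.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return RELATIONS[int(np.argmax(probs))]

print(predict_relation("CompanyA", "CompanyB"))
```

Note that the pair is ordered: swapping head and tail can flip an upstream prediction to downstream, which is why the classifier consumes a concatenation rather than a symmetric combination of the two embeddings.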


Keywords: Business dashboards · Commercial activity · Company relation detection · Knowledge graph · Multi-relational graph embedding · Deep learning application



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Yi-Pei Chen (1)
  • Ting-Lun Hsu (1) (corresponding author)
  • Wen-Kai Chung (1)
  • Shih-Chieh Dai (1)
  • Lun-Wei Ku (1)

  1. Institute of Information Science, Academia Sinica, Taipei, Taiwan
