Path-Based Knowledge Graph Completion Combining Reinforcement Learning with Soft Rules

  • Wenting Yu
  • Xiangnan Ma
  • Luyi Bai
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1075)


Knowledge graphs are useful resources for numerous applications, but they are often highly incomplete. A popular approach to knowledge graph completion is path-based reasoning, which infers new relations between a pair of entities from the existing paths that connect them. However, this traditional approach does not consider the reliability of the paths it follows. In this paper, we propose a model that combines a reinforcement learning (RL) framework with soft rules to learn reasoning paths. In our model, we adapt the partially observable Markov decision process and extract soft rules with different confidence levels from the dataset. In contrast to prior work, we modify the reward function so that the RL agent is inclined to choose paths that conform to the soft rules, and we assign a probability to each reasoning path. Meanwhile, we impose restrictions so that only soft rules with high confidence levels are extracted. We analyze the complexity of our algorithm and use an instance to evaluate the correctness of our model.
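The reward-shaping idea in the abstract — biasing the RL agent toward paths that conform to mined soft rules — can be illustrated with a minimal sketch. This is not the authors' code: the rule table, confidence values, bonus weight, and length penalty below are all illustrative assumptions.

```python
# Hypothetical sketch of soft-rule-aware reward shaping for path-based
# KG reasoning. A soft rule maps a relation path (tuple of relations) to
# the relation it implies, together with a mined confidence in (0, 1].
SOFT_RULES = {
    ("bornIn", "cityOf"): ("nationality", 0.9),       # assumed confidence
    ("spouseOf", "nationality"): ("nationality", 0.7),  # assumed confidence
}

def path_reward(path, target_relation, reached_target, bonus_weight=0.5):
    """Base reward for reaching the target entity, plus a confidence-weighted
    bonus when the followed relation path matches a soft rule implying the
    query relation. A small length penalty prefers shorter paths."""
    base = 1.0 if reached_target else 0.0
    length_penalty = 0.05 * len(path)
    bonus = 0.0
    rule = SOFT_RULES.get(tuple(path))
    if rule is not None and rule[0] == target_relation:
        bonus = bonus_weight * rule[1]  # higher-confidence rules pay more
    return base + bonus - length_penalty
```

Under these assumptions, a successful two-hop path that matches a rule for the query relation earns the base reward plus a bonus proportional to the rule's confidence, so the agent learns to prefer rule-conforming paths.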


Keywords: Knowledge graph completion · Reinforcement learning · Soft rules



This work was supported by the National Natural Science Foundation of China (61402087), the Natural Science Foundation of Hebei Province (F2019501030), the Natural Science Foundation of Liaoning Province (2019-MS-130), and the Fundamental Research Funds for the Central Universities (N172304026).



Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. School of Computer and Communication Engineering, Northeastern University (Qinhuangdao), Qinhuangdao, China
