User to User QoE Routing System

  • Hai Anh Tran
  • Abdelhamid Mellouk
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6649)


Rich network services such as Internet Protocol television (IPTV) and Voice over IP (VoIP) are expected to become increasingly pervasive over the Next Generation Network (NGN). To serve this purpose, the quality of these services should be evaluated subjectively by users; this is referred to as Quality of Experience (QoE). A major trend in current network services is maintaining the best QoE through network functions such as admission control, resource management, routing, and traffic control. Among these, we focus here on the routing mechanism. In this paper we propose a protocol that integrates QoE measurement into the routing paradigm to construct an adaptive and evolutionary system. Our approach is based on the Reinforcement Learning concept; more concretely, we use a least-squares reinforcement learning technique called Least Squares Policy Iteration (LSPI). Experimental results show a significant performance gain over traditional routing protocols.
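The Least Squares Policy Iteration technique named in the abstract alternates a least-squares fit of the action-value function (LSTDQ) with greedy policy improvement. The sketch below is our own minimal illustration of generic LSPI, not the authors' routing protocol; the feature map, the toy one-hop routing decision, and the MOS-style reward numbers are all assumptions made for the example.

```python
import numpy as np

def lstdq(samples, phi, policy, gamma=0.9):
    """One LSTDQ solve: fit Q(s, a) ~ w . phi(s, a) under the given policy.

    samples: list of (s, a, r, s_next) transitions
    phi:     feature map phi(s, a) -> np.ndarray of dimension k
    policy:  current policy, s -> a
    """
    k = len(phi(*samples[0][:2]))
    A = np.zeros((k, k))
    b = np.zeros(k)
    for s, a, r, s_next in samples:
        f = phi(s, a)
        f_next = phi(s_next, policy(s_next))
        A += np.outer(f, f - gamma * f_next)   # A = sum phi (phi - g phi')^T
        b += f * r                             # b = sum phi r
    # Small ridge term keeps A invertible on sparse sample sets.
    return np.linalg.solve(A + 1e-6 * np.eye(k), b)

def lspi(samples, phi, actions, gamma=0.9, n_iter=20):
    """Iterate LSTDQ + greedy improvement until the weights settle."""
    k = len(phi(*samples[0][:2]))
    w = np.zeros(k)
    greedy = lambda s: max(actions, key=lambda a: phi(s, a) @ w)
    for _ in range(n_iter):
        w_new = lstdq(samples, phi, greedy, gamma)
        if np.linalg.norm(w_new - w) < 1e-6:
            w = w_new
            break
        w = w_new
    return w, greedy

# Toy use: one decision state (0) with two next-hop choices; the rewards
# play the role of MOS-style QoE feedback (illustrative numbers only).
def phi(s, a):
    v = np.zeros(2)
    if s == 0:          # state 1 is terminal: zero features
        v[a] = 1.0
    return v

samples = [(0, 0, 3.1, 1), (0, 1, 4.2, 1)] * 3
w, greedy = lspi(samples, phi, actions=[0, 1])
```

With these deterministic rewards, the learned weights recover the per-action QoE scores and the greedy policy routes via the hop with the higher score (action 1).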


Keywords: Quality of Service (QoS), Quality of Experience (QoE), Network Services, Inductive Routing System




References

  1. Wang, Z., Crowcroft, J.: Quality of service routing for supporting multimedia applications. IEEE Journal on Selected Areas in Communications 14(7), 1228–1234 (1996)
  2. Lagoudakis, M.G., Parr, R.: Least-squares policy iteration. Journal of Machine Learning Research 4, 1107–1149 (2003)
  3. Casetti, C., Favalessa, G., Mellia, M., Munafò, M.: An adaptive routing algorithm for best-effort traffic in integrated-services networks. Teletraffic Science and Engineering, 1281–1290 (1999)
  4. Boyan, J.A., Littman, M.L.: Packet routing in dynamically changing networks: A reinforcement learning approach. Advances in Neural Information Processing Systems 6, 671–678 (1994)
  5. Di Caro, G., Dorigo, M.: AntNet: A mobile agents approach to adaptive routing. In: The Hawaii International Conference on System Sciences, vol. 31, pp. 74–85 (1998)
  6. Peshkin, L., Savova, V.: Reinforcement learning for adaptive routing. In: International Joint Conference on Neural Networks (2002)
  7. Mellouk, A., Hoceini, S., Zeadally, S.: Design and performance analysis of an inductive QoS routing algorithm. Computer Communications 32, 1371–1376 (2009)
  8. De Vleeschauwer, B., De Turck, F., Dhoedt, B., Demeester, P., Wijnants, M., Lamotte, W.: End-to-end QoE optimization through overlay network deployment. In: International Conference on Information Networking (2008)
  9. Majd, G., Cesar, V., Adlen, K.: An adaptive mechanism for multipath video streaming over video distribution network (VDN). In: First International Conference on Advances in Multimedia (2009)
  10. Wang, P., Wang, T.: Adaptive routing for sensor networks using reinforcement learning. In: IEEE International Conference on Computer and Information Technology (2006)
  11. Kaelbling, L.P., Littman, M.L., Moore, A.W.: Reinforcement learning: A survey. Journal of Artificial Intelligence Research 4, 237–285 (1996)
  12. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. MIT Press, Cambridge (1998)
  13. ITU-T Recommendation P.800.1: Mean Opinion Score (MOS) terminology. International Telecommunication Union, Geneva (2006)

Copyright information

© IFIP International Federation for Information Processing 2011

Authors and Affiliations

  • Hai Anh Tran (1)
  • Abdelhamid Mellouk (1)

  1. Image, Signal and Intelligent Systems Lab (LiSSi), University of Paris-Est Créteil Val de Marne (UPEC), Vitry-sur-Seine, France
