Policy-Aware Language Service Composition
Many language resources are shared as web services that process data on the Internet. As dataset sizes keep growing, language services face big-data problems, such as the storage and processing overheads caused by huge amounts of multilingual text. Parallel execution and cloud technologies are key to making service invocation practical. In the Service-Oriented Architecture approach, service providers typically employ policies that limit parallel execution of their services based on their own decisions. To attain optimal performance, users need to adapt to these service policies. A composite service combines several atomic services offered by different providers. To exploit parallel execution for greater composite-service efficiency, the degree of parallelism (DOP) of the composite service needs to be optimized by considering the policies of all atomic services. We propose a model that embeds service policies into formulae and permits composite-service performance to be calculated. From the calculation results, we can predict the optimal DOP that allows the composite service to attain its best performance. Extensive experiments are conducted on real-world translation services. The results show that the proposed model has good prediction accuracy in identifying optimal DOPs for composite services.
Keywords: Parallel execution policy · Performance prediction · Degree of parallelism
This research was partly supported by a Grant-in-Aid for Scientific Research (S) (24220002, 2012-2016) and a Grant-in-Aid for Young Scientists (A) (17H04706, 2017-2020) from Japan Society for the Promotion of Science (JSPS).
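The core idea of the abstract can be illustrated with a minimal sketch: each atomic service in a pipeline enforces its own parallel-execution policy (a cap on concurrent requests), so the usable parallelism at each stage is the smaller of the chosen DOP and that stage's cap, and the bottleneck stage dominates completion time. The service latencies, policy caps, and two-stage pipeline below are illustrative assumptions, not the paper's actual model or data.

```python
# Hypothetical sketch: predicting composite-service completion time as a
# function of the degree of parallelism (DOP), where each atomic service's
# provider policy caps the parallelism usable at that stage.

def predict_time(num_requests, dop, services):
    """Estimate completion time for a pipelined composite service.

    services: list of (per_request_latency_sec, policy_cap) tuples.
    With pipelined stages overlapping, the slowest stage dominates.
    """
    stage_times = []
    for latency, cap in services:
        effective = min(dop, cap)  # policy limits usable parallelism
        stage_times.append(num_requests * latency / effective)
    return max(stage_times)


def optimal_dop(num_requests, services, max_dop=64):
    """Pick the smallest DOP minimizing the predicted completion time."""
    return min(range(1, max_dop + 1),
               key=lambda d: predict_time(num_requests, d, services))


# Illustrative two-stage composite service:
# tokenizer (fast, policy cap 16) -> translator (slow, policy cap 8)
pipeline = [(0.05, 16), (0.40, 8)]
d = optimal_dop(1000, pipeline)
print(d, predict_time(1000, d, pipeline))  # bottleneck is the translator
```

Raising the DOP beyond the bottleneck stage's policy cap (here, 8) yields no further speedup in this sketch, which mirrors why the composite DOP must account for every atomic service's policy.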