
Performance consequence of user space file systems due to extensive CPU sharing in virtual environment

Cluster Computing

Abstract

FUSE-based user space file systems have recently gained importance because of their ease of implementation. The trade-off between their lower performance and their implementation benefits, compared with traditional in-kernel file systems, has long been debated among researchers. Because FUSE requires additional context switches to perform file-related operations, it noticeably increases CPU utilization. In the era of cloud computing, where the focus is shifting toward running applications in virtual machines, increased CPU utilization during file operations can have a considerable impact on performance, since the hypervisor's resources are shared among virtual machines. This performance degradation becomes a major concern especially when resources are over-committed. Research on improving the performance of user space file systems is ongoing, and this work contributes a systematic study for evaluating their performance. For an in-depth examination, we analyzed the performance of FUSE-based file systems running in a guest virtual machine and highlighted scenarios in which file system performance can suffer severely. We also show that careful selection of parameters such as block size and the type of read/write operation can improve the performance of these file systems significantly, by 40% to 45%, even in heavily loaded environments. This study should encourage users and developers to enhance the performance of FUSE-based file systems, which are now commonly used in clusters of virtual machines.
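The abstract's point about block-size selection can be explored with a simple measurement. The following Python sketch is not part of the paper; the mount path, file size, and block sizes are illustrative assumptions. It times sequential writes at several block sizes on a given mount point; running it once against a FUSE mount and once against an in-kernel file system gives a rough picture of how larger transfer sizes amortize FUSE's extra context switches.

```python
# Minimal sketch (not from the paper): measure sequential write throughput
# at several block sizes on an assumed mount point.
import os
import time

MOUNT_POINT = "/mnt/fuse-test"      # assumed FUSE mount; adjust as needed
FILE_SIZE = 256 * 1024 * 1024       # 256 MiB test file (illustrative)
BLOCK_SIZES = [4 * 1024, 64 * 1024, 128 * 1024, 1024 * 1024]

def sequential_write(path, total_bytes, block_size):
    """Write total_bytes to path in block_size chunks and return MiB/s."""
    buf = b"\0" * block_size
    start = time.perf_counter()
    with open(path, "wb") as f:
        written = 0
        while written < total_bytes:
            f.write(buf)
            written += block_size
        f.flush()
        os.fsync(f.fileno())        # include the cost of flushing to storage
    elapsed = time.perf_counter() - start
    return (total_bytes / (1024 * 1024)) / elapsed

if __name__ == "__main__":
    for bs in BLOCK_SIZES:
        target = os.path.join(MOUNT_POINT, f"bench_{bs}.bin")
        throughput = sequential_write(target, FILE_SIZE, bs)
        os.remove(target)
        print(f"block size {bs // 1024:5d} KiB: {throughput:8.1f} MiB/s")
```

Repeating the same runs while the host CPUs are loaded by other virtual machines would approximate the over-committed scenarios the paper examines; a full evaluation would use a standard benchmark rather than this sketch.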





Author information

Corresponding author

Correspondence to Gopi Bhatt.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Bhatt, G., Bhavsar, M. Performance consequence of user space file systems due to extensive CPU sharing in virtual environment. Cluster Comput 23, 3119–3137 (2020). https://doi.org/10.1007/s10586-020-03074-6

