Abstract
Recently, FUSE-based user-space file systems have gained importance due to their ease of implementation. The trade-off between their lower performance and their implementation benefits, compared to traditional in-kernel file systems, has long been debated among researchers. Because FUSE requires additional context switches to perform file-related operations, it noticeably increases CPU utilization. In the era of cloud computing, where the focus is shifting to running applications in virtual machines, increased CPU utilization during file operations can have a considerable impact on performance, since the hypervisor's resources are shared among virtual machines. This degradation can become a major concern, especially when resources are over-committed. Research on improving the performance of user-space file systems is ongoing, and this work contributes a systematic study for evaluating that performance. For an in-depth examination, we analyzed the performance of FUSE-based file systems running in a guest virtual machine and highlighted scenarios in which file-system performance can degrade severely. We also show that careful selection of parameters such as block size and the type of read/write operation can improve the performance of these file systems significantly, by 40% to 45%, even in heavily loaded environments. This study should encourage users and developers to enhance the performance of FUSE-based file systems, which are now commonly used in clusters of virtual machines.
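The block-size effect mentioned in the abstract can be illustrated with a minimal micro-benchmark sketch (this is not the paper's methodology; the sizes and the use of a temporary file are arbitrary choices for illustration). On a FUSE mount, each `write()` call is forwarded to the user-space daemon, so fewer, larger requests amortize the per-request context-switch cost; running a script like this against a FUSE mount point versus an in-kernel file system would expose that overhead:

```python
import os
import tempfile
import time

def write_throughput(block_size, total_bytes=16 * 1024 * 1024):
    """Write total_bytes to a temp file in chunks of block_size; return MB/s."""
    buf = b"\0" * block_size
    with tempfile.NamedTemporaryFile(delete=False) as f:
        path = f.name
        start = time.perf_counter()
        written = 0
        while written < total_bytes:
            f.write(buf)            # on FUSE, each write crosses into user space
            written += block_size
        f.flush()
        os.fsync(f.fileno())        # include the flush-to-device cost
        elapsed = time.perf_counter() - start
    os.unlink(path)
    return (written / (1024 * 1024)) / elapsed

# Compare small vs. large request sizes; on a FUSE mount the gap is
# typically much wider than on an in-kernel file system.
for bs in (4096, 65536, 1048576):
    print(f"{bs:>8} B blocks: {write_throughput(bs):8.1f} MB/s")
```

Benchmarks such as IOZONE perform this kind of sweep systematically across block sizes and operation types (sequential/random read/write).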
Bhatt, G., Bhavsar, M. Performance consequence of user space file systems due to extensive CPU sharing in virtual environment. Cluster Comput 23, 3119–3137 (2020). https://doi.org/10.1007/s10586-020-03074-6