DFTracker: detecting double-fetch bugs by multi-taint parallel tracking

Research Article

Abstract

A race condition is a common trigger for concurrency bugs. As a special case, a race condition can also occur between the kernel and user space, causing a double-fetch bug, a class of defects that has received little research attention. In our work, we first analyzed real-world double-fetch bug cases and extracted two specific patterns for double-fetch bugs. Based on these patterns, we proposed an approach of multi-taint parallel tracking to detect double-fetch bugs. We also implemented a prototype called DFTracker (double-fetch bug tracker) and evaluated it with our test suite. Our experiments demonstrated that it could effectively find all the double-fetch bugs in the test suite, including eight real-world cases, with no false negatives and only minor false positives. In addition, we tested it on the Linux kernel and found a new double-fetch bug. The execution overhead is approximately 2x for single-file cases and approximately 9x for the whole-kernel test, which is acceptable. To the best of the authors’ knowledge, this work is the first to introduce multi-taint parallel tracking to double-fetch bug detection—an innovative method that is specific to double-fetch bug features—and it achieves better path coverage as well as lower runtime overhead than the widely used dynamic approaches.

Keywords

multi-taint parallel tracking · double fetch · race condition between kernel and user · time of check to time of use · real-world case analysis · Clang Static Analyzer


Acknowledgements

The authors would like to thank the anonymous reviewers for their helpful feedback. The work was supported by the National Key Research and Development Program of China (2016YFB0200401).

Supplementary material

11704_2016_6383_MOESM1_ESM.ppt (approximately 712 KB)

Copyright information

© Higher Education Press and Springer-Verlag GmbH Germany, part of Springer Nature 2018

Authors and Affiliations

  1. Science and Technology on Parallel and Distributed Processing Laboratory, National University of Defense Technology, Changsha, China
  2. College of Computer, National University of Defense Technology, Changsha, China
  3. Collaborative Innovation Center of High Performance Computing, National University of Defense Technology, Changsha, China
