Security and Trust Verification of IoT SoCs

  • Alif Ahmed
  • Farimah Farahmandi
  • Yousef Iskander
  • Prabhat Mishra
Chapter
Part of the Internet of Things book series (ITTCC)

Abstract

System-on-Chips (SoCs) are widely used in designing Internet-of-Things (IoT) devices. Ensuring the security of IoT devices therefore requires guaranteeing the trustworthiness of the underlying SoCs. Verifying trust in SoCs is a major challenge due to their long and globally distributed supply chain: malicious components can be inserted at different stages of the design cycle. These malicious functionalities act as backdoors that severely compromise the security of the design by handing control of the system to adversaries. This threat creates a critical need for new validation approaches capable of identifying hidden Trojans. Existing validation techniques cannot efficiently activate and detect Trojans, since Trojans are designed to remain inactive most of the time and to be triggered only by very rare events. For example, if an adversary wants to hide a Trojan in a register-transfer level (RTL) design, rare branches are an ideal place to host it. In this chapter, we introduce a Trojan activation technique that combines symbolic simulation with concrete execution to identify Trojans hidden under rare branches and assignments. The technique is scalable because it considers one path at a time instead of the whole design. It uses satisfiability modulo theories (SMT) solvers to solve the path constraints and thereby generate a valid test that explores a new path in the design. The exploration continues until all of the rare branches in the design have been activated in the search for hidden Trojans.
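The core loop described above can be illustrated with a minimal sketch. The `design` function below is a hypothetical stand-in for an RTL model containing a Trojan guarded by a rare branch; a real flow would simulate the Verilog design, record the symbolic path constraints, and hand the negated constraint of an unexplored rare branch to an SMT solver. Here the "solver" step is computed directly for the toy constraint, purely to show the control flow of the technique.

```python
def design(x):
    """Toy RTL stand-in: the Trojan payload hides under a rare branch.

    The trigger condition (x & 0xFF == 0xA5) models a rare event that
    random simulation is unlikely to hit.
    """
    if x & 0xFF == 0xA5:   # rare trigger condition
        return True        # hidden malicious behaviour activated
    return False


def concolic_search(seed, max_iters=10):
    """Sketch of concolic exploration toward a rare branch.

    Run the design concretely; if the rare branch was not taken, negate
    its path constraint and solve for an input that takes it. A real
    tool would pass the recorded constraint to an SMT solver (e.g. Z3);
    here the satisfying assignment for (x & 0xFF == 0xA5) is built
    directly by forcing the low byte of x to 0xA5.
    """
    x = seed
    for _ in range(max_iters):
        if design(x):
            return x                   # Trojan-triggering input found
        x = (x & ~0xFF) | 0xA5         # "solve" the negated constraint
    return None


trigger = concolic_search(seed=0)
print(hex(trigger))  # prints 0xa5, an input activating the rare branch
```

The one-path-at-a-time structure is what makes the approach scalable: each iteration reasons only about the constraints along a single execution path rather than a symbolic model of the entire design.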


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Alif Ahmed (1)
  • Farimah Farahmandi (1)
  • Yousef Iskander (2)
  • Prabhat Mishra (1)
  1. Department of Computer and Information Science and Engineering, University of Florida, Gainesville, USA
  2. Advanced Security Research Group, Cisco Systems, San Jose, USA
