Using Bayesian networks and virtual coverage to hit hard-to-reach events

  • Shai Fine
  • Laurent Fournier
  • Avi Ziv
Regular Paper

Abstract

Reaching hard-to-reach coverage events is a difficult task that requires both time and expertise. Data-driven coverage directed generation (CDG) can assist in the task when the coverage events are part of a structured coverage model, but is a priori less useful when the target events are singular and not part of a model. We present a data-driven CDG technique based on Bayesian networks that can improve the coverage of cross-product coverage models. To improve the capability of the system, we also present virtual coverage models as a means for enabling data-driven CDG to reach singular events. A virtual coverage model is a structured coverage model (e.g., cross-product coverage) defined around the target event, such that the target event is a point in the structured model. The CDG system can exploit this structure to learn how to reach the target event from covered points in the structured model. A case study using CDG and virtual coverage to reach a hard-to-reach event in a multi-processor system demonstrates the usefulness of the proposed method.
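As a rough conceptual sketch of the idea described above (not the authors' implementation), the following Python snippet defines a small virtual cross-product coverage model around a hypothetical target event and scores candidate test directives by how often simulated runs hit the target's attribute values. The attribute names, directive names, and the simulation stub are all invented for illustration, and a naive smoothed-frequency estimate stands in for the Bayesian network inference used in the paper.

from collections import defaultdict
from itertools import product
import random

# Virtual coverage model: the cross product of a few event attributes
# (names are hypothetical).
ATTRIBUTES = {
    "core": ["core0", "core1"],
    "access": ["load", "store"],
    "cache_state": ["hit", "miss"],
}
COVERAGE_SPACE = list(product(*ATTRIBUTES.values()))

# The singular hard-to-reach event, expressed as one point of the model.
TARGET = ("core1", "store", "miss")

def run_simulation(directive):
    """Stand-in for a real test-generation-plus-simulation run: returns the
    coverage point that was hit. The 'design' here is faked with a biased
    random choice per directive."""
    store_bias = {"stress_stores": 0.7, "default": 0.3}.get(directive, 0.3)
    core = random.choice(ATTRIBUTES["core"])
    access = "store" if random.random() < store_bias else "load"
    cache = "miss" if random.random() < 0.4 else "hit"
    return (core, access, cache)

def score_directives(directives, runs=200):
    """Data-driven step: estimate, per directive, how often each attribute
    value of the target is observed, and score the directive by the product
    of the smoothed per-attribute frequencies (a naive factored model)."""
    scores = {}
    for d in directives:
        counts = defaultdict(int)
        for _ in range(runs):
            for value in run_simulation(d):
                counts[value] += 1
        score = 1.0
        for value in TARGET:
            score *= (counts[value] + 1) / (runs + 2)  # Laplace smoothing
        scores[d] = score
    return scores

if __name__ == "__main__":
    random.seed(0)
    scores = score_directives(["default", "stress_stores"])
    best = max(scores, key=scores.get)
    print("coverage space size:", len(COVERAGE_SPACE))
    print("directive scores:", scores)
    print("directive most likely to reach", TARGET, "is", best)

In the setting of the paper, the simulation stub would be replaced by the actual test generator and simulator, and the factored frequency estimate would be replaced by inference in a Bayesian network learned over the test directives and the attributes of the virtual coverage model, so that covered points near the target guide generation toward the target itself.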

Keywords

Functional verification · Functional coverage · Coverage directed generation · Bayesian networks

Copyright information

© Springer-Verlag 2009

Authors and Affiliations

  1. IBM Research Laboratory in Haifa, Haifa, Israel
