
Automatic Boosting of Cross-Product Coverage Using Bayesian Networks

  • Dorit Baras
  • Laurent Fournier
  • Avi Ziv
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5394)

Abstract

Closing the feedback loop from coverage data to the stimuli generator is one of the main challenges in the verification process. Typically, verification engineers with deep domain knowledge manually prepare a set of stimuli-generation directives for that purpose. Bayesian-network-based CDG (coverage directed generation) systems have been successfully used to assist the process by automatically closing this feedback loop. However, constructing these CDG systems requires manual effort and a certain amount of domain knowledge from a machine learning specialist. We propose a new method that boosts coverage at early stages of the verification process with minimal effort, namely, the fully automatic construction of a CDG system that requires no domain knowledge. Experimental results on a real-life cross-product coverage model demonstrate the efficiency of the proposed method.
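To make the feedback loop concrete, here is a minimal, hypothetical sketch in Python. It is not the authors' system: a naive-Bayes-style count model stands in for the learned Bayesian network, and every directive name, coverage attribute, and data point below is invented for illustration. Training estimates how directive values co-occur with observed coverage events; proposing directives for a target event inverts that model, which is the step that closes the loop from coverage data back to the stimuli generator.

    # Hypothetical sketch of a CDG feedback loop. A naive-Bayes-style count
    # model stands in for the paper's Bayesian network; all names and data
    # are illustrative, not taken from the paper.
    from collections import Counter, defaultdict
    import random

    # Simulated trace: each run pairs a directive setting with the
    # cross-product coverage event (a tuple of attribute values) it hit.
    runs = [
        ({"unit_bias": "fpu", "op_mix": "long"},  ("fpu", "stall")),
        ({"unit_bias": "fpu", "op_mix": "short"}, ("fpu", "flush")),
        ({"unit_bias": "lsu", "op_mix": "long"},  ("lsu", "stall")),
        ({"unit_bias": "lsu", "op_mix": "short"}, ("lsu", "flush")),
    ]

    # "Training": count directive values per coverage event, i.e. the
    # conditional tables a naive model would hold with the event as class.
    cpd = defaultdict(Counter)
    for directives, event in runs:
        for name, value in directives.items():
            cpd[(event, name)][value] += 1

    def propose_directives(target_event, directive_names):
        """Invert the model: per directive, pick the most likely value
        given the target event; back off to random exploration when the
        event was never observed (the uncovered-event case)."""
        proposal = {}
        for name in directive_names:
            counts = cpd.get((target_event, name))
            if counts:  # evidence from past runs
                proposal[name] = counts.most_common(1)[0][0]
            else:       # uncovered event: explore known values uniformly
                pool = [v for (e, n), c in cpd.items()
                        if n == name for v in c]
                proposal[name] = random.choice(pool)
        return proposal

    print(propose_directives(("fpu", "stall"),  ["unit_bias", "op_mix"]))
    print(propose_directives(("fpu", "replay"), ["unit_bias", "op_mix"]))

In the paper's setting the model is a full Bayesian network learned from simulation traces, so the inversion is posterior inference over directive nodes given coverage evidence rather than a per-attribute argmax; the backoff branch mirrors the need to generalize to events never seen in training.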

Keywords

Feature Selection, Mutual Information, Bayesian Network, Coverage Event, Coverage Attribute



Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Dorit Baras (1)
  • Laurent Fournier (1)
  • Avi Ziv (1)
  1. IBM Research Laboratory in Haifa, Israel
