An Architecture Independent Approach to Emulating Computation Intensive Workload for Early Integration Testing of Enterprise DRE Systems

  • James H. Hill
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5870)


Enterprise distributed real-time and embedded (DRE) systems are increasingly using high-performance computing architectures, such as dual-core, multi-core, and parallel computing architectures, to achieve optimal performance. Performing system integration tests on such architectures in realistic operating environments during early phases of the software lifecycle, i.e., before complete system integration time, is becoming more critical. This helps distributed system developers and testers evaluate and locate potential performance bottlenecks before they become too costly to locate and rectify. Traditional approaches either (1) rely heavily on simulation techniques or (2) are too low-level and fall outside the domain knowledge of distributed system developers and testers. Consequently, it is hard for distributed system developers and testers to produce realistic operating conditions for early integration testing of such systems.

This paper provides two contributions to facilitating early system integration testing of enterprise DRE systems. First, it provides a generalized technique for emulating computation-intensive workload irrespective of the target architecture. Second, it illustrates how the emulation technique is used to evaluate different high-performance computing architectures in early phases of the software lifecycle. The technique presented in this paper is empirically and quantitatively evaluated in the context of a representative enterprise DRE system from the domain of shipboard computing environments.
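The core of an architecture-independent workload emulation technique of this kind is a calibration step: measure how fast the host executes a fixed unit of busy work, derive a calibration factor, and then scale any target execution time into an iteration count. The sketch below is a minimal illustration of that idea, assuming a simple busy-loop as the unit of work; the function names (`calibrate`, `emulate_workload`) are hypothetical and not taken from the paper.

```python
import time

def busy_loop(iterations: int) -> int:
    """Fixed unit of CPU-bound work: arbitrary arithmetic in a tight loop."""
    total = 0
    for i in range(iterations):
        total += i * i
    return total

def calibrate(sample_iterations: int = 1_000_000) -> float:
    """Measure this host's calibration factor: iterations per millisecond."""
    start = time.perf_counter()
    busy_loop(sample_iterations)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return sample_iterations / elapsed_ms

def emulate_workload(target_ms: float, factor: float) -> float:
    """Occupy the CPU for approximately target_ms, using the calibration
    factor to translate the target time into an iteration count.
    Returns the actual elapsed time in milliseconds."""
    start = time.perf_counter()
    busy_loop(int(target_ms * factor))
    return (time.perf_counter() - start) * 1000.0

factor = calibrate()
actual_ms = emulate_workload(50.0, factor)
```

Because only the calibration factor changes across hosts, the same emulated workload specification (e.g., "this component consumes 50 ms of CPU time") can be replayed on single-core, multi-core, or parallel architectures without modification.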


Keywords: Execution Time · System Developer · Calibration Factor · Average Execution Time · Computing Architecture





Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • James H. Hill, Department of Computer and Information Science, Indiana University/Purdue University at Indianapolis, Indianapolis, USA
