Computational Theorizing: A Formal Framework of Organizational Performance

  • Zhiang Lin
  • Kathleen M. Carley
Part of the Information and Organization Design book series (INOD, volume 3)


As discussed in Chapters 1 and 2, organizational performance is affected by multiple factors, including stress, organizational design, and task environment. These factors are conceptually independent, but their effects and interactions cannot be known unless we examine them in an organizational setting. In this chapter, we describe a simulated environment in which organizations make decisions under these factors (Figure 3.1).






  11. This task is also being used by Lesgold, Levine, and Carley in a series of human experiments focused on determining hierarchical performance under stress. This task can be thought of as a ternary version of the binary choice task used in Carley (1990, 1992).
  12. The true states of the aircraft do not shift during each problem.
  13. We can think of these problems as independent of each other.
  14. In the real world, organizations usually have to react to crises within a short period of time, so there is little chance to learn and then use the newly learned knowledge during that period. Instead, organizations tend to apply what they have learned before to deal with crises.
  15. In this book, we set two seconds as one time unit. This is based on the initial lab experiments on decision making using human subjects, conducted by Kathleen Carley and her associates at Carnegie Mellon University. In the experiment, each subject processes 120 problems in about 40 minutes, or about 20 seconds per problem. For every problem, a subject reads three pieces of information (3 units), makes an experiential decision (6 units), and passes the decision (1 unit). Thus, if we let x be the number of seconds in each time unit, we have 3x + 6x + x = 20, or x = 2.
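The arithmetic in this note can be checked directly; a minimal sketch in Python, using the timings reported in the note:

```python
# Each problem consumes 3 + 6 + 1 = 10 time units: reading three pieces
# of information (3), making an experiential decision (6), and passing
# the decision along (1). 120 problems in about 40 minutes gives about
# 20 seconds per problem, so solving 10x = 20 yields x = 2 seconds.
units_per_problem = 3 + 6 + 1
seconds_per_problem = 40 * 60 / 120  # 120 problems in ~40 minutes
seconds_per_unit = seconds_per_problem / units_per_problem
print(seconds_per_unit)  # 2.0
```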
  16. We may relax this assumption and make the distribution non-uniform by assuming that certain problems appear more often than others. Doing so, however, would further complicate the study.
  17. Missing information is a problem for many organizations. For example, in China, lack of information on the date and amount of rain in the 1991 season left the land unprepared, and the countryside was devastated by the unexpected flood.
  18. Incorrect information frequently results in costly mistakes. For example, the German failure on D-Day was due, at least in part, to their "information" that Calais, rather than Normandy, was where the Allies would invade. Incorrect information is also, at least in part, why friendly fire caused one in four casualties during the Gulf War.
  19. For example, the Americans were unprepared when the Japanese attacked Pearl Harbor, in part because some officers were on leave.
  20. Prior to the Challenger accident (Rogers et al., 1989), there was a communication breakdown between the contractor Thiokol and NASA management, and information about the O-ring failed to be communicated. Communication breakdowns are also quite common in wartime, when military units must remain radio silent in order to preserve secrecy.
  21. For example, in the chemical explosion disaster in Flixborough, Britain, in 1974 (Lagadec, 1981), a new technician who had little experience dealing with chemicals was virtually unable to handle the situation, and his lack of experience accelerated the disaster.
  22. We have also examined an alternative matrix structure, in which only six of the nine baseline analysts report to two managers, while the three remaining analysts report to a single manager. The performance of organizations with this structure falls between that reported for the hierarchy and the matrix.
  23. In dealing with ternary choices, the simple majority rule for binary choices has to be slightly modified to make it applicable.
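The note does not spell out the modification; one plausible version is a plurality rule with a fixed tie-breaking order. The sketch below is illustrative only, and the tie order is our assumption, not the book's rule:

```python
from collections import Counter

def ternary_majority(votes, tie_order=("hostile", "neutral", "friendly")):
    """Pick the most frequent of the three choices; break ties by an
    assumed fixed preference order (hypothetical, for illustration)."""
    counts = Counter(votes)
    best = max(counts.values())
    # Walk the tie order and return the first choice tied for the lead.
    tied = [c for c in tie_order if counts.get(c, 0) == best]
    return tied[0]

print(ternary_majority(["friendly", "hostile", "hostile"]))  # hostile
```

With three options a three-way tie is possible, which a binary majority rule never faces; the fixed preference order is one simple way to keep the rule deterministic.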
  24. The task decomposition scheme has also been referred to as the information access structure (e.g., Carley, 1991a, 1992) or the task process structure (Mackenzie, 1978). We use the term task decomposition scheme to (1) emphasize the role of the task environment in organizational performance, and (2) clearly differentiate ties between people and data (the task decomposition scheme) from ties between people and people (the organizational structure).
  25. We have also examined two other task decomposition schemes: segregated-2 and overlapped-2. The segregated-2 scheme differs from the segregated-1 scheme (labeled simply segregated in this book) in terms of which analyst sees which specific characteristic. The results based on the segregated-2 scheme, however, are close to those for the segregated scheme examined in this book, suggesting that the exact order of information is not highly critical. In the overlapped-2 case, each analyst has access to three pieces of information, such that two pieces are shared (overlapped) with the next analyst. The results for this scheme are similar to those for the simple overlap pattern examined in this book.
  26. In this book, each division consists of three baseline analysts with a manager. This is true for the hierarchy and matrix structures. In the team-with-voting and team-with-a-manager structures, however, the distinctions among divisions are less apparent.
  27. During training, organizations face no time pressure in making decisions and learning from feedback; there is no time constraint. Each agent's memory includes information only on task categorization experience, not on time pressure, though agents may be trained to be faster.
  28. Each aircraft is said to be unique if the characterization of its nine characteristics is not repeated elsewhere. Two characteristics may differ in value while sharing the same characterization; for example, a Speed of 300 miles/hr and one of 250 miles/hr are both characterized as a low value, or friendly.
  29. For example, in the Pittsburgh Oil Spill case (Comfort et al., 1989), personnel were not trained according to strict procedures, but mainly followed their previous experience.
  30. This process is similar to that in neural network studies, in which the network nodes are repeatedly given different problems and feedback from which to learn.
  31. As in the procedure for experiential training, we may also be interested in seeing how a probability approach (which is actually stored in the DYCORP framework), instead of the dominance approach we have studied so far, would affect organizational performance. By a probability approach we mean that the agent chooses "friendly", "neutral", or "hostile" according to the probability distribution of the correct decisions for that sub-task in the past. For example, for the sub-task "Speed = High", "Altitude = Medium", and "Size = Large", if the numbers of true decisions of "friendly", "neutral", and "hostile" are 300, 600, and 900 respectively, then an agent using the dominance approach described in this book will always choose "hostile", while an agent using the probability approach will pick "hostile" 50% of the time, "neutral" 33% of the time, and "friendly" 17% of the time.
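The two approaches can be sketched as follows; the function names are ours, and the counts are taken from the example in the note:

```python
import random

# Past counts of true decisions for the example sub-task
# "Speed = High, Altitude = Medium, Size = Large" (from the note).
counts = {"friendly": 300, "neutral": 600, "hostile": 900}

def dominance_choice(counts):
    # Dominance approach: always pick the historically most frequent
    # true decision for this sub-task.
    return max(counts, key=counts.get)

def probability_choice(counts, rng=random):
    # Probability approach: sample a decision in proportion to the past
    # frequencies of true decisions (here 17% / 33% / 50%).
    choices, weights = zip(*counts.items())
    return rng.choices(choices, weights=weights, k=1)[0]

print(dominance_choice(counts))  # hostile
```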
  32. For example, in the Vincennes case (Rochlin, 1991), radar operators were trained by following strict procedures.
  33. For example, in the Chernobyl case (Silver, 1987), the agents in the organization were proactive; that is, they tended to make decisions on their own, though some were incorrect. In the Hinsdale Telecommunication Outage case (Pauchant et al., 1992), by contrast, the agents tended to act only when ordered to; they were thus reactive.
  34. A baseline analyst does not have to ask for information, as he or she has no subordinates.
  35. We assume that an interruption can only come from an upper-level manager, and that the top manager has no upper-level manager.
  36. The top manager asks for information first, then tries to read decisions from subordinates.
  37. We have also examined a non-decomposable rule where Sum = F1 * F2 * F3 + F3 + F4 * F5 * F6 + F6 + F7 * F8 * F9 + F9. This rule generates results similar to those of the non-decomposable rule described. The fact that the results are similar suggests that decomposability in general is more of an issue than the specific type of decomposability.
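The alternative rule in this note can be written directly as a function; a minimal sketch, in which the function name is ours:

```python
def nondecomposable_sum(f):
    # f holds the nine task components F1..F9 (1-indexed in the text);
    # the rule is Sum = F1*F2*F3 + F3 + F4*F5*F6 + F6 + F7*F8*F9 + F9.
    # Each product term couples three components, so no single component
    # determines its group's contribution on its own.
    F1, F2, F3, F4, F5, F6, F7, F8, F9 = f
    return F1 * F2 * F3 + F3 + F4 * F5 * F6 + F6 + F7 * F8 * F9 + F9

print(nondecomposable_sum([1] * 9))  # 6
```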
  38. Fi is a task component, here a characteristic of the aircraft.
  39. In the simulation described, we further categorize those problems whose sum equals 17, such that some are friendly and others are neutral. Similarly, for those problems whose sum is 19, we categorize them such that some are hostile and others are neutral. This categorization is necessary so that the number of problems in each category is approximately one third of the total, thus forming a dispersed task environment.
  40. Detailed data for compiling Figure 3.10 are in Table B.1 of Appendix B.
  41. Detailed data for compiling Figure 3.11 are in Table B.2 of Appendix B.
  42. Detailed data for compiling Figure 3.12 are in Table B.3 of Appendix B.
  43. Detailed data for compiling Figure 3.13 are in Table B.4 of Appendix B.
  44. Detailed data for compiling Figure 3.14 are in Table B.5 of Appendix B.
  45. Detailed data for compiling Figure 3.15 are in Table B.6 of Appendix B.
  46. Detailed data for compiling Figure 3.16 are in Table B.7 of Appendix B.
  47. Detailed data for compiling Figure 3.17 are in Table B.8 of Appendix B.
  48. Detailed data for compiling Figure 3.18 are in Table B.9 of Appendix B.
  49. Detailed data for compiling Figure 3.19 are in Table B.10 of Appendix B.

Copyright information

© Springer Science+Business Media New York 2003

Authors and Affiliations

  • Zhiang Lin (1)
  • Kathleen M. Carley (2)
  1. School of Management, University of Texas at Dallas, Richardson, USA
  2. School of Computer Science, Carnegie Mellon University, Pittsburgh, USA
