Emergence in Organizations and Human Collective Intelligence
Stephen J. Guastello
Edited by John D. Lee and Alex Kirlik
Print Publication Date: Feb 2013
Subject: Psychology, Cognitive Psychology
Online Publication Date: May 2013
DOI: 10.1093/oxfordhb/9780199757183.013.0037
In This Article
- Principles of Nonlinear Dynamical Systems
- Cognitive Workload, Resilience, and Fatigue
- Collective Intelligence and Collective Action
- Emergency Response
- Learning Organizations
- Summary of Common Themes
- Future Directions
Abstract and Keywords
The traditional approach to human factors engineering focuses on interactions at the person-machine interface. When multiple person-machine interfaces are involved, however, emergent phenomena are produced by a combination of myriad interactions among agents within a system, far-from-equilibrium conditions, and processes of self-organization. Emergent phenomena vary in complexity from sandpile avalanches to phase shifts to hierarchical structures with or without top-down supervenience effects. They build on elementary nonlinear dynamics such as attractors, bifurcations, chaos, and catastrophes. Individuals, groups, and organizations all exhibit analogous dynamical cognitive processes, although events at the collective levels cannot be reduced to simply the sum of the individual parts. This chapter describes emergent phenomena that are most closely concerned with cognitive processes: collective cognition and action, networked systems, creative problem solving, team coordination, emergency response and sensemaking, dynamic decisions, diffusion of innovation, and organizational learning strategies. The characteristics of complex adaptive systems are inherent in each case.
The cover image on Kelly’s (1994) book Out of Control depicted a stylized office building designed as a grid with large windows and huge bees flying in and out. The implicit message was that work processes resemble a swarm of insects more closely than a machine that is designed for exact reproductions of objects. The queen bee does not give orders to the worker bees. The worker bees figure out their own work routines, which we observe as a swarm, based on one-on-one interactions with each other.
Emergent phenomena should be anticipated whenever multiple person-machine systems (PMSs) are interacting. They cannot be readily decomposed into more elementary precursors or causes. They often occur suddenly, hence the word “emergency,” but suddenness is not a necessary feature of emergence. The earliest concept of emergence dates back to a philosophical work by Lewes in 1875 (Goldstein, 2011). It crossed into social science in the early 20th century, when Durkheim wanted to identify sociological phenomena that could not be reduced to the psychology of individuals (Sawyer, 2005). Groups, organizations, and social institutions are examples; bilateral interactions among individuals eventually give rise to norms, deeper patterns, and other forms of superordinate structure. In the famous dictum of the Gestalt psychologists, “The whole is greater than the sum of its parts.” Thus some of the ideas described in this chapter are more than a century old, but it was not until the 1980s that social scientists began to acquire the analytic tools to exploit them fully (Guastello, 2009a).
There are many examples of emergent phenomena in organizations, and they vary in complexity. One of the simpler processes is the sandpile avalanche (Bak, 1996): If we drizzle more sand on top of an existing sandpile, the sandpile will become larger until a critical point when it avalanches and settles into a distribution of large and small piles. This chapter is necessarily constrained to those phenomena that have a significant cognitive component. After introducing a few more constructs from nonlinear dynamical systems (NDS) theory that are intrinsic to explaining how emergent events occur, this chapter expands on collective intelligence, creative problem solving, the dynamics of multiple PMSs, team coordination, emergency response (ER), and organizational learning.
Principles of Nonlinear Dynamical Systems
NDS theory is a general systems theory that explains processes of change in nonliving systems and in living systems ranging from microbiological to macroeconomic phenomena. Most of its important principles can be found in extended (Sprott, 2003) or concise form (Guastello & Liebovitch, 2009) elsewhere, and there are numerous connections among them. A compendium of techniques for testing hypotheses about NDS processes in the behavioral sciences can be found in Guastello and Gregson (2011). Principles that are most proximally related to emergent phenomena are described next.
An attractor is a region of topological space that an object enters and does not leave unless an exceptional force is applied to the object. The simplest attractor is the fixed point. An illustrative example is the attraction of metal filings to a magnet. The asymptote at the end of a learning curve is another example. There are different mathematical functions that describe the varieties of the movement of objects into and within an attractor; fixed points, oscillators, and chaos are the most notable. Behaviorally, we can observe the temporal pattern of events and make statistical associations between the data and the descriptive equations (Guastello, 2002, 2005a, 2009a; Guastello & Gregson, 2011).
A bifurcation involves the change in a dynamical field, such as the splitting of the field into two or more attractor regions or the changing of an attractor from one type to another. The logistic map (Figure 36.1) is one of the more famous bifurcations in NDS and was studied extensively by May (1976). It represents the transition from a fixed point attractor to a limit cycle, and from a limit cycle into chaos all through one equation:
Y2 = BY1(1 – Y1), 0 < Y1 < 1,
Figure 36.1 The logistic map bifurcation.
where Y is the order parameter and B is the control parameter. In the region of Figure 36.1 labeled as Period 1, the system is globally stable with dynamics that are characteristic of a fixed point attractor. The bifurcation point marks the transition from Period 1 to Period 2 where the attractor becomes a limit cycle as B increases in value.
Toward the end of Period 2, where B has increased further in value, each branch of the bifurcation pattern bifurcates again, dividing the system into four smaller regions. Here one observes cycles within cycles, or period doubling. When B continues to increase, the system bifurcates yet again, producing oscillations within oscillations within oscillations.
The system enters Period 3 when the control parameter B approaches a value of 4.0. Period 3 is full-scale chaos. The behavior of Y bears little resemblance to the relative order of Periods 1 or 2. Of further interest are the windows of relative order that striate the chaotic region. The windows contain ordered trajectories that create a path, with additional bifurcations, from one chaotic period to another.
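The three regimes can be observed directly by iterating the logistic map at different values of B. The following is a minimal sketch (the function name and parameter values are illustrative, not from the chapter):

```python
def logistic_orbit(B, y0=0.25, transient=500, keep=8):
    """Iterate Y2 = B * Y1 * (1 - Y1) past its transient and
    return the long-run values the orbit visits."""
    y = y0
    for _ in range(transient):
        y = B * y * (1 - y)
    orbit = []
    for _ in range(keep):
        y = B * y * (1 - y)
        orbit.append(round(y, 4))
    return orbit

print(logistic_orbit(2.8))  # Period 1: settles on the fixed point 1 - 1/B ~ 0.6429
print(logistic_orbit(3.2))  # Period 2: alternates between two values (a limit cycle)
print(logistic_orbit(3.9))  # Period 3 regime: chaos; the values wander without repeating
```

Rounded to four decimals, the first call prints a single repeated value and the second exactly two alternating values; the third produces a nonrepeating sequence bounded between 0 and 1.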
The logistic map can be expanded into a cubic form to reflect dynamics of even greater complexity:
Y2 = BY1(1 – Y1)(1 – Y1).
Gregson (1992, 2009) used the cubic logistic structure as the basis for developing a series of models for nonlinear psychophysical processes where the signal can have multidimensional properties and vary continuously over space and time. Alternatively, the perceiver could be allowed to move around instead of being confined to sitting in a chair watching experimental stimuli on a screen.
Bifurcations are prominent in catastrophe models that describe and explain events that involve discontinuous change. A few examples are considered later in this chapter.
Chaos is perhaps the trademark of NDS, where seemingly random events are actually describable by simple equations or systems of simple equations. One of its central properties is sensitivity to initial conditions wherein small deviations in the initial states of two objects become progressively larger as the dynamics of the system unfold after many iterations of the basic underlying function (Dore, 2009; Lorenz, 1963). Two other prominent properties of chaos are its boundedness and the nonrepetition of values over iterations. Boundedness and nonrepetition are matters of degree, but the essential ideas are that, for all the apparent randomness, the values of the critical behavior stay within confined limits and the pattern of numbers does not repeat even though there might appear to be some rough repetition in the numeric time series.
There are about five dozen varieties of chaotic behavior known mathematically (and catalogued in Sprott, 2003), although the behavioral science applications are more concentrated on the presence of chaos overall and the level of turbulence involved in the chaotic system (Guastello, 2009a). Turbulence can be quantified by metrics such as the fractal dimension, the Lyapunov exponent, the Hurst exponent, and various definitions of entropy.
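As a hedged sketch of one such metric, the largest Lyapunov exponent of the logistic map can be estimated by averaging the log of the local stretching rate, log |B(1 – 2y)|, along an orbit (a standard textbook method, not a procedure given in the chapter; parameter values are illustrative):

```python
import math

def lyapunov_logistic(B, y0=0.123, n=20000):
    """Estimate the largest Lyapunov exponent of Y2 = B*Y1*(1 - Y1)
    by averaging log |f'(y)| = log |B * (1 - 2y)| along the orbit."""
    y, total = y0, 0.0
    for _ in range(n):
        total += math.log(abs(B * (1 - 2 * y)))
        y = B * y * (1 - y)
    return total / n

# Positive exponent: chaos (at B = 4 the true value is ln 2 ~ 0.693),
# meaning nearby trajectories separate exponentially fast.
print(lyapunov_logistic(4.0))
# Negative exponent: the orbit converges onto a limit cycle.
print(lyapunov_logistic(3.2))
```

A positive exponent quantifies the sensitivity to initial conditions described above; the more positive the exponent, the more turbulent the chaos.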
Self-organization is a phenomenon in which a system in a state of chaos, high entropy, or far-from-equilibrium conditions shapes its own structure, usually by building feedback loops among the subsystems. The feedback loops control and stabilize the system in a state of lower entropy. Positive feedback loops facilitate growth, development, or radical change in the extreme. Negative feedback loops have the net impact of inhibiting change. The spontaneous assumption of order occurs without the assistance of outside agents, or managers in organizations, and has been dubbed “order for free” (Kauffman 1993, 1995).
There are a few different processes of self-organization; three of the most prominent varieties can be summarized briefly as follows: In the rugged landscape scenario, (biological) entities disperse across a landscape with the intention of finding suitable ecological niches (Kauffman, 1993, 1995). The distribution of entities depends on the number of traits the entities require to live successfully in a niche; there would be a greater quantity of entities living in niches that require only one trait, whereas there would be progressively fewer living in niches that require two, three, or more such traits. The ruggedness of the landscape is the result of the complexity of interaction of entities within a niche. More complex interactions produce a more rugged local landscape and a greater barrier to entry, whereas a low density of interactions results in a shallow local landscape and greater ease of entry. Entities that have found a relatively suitable niche often explore other possible niches using different search strategies.
In the sandpile scenario, if we drizzle more sand on top of an existing sandpile, the sandpile will become larger until it reaches a critical point when it avalanches. At that point it produces a distribution of large and small piles, such that there is a greater number of small piles and a small number of large piles. The distribution of pile sizes conforms to a power-law distribution:
Freq(X) = aX^B,
where X is the size of the pile and B is the shape parameter of the distribution. B is negative and falls within the range of 1 < |B| < 2 for self-organizing phenomena. Not all self-organizing phenomena take the form of power-law distributions (Guastello, 2005b), although a good many phenomena encountered in cognitive, organizational, and other societal processes do so (Andriani & McKelvey, 2011; Bak, 1996; Hollis, Kloos, & Van Orden, 2009; West & Deering, 1995).
The third viewpoint is centered on the nature of dissipative systems (Haken, 1984; Prigogine & Stengers, 1984). Having arrived at a high level of entropy, one would think that the system would suffer from heat death by dissipating its energy. What happens instead, however, is that the system reorganizes into a new form that preserves the system’s life and restores relative equilibrium conditions. In doing so, the system forms a hierarchical structure whereby some subsystems act more like drivers, and others like slaves. Drivers initiate and send information according to their own temporal dynamics, and slaves react to drivers’ input and dynamics while attempting to carry out their own functions.
Figure 36.2 Coupled oscillators.
As an illustrative example, consider the case of three work processes, each done by a different person or group of people, organized in a series as shown in Figure 36.2. Imagine that each one in isolation would function as an oscillator or pendulum. When Pendulum 1 oscillates, the middle one moves faster, and its motion pattern becomes more complex than strictly periodic; as a further result, the third swings chaotically. Opportunities for conflict can arise as Pendulum 3 does not like being jerked around, and probably cannot function well with all the entropy or unpredictability associated with the motion of the system it is experiencing. In human terms, the uncertainty associated with entropy is equivalent to the experience of risk, which the people or groups that reside later in the chain would like to control (Guastello, 2009b).
An important connection between emergence and other NDS principles was formed when Holland’s (1995) computer simulations illustrated how the local interaction among agents gave rise to self-organized structure. In fact, “complexity theory” got its name from the central idea that the number of interactions among many agents in a large system was too numerous to calculate individually and that simulation programs were needed that would consider the possibilities, calculate interactions among agents, and produce a final picture of the interaction results. In stronger cases of emergent order, the supervenience principle engages, whereby the superordinate structure maintains a downward causality on the behaviors and interactions among the lower-level agents (Sawyer, 2005). The top-down effect is essentially a driver-slave relationship.
Whereas self-organization is a process, emergence is a result of the process. McKelvey and Lichtenstein (2007) outlined four different types of emergence. The simplest was the avalanche with the power-law distribution. The second was the phase shift. The internal organization that would be required for a species to differentiate into more specific types is more complex than breaking into little pieces; phase shifts also occur when organisms hop suddenly from one niche to another. More generally, a phase shift is a reorganization of the internal structure of the system. Still, a phase shift does not require a hierarchical internal structure.
The third level of emergence is the formation of a hierarchical relationship among the parts of the system. Driver-slave relationships are examples. A different type of example occurs when a person or group collects information about its environment and forms a mental model of the situation. The mental model does not exhibit the top-down supervenient force until people start acting on the model and the model persists after some of the original group members have been replaced. The presence of an active top-down component reflects the fourth level of complexity for emergent events. Arguably, the dynamics of bottom-up and top-down influences are matters of degree and relative balance.
Goldstein (2011) indicated that emergent phenomena could be still more complicated. First, the automatic internal organization that characterizes Kauffman’s (1993) “order for free” might have been overemphasized by some writers. Boundary conditions shape emergent behavior too. Second, in a complex hierarchical system, there can be numerous subnetworks of agents that are drawn from different hierarchical levels. The subnetworks can be connected horizontally and interact in all sorts of combinations.
Degrees of Freedom
Self-organizing dynamics are typically observed as information flows among the subsystems. The concept of degrees of freedom was first introduced in conjunction with psychomotor movements (Bernstein, 1967; Marken, 1991; Rosenbaum, Slotta, Vaughn, & Plamondon, 1991; Turvey, 1990), and it also explains how fixed and variable upper limits to cognitive channel capacity are both viable (Guastello, Boeh, Shumaker, & Schimmels, 2012; Guastello, Boeh et al., in press; Guastello, Gorin et al., 2012). In any particular complex movement, each limb of the body is capable of moving in a limited number of ways, and the movements made by one limb restrict or facilitate movement by other limbs. For this reason, we do not walk by stepping both feet forward simultaneously, for instance. More generically, degrees of freedom are the number of component parts, such as muscles or neural networks, that could function differently to produce the final performance result.
The notion of internally connected nodes of movement is substantially simpler and more efficient than assuming that all elements of movement are controlled by a central executive function. An individual would explore several possible combinations of movement elements when learning a movement for the first time. Once learning sets in, however, the movement combinations gravitate toward conserving degrees of freedom, which is in essence a path of least resistance (Hong, 2010). The gravitation is actually a self-organization dynamic that is associated with phase shifts. Some variability in the movement still persists, however, which facilitates new adaptive responses. Sufficiently large changes in goals or demands produce a phase shift in the motor movements, which are observed as discontinuous changes.
Cognitive behaviors are thought to operate on more or less the same principle with regard to the early and later stages of schematic development, the role of executive functions, and the principle of conserving degrees of freedom (Guastello, Gorin et al., 2012; Hollis, Kloos, & Van Orden, 2009). Furthermore, cognition is tied to action in many cases, so that the entire array of relevant degrees of freedom now pertains to the perception-action sequences (Renaud, Chartier, & Albert, 2009).
In the case of work overload resulting from a fixed upper limit of human cognitive channel capacity (cf. Kantowitz, 1985), the discontinuity in performance would be the simple result of hitting a barrier. As such there would be little room for the kind of elasticity associated with variable upper limits. If variable upper limits were operating, however, the principle of conserving degrees of freedom would have a few implications; in the case of adding tasks or other demands to existing tasks, a change in one cognitive-behavioral motion would impact the other motions in the system or sequence. If it were possible to conserve degrees of freedom further, a phase shift in the cognition-action sequence would result. For example, an increased demand for visual search could result in a shift from an exhaustive search to an optimized first-terminating strategy (Townsend & Wenger, 2004).
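The difference between the two search strategies can be illustrated with a toy cost comparison (an illustration of the general idea only, not Townsend and Wenger's model):

```python
def search_cost(items, target, exhaustive):
    """Count the comparisons needed to decide whether target is present."""
    cost = 0
    for item in items:
        cost += 1
        if item == target and not exhaustive:
            return cost  # first-terminating: stop at the first match
    return cost          # exhaustive: always scan every item

display = list(range(10))
print(search_cost(display, 3, exhaustive=True))   # 10 comparisons, regardless of position
print(search_cost(display, 3, exhaustive=False))  # 4 comparisons: stops once found
```

Switching strategies cuts the expected cost of present-target searches roughly in half, which is one concrete way a cognition-action sequence can conserve degrees of freedom under increased demand.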
Catastrophe Models and Phase Shifts
Phase shifts, which have been mentioned already in other contexts, can be represented as mathematical models that involve two control parameters. Catastrophe theory itself was introduced by Thom (1975) to describe discontinuous changes of events, where the system of events can vary in complexity. The cusp model is one of the simpler models and one that has seen voluminous uses in many disciplines over the years. For some broader background on the applications and analysis, see Guastello and Gregson (2011). Applications in this chapter include cognitive workload, fatigue, resilience, and diffusion of innovation.
The response surface for the cusp model is three-dimensional and describes two stable states of behavior (Figure 36.3). Change between the two states is a function of two control parameters, asymmetry (a) and bifurcation (b). At low values of b, change is smooth; that is, y is a continuous and monotonic function of a. At high values of b, the relationship between a and y is potentially discontinuous, depending on the values of a. At the lower end of the a scale, the a-y relationship depends on the level of b, such that the a-y relationship becomes increasingly positive as b decreases. This is the traditional interaction effect. Something similar occurs at the upper end of the a scale. In the middle of the a scale, however, y is a continuous function of a only when b is low. When b is high, y changes suddenly (i.e., catastrophically) as a function of a. Said another way, at low values of a when b is high, changes occur around the lower mode and are relatively small in size. At middle values of a, changes occur between modes and are relatively large, assuming b is also large. At high values of a, changes occur around the upper mode and are again small.
The cusp response surface is the set of points where
df(y)/dy = dy/dt = y^3 – by – a. (1)
Figure 36.3 The cusp catastrophe model with labeling for cognitive workload and performance.
Figure 36.4 The cusp catastrophe potential function associated with phase shifts.
Change in behavior is denoted by the path of a control point over time. The point begins on the upper sheet, denoting behavior of one type, and is observed in that behavioral modality for a period of time. During that time, its coordinates on a and b are changing, when suddenly it reaches a fold line and drops to the lower value of the behavior, which is qualitatively different and where it remains. Reversing direction, the point is observed in the lower mode until its coordinates change to a critical pair of values; at that moment the point jumps back to the upper mode. There are two thresholds for behavior change, one ascending and one descending. The phenomenon of hysteresis refers both to the relatively frequent changes between the two behavioral states and to the two different thresholds for change.
The shaded area of the surface is the region of inaccessibility in which very few points fall. Whereas the stable states are attractors, the inaccessible region is a repeller: Points are deflected in any direction if they veer too close to the repeller region. Statistically, one would observe an anti-mode between the two stable states that would correspond to the shaded region of the surface.
Figure 36.4 illustrates the potential energy function for phase shifts as they have been defined thus far. The potential function shown in Figure 36.4 depicts the case where the bifurcation effect is strong and the two low-entropy wells are separated. A control point that is situated in the middle is trying to move from one well (attractor) to another. Its entropy level, afforded by the high-bifurcation condition, is sufficiently high that it needs only a tiny push from the asymmetry variable to land in one of the two attractor states.
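The two thresholds can be demonstrated numerically by relaxing y onto a stable point of the surface in equation (1) while sweeping a up and then back down at a fixed high value of b. The following is a minimal sketch; the parameter values and step sizes are arbitrary choices, not values from the chapter.

```python
def settle(a, b, y, dt=0.01, steps=2000):
    """Relax y to a nearby stable equilibrium of the cusp surface
    y^3 - b*y - a = 0 by following the downhill gradient."""
    for _ in range(steps):
        y -= dt * (y**3 - b * y - a)
    return y

b = 3.0                                   # high bifurcation: two stable modes
a_values = [i * 0.1 for i in range(-30, 31)]
up, down = [], []
y = -2.0
for a in a_values:                        # ascending sweep of the asymmetry parameter
    y = settle(a, b, y)
    up.append(y)
for a in reversed(a_values):              # descending sweep
    y = settle(a, b, y)
    down.append(y)

# At a = 0 the ascending sweep is still in the lower mode while the
# descending sweep is still in the upper mode: two different thresholds,
# i.e., hysteresis.
print(up[30], down[30])
```

The jump upward occurs near one fold line (a ~ +2 for b = 3) and the jump downward near the other (a ~ -2), so the state at intermediate values of a depends on the direction of travel.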
Complex Adaptive Systems
A complex adaptive system (CAS) is a living system that maintains a far-from-equilibrium status even though its behavior is stable overall (Waldrop, 1992). Although the system might have self-organized its resources to engage in a strategy for survival, it is ready to adapt to environmental or internal stimuli at a moment’s notice. When it adapts, it reorganizes its communication, feedback, or work flow patterns to respond to the new situation and engage in a pertinent response. We would observe relatively greater levels of entropy in the behavior of a healthy CAS, and less entropy, or more rigidity and stereotypic behavior, in a less functional system (Goldberger et al., 2002).
The entropy in the system is observable in subtle ways. When we repeat a motion, such as a hand gesture, the gesture does not turn out exactly the same way each time. The repetitions are similar enough for all intents and purposes, but the residual variation is a tell-tale sign that the system is capable of modifying the motion to meet variations in circumstances (Hollis et al., 2009).
Cognitive and motor processes are both embedded and embodied. Closed systems exhibit embodied processes: Once the behavior sequence is set in motion, it continues according to its intrinsic dynamics. An embedded system, in contrast, is open to environmental influences, hence the adaptation. The environmental influences might be assimilated with only small variations in the system’s behavior, or they might require accommodation, whereby the system reorganizes itself in some way to execute the task. In human cognition, the executive function is probably operating to a much greater extent in cases of accommodation, whereas automaticity prevails in cases of assimilation.
The perceptual, cognitive, and psychomotor aspects of an automatic process often do not begin that way. The perceptual, cognitive, and psychomotor parts of the CAS interact with each other, shape each other, exchange information, and are not replaceable or removable without fundamentally altering the dynamics of the system as a whole. With repetition and practice, the individual parts of the behavior organize into one flowing unit, which we recognize as part of the learning process. Trial and error at both the neurological and behavioral levels gives way to a self-organized combination of neural networks and events. Thus “stable” does not mean “without variability.” An element of variability is necessary if it will ever be possible for the person, group, or organization to attain greater levels of performance (Abbott, Button, Pepping, & Collins, 2005; Mayer-Kress, Newell, & Liu, 2009).
Attempts to correct flaws or otherwise change a part of the CAS often do not succeed because the parts adapt in such a way as to protect the system from intrusions from outside the system. A CAS can thus be considered resilient to the extent that the self-repairing feature is operating (Hollnagel, Woods, & Leveson, 2006; Sheridan, 2008).
A comparison between the propositions for a theory of adaptation in organizations that were advanced by Burke et al. (2006) and those of the theory of the complex adaptive system for organizations advanced nearly a decade earlier by Dooley (1997) appeared elsewhere recently (Guastello, 2009c). The aspects of the CAS’ adaptive capability that are more germane to cognitive processes are considered next.
Group members scan the environment and develop schemata (Dooley, 1997). A schema is essentially the same as a mental model, although it places some additional emphasis on the actions that could be taken in response to the requirements of the mental model (Newell, 1991). Schemata define rules of interaction with other agents that exist within the work group or outside the work group boundaries (Dooley, 1997). A group’s schemata are often built from existing building blocks, and new ones are inevitably brought into the group when members arrive. They often take the form of particular individual differences in job knowledge and experience.
When schemata change, requisite variety, robustness, and reliability are ideally enhanced (Dooley, 1997). Reliability denotes error-free action in the usual sense. Robustness denotes the ability of the system to withstand unpredictable shock from the environment. Requisite variety refers to Ashby’s (1956) Law: For the effective control of a system, the complexity of the controller must be at least equal to the complexity of the system that is being controlled. Complexity in this context refers to the number of system states, which are typically conceptualized as discrete outcomes.
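Ashby's Law can be made concrete with a toy control problem (an illustrative sketch, not from the chapter): the controller picks one of n_r responses for each of n_d disturbance states, the outcome is (d + r) mod n_d, and a brute-force search over all policies shows that the attainable variety of outcomes is bounded by the controller's own variety.

```python
from itertools import product

def min_outcome_variety(n_d, n_r):
    """Smallest number of distinct outcomes any control policy can achieve
    in the toy system outcome = (d + r) % n_d, found by brute force."""
    best = n_d
    for policy in product(range(n_r), repeat=n_d):
        outcomes = {(d + r) % n_d for d, r in enumerate(policy)}
        best = min(best, len(outcomes))
    return best

# A controller as complex as the system can hold the outcome to one state;
# a simpler controller cannot (6 disturbances / 2 responses -> 3 outcomes).
print(min_outcome_variety(6, 6))  # 1
print(min_outcome_variety(6, 2))  # 3
```

No policy for the two-response controller does better than three outcome states, which is exactly the ratio of disturbance variety to response variety that Ashby's Law predicts.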
The dynamics of agent interaction and problem solving give rise to the development of schemata. Once adopted, they are expected to have a supervenience effect on the further actions of the agents. A schema that is deployed or changed against a context that contains little history or precedent, as in a group’s early stages of life, might have a different impact if it were deployed in a context where a supervenience effect was occurring.
Cognitive Workload, Resilience, and Fatigue
One of the chronic problems with research in cognitive workload and fatigue is that it is generally difficult to separate the impact of workload, fatigue, other forms of stress, and practice within the conventional experimental paradigms (Hancock & Desmond, 2001; Ackerman, 2011). A viable solution, however, is afforded by the use of two catastrophe models, one for workload effects and one for fatigue (Guastello, 2003, 2006; Guastello, Boeh, Schimmels, et al., 2012; Guastello, Boeh, Shumaker, et al., 2011). They have similar structures but derive from different underlying dynamics. Although the primary research has been conducted at the individual level of analysis, the possibility of collective implications has been noted.
The cognitive workload model is analogous to Euler buckling of an elastic beam as larger loads are placed on the beam. A more elastic beam will waffle when heavily loaded, but an inelastic beam will snap under the same conditions. The application that produced the model (Guastello, 1985) was based on a study of physical labor in a wheelbarrow obstacle course. Employees in a steel manufacturing facility completed the obstacle course three times with increasing loads in their wheelbarrows. The addition of weights had the effect of separating people who displayed no slowing in their response times as loads were added from people who exhibited a sharp increase in their response times under the same conditions. The amount of change in response time was governed by a group of physiological variables, which, when taken together, indicated a condition comparable to elasticity in materials science. In the buckling model, the amount of weight on the proverbial beam was the asymmetry parameter, and the elasticity measurement was the bifurcation parameter.
For load stress, the asymmetry parameter is once again the load amount, and the bifurcation parameter is the elasticity variable, which takes the form of “coping strategies” psychologically (Guastello, Boeh, Schimmels, et al., 2012; Guastello, Boeh, Shumaker, et al., 2012; Figure 36.3). The role of coping strategies or elasticity as the bifurcation factor, which could vary across individuals and perhaps situations, explains why both variable upper limits and fixed upper limits have been reported in the experimental literature. In one recent experiment, trait anxiety acted as the bifurcation variable in a memory task where the load was augmented by competition and incentive conditions; highly anxious participants were less flexible (Guastello, Boeh, Schimmels, et al., 2012).
Thompson (2010) applied essentially the same cusp model to the possible results of high-impact decisions that are made under conditions of stress. He observed that otherwise capable leaders sometimes make disastrous decisions. Any of the load or environmental stressors that are known in stress research could be part of the asymmetry parameter. He recommended emotional intelligence as the primary variable that captures the elasticity that is needed to respond to the load demands.
The notion of coping strategies in the face of severe stress has also been interpreted as resilience in socio-technical systems (Hollnagel, Woods, & Leveson, 2006; Seligman & Matthews, 2011; Sheridan, 2008), and the connection between catastrophes and the idea of resilience is now crossing over into clinical psychology and medicine (Guastello, in press; Pincus, 2010).
Resilience in socio-technical systems poses questions such as: How well can a system rebound from a threat or assault? Can it detect critical situations before they fully arrive? Importantly, several chapters in Hollnagel et al. (2006) described “emergence” scenarios in which subcritical events combined to produce exactly the situations that required adaptation. Although the PMS was not explicitly characterized as a CAS, that was apparently the intended meaning. Many of the initial examples of resilience in organizations were post-hoc interpretations of events, as acknowledged by their authors, but it now appears that NDS theory can make them analytical.
Fatigue is defined as the loss of work capacity over time for both cognitive and physical labor (Ackerman, 2011; Guastello, 1995; Guastello & McGee, 1987). Depletion of work capacity is typically assessed by a work curve that plots performance over time; when fatigue sets in, there is a sharp drop in performance coupled with a higher level of performance variability. Not everyone experiences a decline as a result of the same expenditures, however. Some show an increase in physical strength akin to “just getting warmed up,” while others show stably high or lower performance levels for the duration of the work period. Thus Ioteyko (1920) introduced a cubic polynomial function to account for the classic and more common work curve as well as all the other variations.
Figure 36.5 The cusp catastrophe model for cognitive fatigue.
The cubic function was essentially the structure of the cusp catastrophe model for fatigue (Guastello & McGee, 1987; Guastello, Boeh, Schimmels, et al., in press; Guastello, Boeh, Shumaker, et al., 2012), shown in Figure 36.5. The fatigue model has the same cusp structure as the buckling model for workload, but the variables that contribute to the control parameters are different. Work capacity is the dependent measure that displays two stable states. Capacity and performance at a single point in time are not always easy to distinguish, but in principle it is the capacity that is subject to dramatic or small change over time. The total quantity of work done would be the main contributor to the bifurcation parameter; if the individual did not accomplish much in a fixed amount of time, there would be little fatigue in the sense of work capacity.
The asymmetry parameter would be a compensatory strength measure. For instance, in the original example (Guastello & McGee, 1987), laborers displayed differences in arm strength as a result of about two hours’ worth of standard mill labor tasks, which were primarily demanding on arm strength. They were measured on isometric arm strength and leg strength using a dynamometer before and after the work session. Leg strength showed little change after the work session, which was not surprising, but it did act as a compensation factor for arm strength; those with greater leg strength experienced less fatigue in their arms, all other things (such as total work accomplished) being equal.
In a study of cognitive fatigue, Guastello, Boeh, Shumaker, et al. (2012) found that arithmetic ability showed a compensatory effect on fatigue in an episodic memory task. A later experiment found that episodic memory had a compensatory effect on a pictorial memory task (Guastello, Boeh, Schimmels, et al., 2012).
The principle of degrees of freedom is thought to operate in fatigue dynamics as well. Not only does performance drop precipitously in the classic work curve, but it becomes more variable as well. Hong (2010) suggested that the increase in performance variability during the low-production period arises from an internal search for a possible reconfiguration of degrees of freedom. There are two plausible scenarios: According to the redistribution principle, the individual is searching for a lower-entropy (p. 542) means of accomplishing the same task or goal. If a loss of total entropy was occurring, however, the individual would be not only trying to regroup internal resources but also reducing responsiveness to the total complexity of task situations and demands, gravitating toward what amounts to the easier task options or situations, or to simpler tasks.
Collective Intelligence and Collective Action
The concept of collective intelligence originated with studies of social insects, particularly ants (Sulis, 1997, 2009). The concept crossed over to human cognitive phenomena when it became apparent that decentralized networks of people produce ideas, plans, and coordinated actions without being present in the same physical location simultaneously. The interaction among people is greatly facilitated by computer-based systems such as standard email, listservers, and web-based technologies (Guastello & Philippe, 1997; also see Bockelman Morrow & Fiore, this handbook). The growth of virtual communities gravitates to an attractor that represents a stable population. The study of collective communication patterns in technology-driven systems, which often facilitate easy tracking of specific statements and their temporal sequencing, has led to a rethinking of human interactions in real time as well (Gureckis & Goldstone, 2006). The same phenomena are sometimes known as distributed cognition.
One should bear in mind that the boundaries usually associated with “organization” are semipermeable, meaning that a great deal of information flows across organizational boundaries and might not be centralized within an organization at all. This phenomenon, together with analogies to insect colonies, was the underlying theme in Kelly (1994). With decentralized or network-based communication and decision patterns, the notion of controlling a human system necessarily changes. Consistent with the idea behind ant colonies, the top-down control that is usually inherent in organizational structures simply does not operate well any longer: Events self-organize from the bottom up. The next section of this chapter considers some selected themes: basic principles of collective intelligence, creative problem solving, team coordination, and the learning organization.
Principles of Collective Intelligence
An ant colony displays some highly organized activities such as foraging, nest building and maintenance, and travel. At the same time, each ant does not have a big plan in its little head. Rather, each ant is equipped with elementary schemata that synchronize with those of other ants when larger-scale events occur. Sulis (2009) identified several principles of ant collective intelligence from which it is possible to extrapolate analogous functions in human systems. The first two are interactive determinism and self-organization, which were described in general systems form already: The interaction among individuals gives rise to the final collective result. The final result stabilizes in interesting ways and without external intervention; in other words, the queen ant or bee is not barking (buzzing) orders to the others. The stabilization of a collective action pattern is a phase transition.
Recalling the counterpoint made earlier about embedded and embodied cognition, the embodied portion operates automatically, assimilating nuances in the environment. The embedded portion is aware of the nuances in the environment and permits adaptations or accommodations to be made. Environmental nuances, nonetheless, have an impact on the final collective result; the phenomenon is known as stochastic determinism.
Probability structures that underlie collective outcomes remain stable over time, however, and are regarded as nondispersive. They remain as such until a substantial adaptation is needed, such as when some regions of the environment become favored or disfavored. This broken ergodicity occurs at the collective level. Similar disruptions occur at the individual level as an experience of one agent impacts the behavior of another, thereby amplifying the feedback to a third. With enough uncontrolled fluctuation, the foraging path can shift suddenly. Hence broken symmetry is possible as well.
Some further principles are likely to take different forms in human as opposed to insect contexts. One is salience: The environmental cues that could induce broken symmetry are likely to work to the extent that they are sufficiently visible compared to other cues. In human contexts salience is complicated by meaning, which can be operationally defined as the connection to other ideas, personal intentions, and system goals (Kohonen, 1989). Ants do not appear to express much variety in personal intention, but humans do so regularly. Humans and human systems often have several competing goals.
Also important is that computational experiments assume that individual agents are destined to interact. Sometimes that is the case, of course, but people also reserve the choice to interact or not. (p. 543) Whatever rules they invoke to decide whether to interact are likely to play out in emergent patterns of social organization eventually (Sulis, 2008, 2009; Trofimova, 2002).
Multiple Person-Machine Systems
DeGreene (1991) made three points that are relevant to the present exposition. First, the early person-machine systems were conceptualized at the interface level, where bilateral interactions between one person and one machine occur. Information that was considered relevant was predominately atheoretical in nature and mostly geared toward defining critical numeric values for one function or another.
The newer technologies have enabled a greater number of functions to be controlled by multiple person-machine systems. As depicted in Figure 36.6, information flows between the person and machine pretty much as usual, but there are also flows between machines and between humans. The machines are linked into networks. People can communicate either in real space-time or through networks via machines. Information from one PMS affects another, and delays in communication can result in various forms of uncoordinated action. The information that transfers is often far more complex in nature.
Figure 36.6 The extended person-machine system.
Second, he observed that although the concept of the system had been introduced at an earlier time (Meister, 1977), the full implications of the system concept had not been fully explored. The concepts of self-organization, the complex adaptive system, and others considered in this chapter have been possible and practicable only in the last 15 years or so. One of the more exotic applications of both the extended person-machine system and NDS is inherent in the problem of how to control a fleet of robots (Trianni, 2008). Each is an autonomous agent, which is itself complex with respect to cognitive and psychomotor components. (Not all concepts of robotic devices involve humanoid appearance or functionality.) Self-organizing properties require information loops between the units, and the current challenge is to develop sensors and response structures that keep the cluster of robots functioning even if one of them should become impaired. A human controller is still involved, especially where the goals of the system’s actions need to be defined for a given purpose, but how much control could be, or should be, allocated to the human “executive”? How much of the behavior of the system is going to be self-organized? How much broken symmetry can be tolerated? Several science fiction movies have centered on this theme, with nothing good happening to the humans.
Third, chaotic sequences of events do occur. Sources of chaos are potentially inherent in two places. One is in the information flow that is conveyed in the machine’s displays. The other is in the output of the PMS.
Stimuli arrive over time in many cases. Humans are highly variable in their ability to interpret patterns arising from chaotic or simpler nonlinear sources (Guastello, 2002; Heath, 2002; Ward & West, 1998). Historical and predictive displays have been in use for decades, with growing levels of sophistication. Some thought has been given to the use of chaotic controllers, which could take two basic forms (Guastello, 2006; Jagacinski & Flach, 2003). One form might be designed to regularize the flow of information to the operator. The other would recognize and interpret patterns and identify options to the user. There do not appear to be known cases of chaotic controllers of either type in a PMS in operation at this time, however.
Self-organization occurs when any multiple PMS increases its rate of interaction. The chaos in its function over time would take the form of unstable patterns that form and reform, and would characterize a learning process. Self-organization would set in as a stable coordinated pattern coalesces (Guastello, Bock, Caldwell, & Bond, 2005). Synchronization can be produced even in nonliving systems with only minimal requirements—two coupled oscillators, a feedback channel between them, and a control parameter that speeds up the oscillations (Strogatz, 2003). The oscillators synchronize when they speed up fast enough. The principle also has been demonstrated with electrical circuits and mechanical clocks.
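Strogatz’s minimum recipe can be sketched numerically. The toy model below (all parameter values are illustrative assumptions, not taken from the cited work) integrates two coupled phase oscillators; when the coupling K exceeds half the difference between their natural frequencies, the phase difference settles to a constant value, which is synchronization.

```python
import math

def simulate(w1, w2, K, steps=20000, dt=0.001):
    """Euler-integrate two coupled phase oscillators (Kuramoto form).

    w1, w2 are natural frequencies; K is the coupling strength.
    All values used below are hypothetical, for illustration only."""
    th1, th2 = 0.0, 1.0
    for _ in range(steps):
        d1 = w1 + K * math.sin(th2 - th1)   # each oscillator pulls on the other
        d2 = w2 + K * math.sin(th1 - th2)
        th1 += d1 * dt
        th2 += d2 * dt
    # Wrap the final phase difference into [-pi, pi)
    return (th1 - th2 + math.pi) % (2 * math.pi) - math.pi

# K = 0.5 is above the locking threshold |w1 - w2| / 2 = 0.1, so the
# phase difference locks to a constant: the oscillators have synchronized.
locked = simulate(w1=1.0, w2=1.2, K=0.5)
```

At equilibrium the phase difference satisfies sin(φ) = (w1 − w2) / (2K), so the locked value here should sit near asin(−0.2); below the threshold, the phase difference drifts indefinitely instead.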
Complex systems that require decision making commonly involve intentional behavior and feedback loops between the person, machine, and physical environment that is ultimately the objective of control. Depending on the sensitivity of the real system to small control motions, we could end up with (p. 544) chaotic control motions. Simply driving a stable system into oscillation might be enough of a cause for alarm, and possibly damage the equipment. For reasons like these a form of resistance is often built into control systems to dampen the velocity, acceleration, or jerky movements that might have been induced by the operator.
Creative Problem Solving
Creativity is a complex phenomenon involving divergent thinking skills, some personality traits that are commonly associated with creative individuals across many professions, an environment rich in substantive and interpersonal resources, and cognitive style. Cognitive style is a combination of personality and cognition; it refers to how people might use their talents rather than the quantity of such talents. According to an early version of chance-configuration theory (Simonton, 1988), creative products are the result of a random idea generation process. Greater quantities of ideas are generated from enriched personal and professional environments. Idea elements recombine into configurations during the idea generation process. When the creative thinker latches on to a new configuration and explores it as a possible solution to a problem, self-organization of the idea elements takes place, producing the experience of insight.
In the context of NDS, however, the generation and recombination of idea elements is chaotic rather than random (Guastello, 2002). The self-organization of idea elements is largely a response to a chaotic system state. The idea elements, meanwhile, are generated by deterministic human systems, whether individually or in groups. The individuals filter out some ideas and attract others depending on their goals for problem solving. They also organize idea elements according to their own unique mental organization and experience; some of these mental organizations, or semantic lattices (Hardy, 1998), are shared with other people in the society or with other problem solvers in the group, whereas other mental organizations are more unique. The process of idea generation activates and retraces the paths that the individuals have mentally created already among idea elements, prior to any one particular problem-solving event (Guastello, 1995, 1998).
The dynamics of creative problem solving in groups that were working together in a real-time experiment were explained by a six-dimensional mushroom catastrophe model; the model featured two dependent measures (order parameters) that exhibited discontinuous change, and four control variables that governed the actual change (Guastello, 1995). In the experimental task, the participants were organized into groups of eight people who were told that they were influential personages from a hypothetical island nation. Their task was to organize a plan for developing the island’s commercial and social service infrastructure and to allocate an impending budget. At different times during their discussion, the groups received “news bulletins” of events occurring on the island that, in principle, could wreak havoc with their partially formed plans and compel an adaptive response. Participants completed a normal range personality test prior to the discussion. After the discussion they completed a questionnaire in which each person in the group rated each other person in the group on a number of variables related to communication patterns during the discussion.
The order parameters were two simultaneous and interacting clusters of social interaction patterns, which were isolated through factor analysis of the post-game questionnaire. General Participation included information giving, asking questions, and statements of agreement with other people’s ideas; it was a variable that exhibited two stable states. Especially Creative Participation included statements that initiated courses of action for the group, elaboration of ideas, and rectifying intellectual conflicts; it displayed one stable state with instability at the high contribution end of the scale. Two of the four system control parameters, both of which brought the system closer to critical points where discontinuous changes in activity levels could occur, were occupied by personality traits. One cluster of traits distinguished high-production participants from low-production participants on the factor for general contributions. Assertiveness distinguished participants who most often gave especially creative responses from others.
The other two control parameters were bifurcation variables. One bifurcation variable was the overall group activity level, which could be considered a social dynamic by itself, that affected the level of especially creative behaviors. The other bifurcation effect was captured by the effect of the “news bulletins.” News bulletins promoted different levels of general participation, but not especially creative participation; in other words, they generated more talk, but not necessarily more action.
Other studies have also explored whether computer-facilitated communication can enhance the group’s overall level of production compared to (p. 545) the production of a collection of non-interacting individuals, so long as the group is large enough to produce a critical mass of ideas. Computer-based media can facilitate chaotic levels of idea production. In this situation, chaotic refers to bursts of high and low idea production over time on the part of either individuals or groups. Larger changes in production by individuals are associated with greater quantities of ideas that are produced by other group members in between two successive inputs from a particular person. The packets of production by an individual were variably long or short, but the overall trend was increasing.
At the group level of analysis, greater productivity is associated with a relatively complex problem task, where the task can be broken down into subtopics. In an illustrative analysis, group members, who were working on genuine business problems, could work on any subtopic in any order they chose, define them as they chose, go back and forth among the subtopics, and so on (Guastello, 1998). The number of active topics increased and decreased over time in a periodic fashion. The level of output by the group was chaotic overall, but it also showed periodic rises and drops in activation level in accordance with the change in the number of active topics. Thus the result, in the thinking of synergetics (Haken, 1984), is a coupled dynamic consisting of a periodic driver and a chaotic slave. Separate nonlinear equations that characterized the driver and the slave were derived statistically. The driver was the number of active threads or subthemes in the discussion at a particular interval of time. The slave was the overall level of output, in which the driver was one of two control variables. The second control variable was the particular problem-solving conversation in play; there were three conversations studied simultaneously.
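A minimal caricature of such a periodic-driver/chaotic-slave coupling, in the spirit of synergetics rather than the statistically derived equations from the study, is a logistic map whose control parameter is modulated by a slow sine wave; every parameter value below is an assumption chosen for illustration.

```python
import math

def driver_slave(steps=500, base_r=3.7, amp=0.2, period=50):
    """A periodic driver (slow sine wave) modulating the control parameter
    of a chaotic 'slave' (the logistic map). Parameters are hypothetical."""
    x, series = 0.5, []
    for t in range(steps):
        r = base_r + amp * math.sin(2 * math.pi * t / period)  # periodic driver
        x = r * x * (1 - x)                                    # chaotic slave
        series.append(x)
    return series

output = driver_slave()
# The output stays bounded in (0, 1) and rises and falls with the driver's
# period while remaining irregular within each cycle.
```

The analogy to the discussion data: the sinusoid plays the role of the number of active subtopics, and the logistic map plays the role of the overall output level that it entrains.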
One conclusion from this line of research was that a critical mass of ideas was necessary to generate sufficient entropy, which in turn facilitated the production of new ideas. The second conclusion was that the problem should be unpacked into subthemes. If the problem itself involves how to operate on an already-complex system, it would stand to reason that the puzzle pieces of the discussion would need to self-organize as well.
Coordination occurs when group members make the same or compatible responses at the right time for optimal production. According to game theory, and contrary to conventional thinking, there is more than one type of coordination. In game-theoretical scenarios, individuals make decisions based on the utilities associated with their options. The Prisoner’s Dilemma game involves choices between cooperation and competition (Axelrod, 1984). Games in experiments can be played iteratively over time and can include large numbers of people interacting according to the same rules. Eventually, long-run patterns of cooperation and competition emerge, along with meta-rules by which players respond to defectors. The meta-rules serve to restrain symmetry breaking to varying extents (Maynard-Smith, 1982). The important point for present purposes is that cooperation emerges as a dominant strategy to the extent that players make cooperative responses simultaneously.
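The structure of the iterated Prisoner’s Dilemma is easy to make concrete. The sketch below uses the standard payoff ordering (T=5 > R=3 > P=1 > S=0) and two textbook strategies; the strategy choices and round count are illustrative, not taken from the cited studies.

```python
# Standard Prisoner's Dilemma payoffs: (row player, column player)
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_moves):
    """Cooperate first, then mirror the opponent's previous move."""
    return 'C' if not opponent_moves else opponent_moves[-1]

def always_defect(opponent_moves):
    return 'D'

def play(strat_a, strat_b, rounds=100):
    """Iterate the game; each strategy sees only its opponent's past moves."""
    seen_by_a, seen_by_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(seen_by_a), strat_b(seen_by_b)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        seen_by_a.append(b)
        seen_by_b.append(a)
    return score_a, score_b

# Two reciprocators sustain mutual cooperation (3 points per round each),
# while two defectors lock into the inferior 1-point-per-round outcome.
```

This is the sense in which cooperation emerges as a dominant strategy only when players make cooperative responses simultaneously: a reciprocator meeting a defector is exploited once and then joins the mutual-defection payoff stream.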
Other important games are strictly cooperative and do not involve competitive responses between the players as options. Two strictly cooperative games, Intersection and Stag Hunt, have received some attention in the human performance research. Intersection is considered next. Stag Hunt is described in conjunction with emergency response in a later section of this chapter.
The Intersection game requires group members to take the correct actions in the correct sequence, and to figure out the correct sequence, similar to what occurs in a four-way stop intersection. If the drivers correctly perceive the turn-taking system adopted by the preceding drivers and follow the sequence, then all cars pass through the intersection in a minimum amount of time with the lowest odds of a collision. In a real-life intersection, any of several possible rule systems could be adopted by the drivers, and each driver approaching the intersection needs to observe the strategy that is actually in effect, and then make the correct move. If a car tries to go through the intersection out of turn, then an accident could occur, or, more commonly, the other drivers would need to revert to ad lib turn-taking to untangle the confusion at the intersection.
The process of group coordination involves the development of nonverbal communication links among the participants. These links evolve with repeated practice with each other. The evolution of the links is essentially a self-organization process. Furthermore, the basic process of coordination is non-hierarchical, meaning that a leader, who might usually contribute task-structuring activities of some sort, is not required. This state of affairs is not unlike flocks of birds, herds of beasts, or schools of fish, which operate without leaders. (p. 546)
Experimental Intersection games have involved card games instead of crashing cars. Participants are required to figure out and implement a coordination rule in order for the group to acquire performance points. Note the contrast between the intersection approach and conventional thinking about shared mental models. Although shared mental models are still implicit in a successful example of intersection coordination, the mental models are acquired on the fly by the group rather than by having the model handed to them by a discussion leader.
The results of Intersection game experiments to date show that if the experimental task is not excessively difficult, the group will display a coordination learning curve (Guastello & Guastello, 1998). The coordination acquired during one task session will transfer to the learning and performance curve of a second task. If the task is too difficult, self-organization will not be complete, and the time series of coordination data will be chaotic. The acquisition of coordination is a form of synchronization wherein the group members entrain their behaviors to the others’ in the group. Psychologically, it is a form of implicit learning; participants are learning a process or procedure for interacting with each other while they are trying to figure out the solution to an explicit problem.
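The shape of such a coordination learning curve can be caricatured in a few lines. The model below is a hypothetical sketch, not the experimental task: each member makes the move the emergent turn-taking rule requires with some probability, and a fully coordinated trial reinforces everyone’s tendency to follow the rule, which is one way implicit entrainment could produce the observed curve. All parameter values are assumptions.

```python
import random

def coordination_curve(n_agents=4, trials=200, p0=0.5, gain=0.2, seed=7):
    """Toy model of implicit coordination learning (hypothetical parameters).

    Each agent follows the emergent rule with probability p; a trial succeeds
    only if all agents comply, and success entrains everyone's p upward."""
    rng = random.Random(seed)
    p = [p0] * n_agents
    curve = []
    for _ in range(trials):
        success = all(rng.random() < pi for pi in p)
        curve.append(1 if success else 0)
        if success:
            p = [pi + gain * (1.0 - pi) for pi in p]  # entrain toward the rule
    return curve

curve = coordination_curve()
# Joint success is rare early on and typically becomes far more frequent
# late in the series: a learning curve produced without any leader.
```

Note that the reinforcement is collective, not individual: no agent is told the rule, yet the group-level success rate rises, which is the self-organized, non-hierarchical character of coordination described above.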
A coordinated group can withstand changes in personnel up to a point before coordination breaks down (Guastello et al., 2005). Verbalization enhances performance to some extent, but not necessarily the level to which leaders emerge from the social unit (Guastello & Bond, 2007).
Group size acts as a control parameter in ant collective intelligence (Sulis, 2009). A critical mass of ants is required to produce a sufficient momentum of interactions to get a nest-building project going. The same principle appears to resonate in research with human creative problem-solving groups. Groups outperform the best-qualified individuals if the groups are large enough to produce a critical mass of ideas (Dennis & Valacich, 1993). Groups also have the potential for outperforming individuals because they can review the available information more thoroughly, retrieve errors more reliably, and rely on the skills and knowledge bases of more people to formulate a solution to a problem (Laughlin, 1996); here the critical mass of people probably varies with the complexity of the information processing task.
Campion, Papper, and Medsker (1996) observed that groups need to be large enough to contain enough resources to get their work accomplished, but not so large as to induce coordination difficulties. Social loafing is more likely in larger groups, however. Loafers or free riders would find utility in joining the group with the expectation that someone in the group would get the job done, and all members would benefit. Hierarchical group structures can introduce more opportunities for inefficiency (Guastello, 2002).
By using a research hypothesis concerning group size, one can assess the potential trade-off between critical mass and deficits in coordination (Guastello, 2010a). If there is a group emergence effect at all, there would be an optimal group size associated with the teams’ performance. If larger groups perform better, a group dynamic is operating that would be consistent with the critical mass principle. If mid-size groups perform better, critical mass would be associated with the mid-size groups and coordination loss with the larger groups. If smaller groups perform better, the group dynamics would reflect widespread loafing. If there were no effect for group size, then the teams’ task was carried out by the most competent individuals; it would then be debatable whether the others were loafing or just not competent at the task.
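The group-size logic above amounts to comparing trend shapes. A minimal sketch of the quadratic case follows, using synthetic data invented purely for illustration: fit performance on group size and its square; a negative quadratic coefficient with an interior vertex points to a mid-size optimum, with critical mass operating below it and coordination loss above it.

```python
# Hypothetical data: mean team performance by group size (synthetic,
# inverted-U shaped, purely to illustrate the analysis logic).
sizes  = [2, 3, 4, 5, 6, 7, 8, 9, 10]
scores = [4.1, 6.0, 7.2, 7.9, 8.1, 7.8, 7.0, 5.9, 4.5]

def quad_fit(xs, ys):
    """Least-squares fit of y = c0 + c1*x + c2*x^2 via the normal equations."""
    s = [sum(x**k for x in xs) for k in range(5)]                  # power sums
    t = [sum(y * x**k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[s[i + j] for j in range(3)] for i in range(3)]           # A @ c = t

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    def replace_col(m, col, vec):
        return [[vec[i] if j == col else m[i][j] for j in range(3)]
                for i in range(3)]

    D = det3(A)  # solve the 3x3 system by Cramer's rule
    c0, c1, c2 = (det3(replace_col(A, k, t)) / D for k in range(3))
    return c0, c1, c2

c0, c1, c2 = quad_fit(sizes, scores)
optimum = -c1 / (2 * c2)   # vertex: group size with peak predicted performance
```

With these synthetic numbers the fitted parabola opens downward and peaks near a group size of six, the pattern that, per the hypothesis above, would indicate critical mass in mid-size groups and coordination loss in larger ones.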
Emergency Response
Emergency situations, by definition, involve serious time urgency. Natural disasters, terrorist attacks, and some military operations are characterized by sudden onset, fast-changing situations, and unplanned physical focus points and times of day. Highly coordinated and adaptive responses by first responder teams are absolutely necessary to mobilize resources to maximize the number of lives saved and minimize damage to property or to the emergency response (ER) resources themselves (Comfort, 1999; Koehler, 1995, 1996). Three sets of principles that bear directly on the ER issues, in addition to team coordination, are considered next: time ecologies, situational awareness, and dynamic decisions.
ER systems, like other types of public policies, operate on multiple time horizons, or time ecologies (Koehler, 1999). At the slowest time horizon, something akin to senior management is identifying and interpreting risks of an outbreak of a natural or other type of disaster. The time horizon is occupied by foresight and action planning over a period of time that could extend for many years. Organizations or (p. 547) other socio-political systems that fail at this level are seriously impaired when an actual disaster strikes and the focus of attention shifts to the more immediate time horizons (Comfort, 1996; Pauchant & Mitroff, 1992; Reason, 1997), as when hurricane Katrina struck New Orleans in 2005 (Cigler, 2007; Derthick, 2007; van Heerden, 2007).
The mid-range time horizon initiates when the disaster actually strikes. According to Comfort (1996), the horizon for rescuing people from an earthquake region is about five days. Most of those who survive are rescued within the first two days, and the odds of survival given rescue decay sharply afterward. Meanwhile, all the shock elements of unplanned physical locations, time of day, availability or impairment of medical or transportation resources, fires and explosions, are generally fast-changing situations that require instant adaptive responses.
The situation is chaotic in the literal sense, as the flows of goods, services, fuel, and communication are seriously disrupted. Sensitivity to initial conditions figures prominently in the unfolding of events (Koehler, 1995, 1996). According to Farazmand (2007), “Crises are borne out of short chains of events, often unpredicted and unexpected, but they develop with dynamic and unfolding events over months, days, hours, or even minutes. They disrupt the routine events of life and governance, disturb established systems, and cause severe anxieties; they produce dynamics that no one can predict or control” (p. 150). Many, sometimes hundreds, of formal and informal organizations and citizen groups mobilize and coordinate (self-organize) their resources and capabilities over the short time horizon (Comfort, 1996; Morris, 1906; Morris, Morris, & Jones, 2007). Furthermore, a complex socio-technical system that is suddenly placed in a state of high entropy can produce surprises of its own, with collateral demands for quick and effective adaptive responses (McDaniel & Driebe, 2005; Sellnow, Seeger, & Ulmer, 2002). Although the skill for managing chaos is thought to be in short supply in the population of management personnel (Guastello, 2002), Morris et al. (2007) cited many specific examples where the U.S. Coast Guard and U.S. Air Force coordinated their actions in the Katrina disaster very effectively.
Events occurring at the micro-level time horizon operate at the scale of hours and minutes. Koehler (1996) emphasized the critical and problematic nature of timing at this level of activity. For instance, one decision maker can ascertain that a hospital emergency room has a certain amount of carrying capacity at a particular moment, and then dispatch some casualties to that hospital. By the time the batch of casualties arrives, other decision makers may have had the same idea and dispatched more casualties to the same location, thus producing a bottleneck. Other critical events are connected to the discovery of new casualties or the prevention of concomitant disasters, such as fires in the wake of an earthquake, or the change in the path of a forest fire caused by a sudden shift in the winds. Human communication and the physical movement of people and equipment are not always fast enough to compensate. Koehler (1996) also observed that the psychological representation of time by disaster respondents and victims is strongly constricted to the needs of the present moment. The ability to see the future, even in the short horizon of a disaster response, is greatly impaired.
Situation Awareness and Sensemaking
Situation awareness and sensemaking are two collective cognitive processes that are critical in both emergency and normal times of operation. Situation awareness research in human factors engineering is centered on the design and use of computer interfaces and information systems that might be used by operatives in dispersed physical locations (Endsley, Bolte, & Jones, 2003; Riley, Endsley, Boldstad, & Cuevas, 2006; also see Endsley, this handbook). Situation awareness is usually regarded as a process that can be assisted by technology, rather than a particular outcome (Wickens, 2008). Dynamic situations are of particular interest (Durso & Sethumadhavan, 2008), although geographic position systems in use a decade ago were notably effective in mitigating the damages of an earthquake (Comfort, 1996). Effective situation awareness requires the right information at the right time. The computer equipment that is typically involved is essentially augmenting basic human perceptual processes.
Sensemaking (Weick, 2005) is an aspect of situation awareness that places joint emphasis on the process of gathering relevant information and the cognitive integration process that occurs shortly afterward. Expectations affect the information that one seeks. Expectations that are based on what is already known produce some automatic actions that might not have the desired effect if the interpretation of the situation is wrong. Preparedness for the unknown, surprising, or emergent events could produce a more advantageous result. Weick (p. 548) used the Centers for Disease Control’s (CDC) initial diagnosis of what turned out to be West Nile virus as an illustrative example. The correct diagnosis was obtained once the CDC became aware of lab tests that did not fit the original hypothesis and new information about West Nile virus that was not previously on record. The West Nile virus was not known in the Western Hemisphere up until that point. Arrival at the correct diagnosis was facilitated by coordinated communications among the responding agents.
The CDC’s experience raised the issue of how best to prepare for an emergent disease epidemic or bioterrorist attack. Preparedness for the unknown is the hallmark of a CAS. One does not prepare for the new disease exactly, according to Weick (2005). Rather, one prepares a reasonable strategy for finding out what it is and formulating an appropriate response, which includes coordinated actions.
Sometimes the situation report is accurate, but making sense out of it is an independent challenge. In the case of the floods that swamped the Red River Valley in 1997, the National Weather Service provided accurate reports and forecasts of river water levels and when they were expected to overflow the dam. Prompt sensemaking was required to respond to power outages and fires, evacuate a hospital, and combat poisonings from household chemicals (Sellnow et al., 2002), although the efforts were not entirely successful. Sellnow et al. emphasized the importance of sensemakers’ ability to think through the intricacies of a complex socio-technical system.
Sometimes the deficits in sensemaking do not reside with the ER teams, but rather with the populations that they try to serve. The tsunami that occurred in Southeast Asia in 2004 produced a dilemma in risk perception for many people. In the early stages of the event, water receded from the shore, exposing coral and other underwater attractions. People were attracted to the shoreline to gawk. Then, too late for many, they noticed the wall of water arriving and eventually interpreted the situation as dangerous (Guastello et al., 2008), although the published photographs showed that some people still did not get the message when the rushing water was imminent (pp. 115–116). Guastello et al. developed a cusp catastrophe model (an NDS process involving two attractors, a saddle, a bifurcation, and two control parameters) for risk perception that was based on previously known catastrophe models for approach and avoidance behavior and the perception of ambiguous stimuli. Other principles from the social psychology of group dynamics, notably social comparison theory, were also relevant: The sheer quantity of people making the wrong choice could be enough to induce additional casualties from more wrong choices. The social dynamics of risk perception indicated the importance of an intervention at the time and location where group decisions were being made (p. 121).
Dynamic Decisions

Dynamic decisions involve a series of decisions that are not independent of each other, a problem situation that changes either autonomously or by virtue of person-system interaction, and decisions that are made in real time (Brehmer, 2005, p. 77). The time-phased inflow of information induces dynamics that increase the complexity of the decision situation. Currently we know that time pressure, feedback delays, and the reliability of incoming information place demands on the human operator that affect his or her performance (Brehmer, 1987; Jobidon, Rousseau, & Breton, 2005; Omodei, McLennan, & Wearing, 2005).
The computer programs that are typically used to generate scenarios for the study of dynamic decisions are alternatively known as scaled worlds or low-fidelity simulations (Schiflett, Elliott, Salas, & Coovert, 2004). As such, there is reduced concern for the realism of the peripheral features of the scenarios and a strong emphasis on the psychological constructs that the experimenter wants to assess. Realism is thus regarded as relative to the research objectives (Cooke & Shope, 2004). The systems lend themselves to reprogramming for desired experimental conditions. The game used in the Stag Hunt experiments, which was also used again in the ER study, was essentially a low-fidelity simulation, but one that is operable without a computer system or the need to reprogram one. It also allowed for more natural interaction among the team players.
Concern has been expressed, however, that the unreliability of the performance measures used in research on dynamic decisions, which are typically a single number at the end of the simulation, undermines attempts to test conventional hypotheses such as the relationship between general intelligence and performance (Brehmer, 2005; Elg, 2005). NDS theory would suggest here that the apparent unreliability of simulator performance measures could be related to the time-phased nature of the task and might not be a psychometric problem at all. As with other forms of individual and group learning, chaotic behavior occurs before the self-organization (p. 549) and stabilization at the levels of neural networks, individual behavior, and group work performance (Guastello et al., 2005). Unlike the typical learning experiments, however, the specific decisions within dynamic decision sets are not independent of each other. Choices made in an early stage can affect the options, and the utilities of options, later on. Thus a dynamic decision set is not subgame perfect, and is less so in situations where the natural disaster or human attackers are not adopting a dominant strategy in response to the ER team.
The interactions among the ER team members are not subgame perfect either. The natural disaster itself, however, does adopt a dominant strategy, although it is one of total indifference to the humans. As the situation becomes further removed from subgame perfection and players delay longer in adopting a dominant strategy, the final results of the scenario become less predictable from information about utilities, options, and strategies available early in the scenario. The qualitative change in the dynamics of the learning system suggests further that performance measures generated under a regime of instability or chaos are qualitatively different from those generated under a regime of self-organized stability.
Stag Hunt is a strictly cooperative game in which players, in essence, choose between joining the group (analogous to hunting stag) and going off on their own (analogous to hunting rabbits). Players adopt a dominant strategy that depends on how they perceive the efficacy of the group compared to their own individual efforts. A potential negative outcome in Stag Hunt is social loafing, or the free rider syndrome, in which participants join teams that are likely to be successful with the intention of letting the others do the work. The syndrome tends to become stronger from moment to moment, or from decision to decision, when the group receives feedback that its performance is taking a downturn (Guastello & Bond, 2004).
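The choice logic of the game can be sketched as an expected-payoff comparison. The payoff values below are illustrative assumptions, not taken from the studies cited; the key feature is only that the cooperative outcome pays best, while going solo pays a safe middle amount regardless of what others do.

```python
# Hedged sketch of Stag Hunt choice logic; payoff values are assumptions.
STAG_TOGETHER = 4   # payoff for hunting stag when the partner also hunts stag
STAG_ALONE = 0      # the stag hunt fails without cooperation
HARE = 3            # hunting rabbits pays the same regardless of the partner

def best_choice(p_partner_stag):
    """Pick the action with the higher expected payoff, given the perceived
    probability that the partner joins the stag hunt (perceived group efficacy)."""
    e_stag = p_partner_stag * STAG_TOGETHER + (1 - p_partner_stag) * STAG_ALONE
    return "stag" if e_stag > HARE else "hare"

best_choice(0.9)  # high perceived efficacy: expected 3.6 > 3, so join the group
best_choice(0.5)  # low perceived efficacy: expected 2.0 < 3, so defect to solo work
```

With these numbers the switch occurs at a perceived efficacy of 0.75, which is one way to see why negative performance feedback, by lowering perceived efficacy, pushes members toward loafing.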
The dynamics of Stag Hunt games are prominent in ER. The group's results will be optimal if everyone in the group pulls together on each part of the job. As mentioned earlier, the efficacy of the group can become challenged in the face of negative turns of events. A recent study (Guastello, 2010a) examined the impact of team size and performance feedback on the adaptation levels and performance of emergency response (ER) teams. Performance was measured in an experimental dynamic decision task in which ER teams of different sizes worked against an attacker who was trying to destroy a city. The complexity of the teams' and attackers' adaptation strategies and the role of the opponents' performance were assessed by nonlinear regression analysis, which featured the Lyapunov exponent (a measure of turbulence in a time series) associated with the performance trends. The results showed that teams were more readily influenced by the attackers' performance than vice versa. Teams of 9 or 12 participants were more likely to prevail against the attacker than teams of 4 or 6 participants; only teams of 12, however, were effective at dampening the adaptive responses of the attacker. In all cases, when the attackers scored points, the teams' performance on the next move declined. Overall, the attackers' performance patterns showed greater turbulence, interpreted here as adaptability, than the teams'.
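As a minimal illustration of what a Lyapunov exponent captures, consider the logistic map, whose largest exponent can be estimated by averaging the local stretching rate along the orbit. This toy calculation is mine, offered only to make the quantity concrete; the study itself estimated exponents by nonlinear regression on performance series.

```python
import math

def lyapunov_logistic(r=4.0, x0=0.3, n=10_000, burn=100):
    """Average log of the local stretching rate |f'(x)| along an orbit of the
    logistic map f(x) = r*x*(1-x). Positive values indicate turbulence/chaos;
    negative values indicate settling into a stable state."""
    x = x0
    total = 0.0
    for i in range(n + burn):
        if i >= burn:  # discard transient iterations before averaging
            total += math.log(abs(r * (1 - 2 * x)))  # |f'(x)| for the logistic map
        x = r * x * (1 - x)
    return total / n

lyapunov_logistic(r=4.0)  # positive (near ln 2): chaotic, high turbulence
lyapunov_logistic(r=2.5)  # negative: the series settles onto a fixed point
```

In the study's terms, a series with a larger positive exponent (the attackers') is more turbulent and, on that interpretation, more adaptive than a series with a smaller one (the teams').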
Learning Organizations

Finally for this chapter we consider the situation in which the organization acts as a whole unit. Any teams or work groups that are involved could be coordinated within a larger hierarchy of activities. The notion of a learning organization became a fashionable view of an organization shortly before the notion of the organization as a complex adaptive system took hold (Seo, Putnam, & Bartunek, 2004). In its earlier manifestations, learning organizations were those that had evolved processes or structures analogous to individual perception, cognition, memory, and adaptation processes. In later work, learning processes in organizations are seen to promote self-organization of dominant strategies or schemata from the bottom up. Individuals and teams adopt processes that produce ideas, schemata, mental models, and meanings that are eventually shared with other teams until some become dominant enough in the organization to shape new schemata for newcomers or new responses to new challenges (Van de Ven & Hargrave, 2004).
The perception, situation awareness, or sensemaking processes in organizational contexts require information exchange networks that extend outside the organization to other organizations in the same industry, other organizations in different industries, and of course customers. Van de Ven gave an example of a successful use of bottom-up development of wind turbine technology. Danish industries started with relatively simple technology and, through close interaction with customers and their needs, shaped (p. 550) a premier technology that is financially successful for the organizations involved. Would-be competitors from the United States, however, took an isolationist strategy and attempted to leapfrog the stages of development by developing an advanced technology quickly. They maintained little communication with customers and were generally unsuccessful in their efforts.
Diffusion of Innovation
The creative products alluded to earlier in this chapter will diffuse if they are successful. Diffusion usually takes the form of buying and adopting a product, but it could also mean adopting an idea in some other way. The most widely cited model for diffusion of innovation is the S-curve model that was introduced by Rogers (1962) and developed through numerous editions over the years. The idea is depicted in Figure 36.7. When viewed over time, there are early adopters who respond quickly to the idea. Then there is the bulk of the population that responds more gradually at first, but with a sudden shift to widespread adoption. The sudden shift is thought to be inherent in the shape of a normal distribution viewed as a cumulative function. Finally there are those who are slowest to adopt; they join in around the time the market for the product is more or less saturated.
Figure 36.7 S-curve for innovation diffusion.
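Viewed as a cumulative function over time, the S-curve in Figure 36.7 can be sketched with a logistic adoption model. The parameter values below are illustrative assumptions, not Rogers's; the shape, not the numbers, is the point.

```python
import math

def adopters(t, K=1.0, r=1.0, t_mid=10.0):
    """Cumulative fraction of adopters at time t (logistic S-curve):
    slow uptake by early adopters, a rapid middle phase, then saturation.
    K is the ceiling, r the growth rate, t_mid the midpoint of the shift."""
    return K / (1.0 + math.exp(-r * (t - t_mid)))

early = adopters(2)      # early-adopter phase: well under 1% with these parameters
midpoint = adopters(10)  # the sudden shift: exactly half the population at t_mid
late = adopters(20)      # laggards join as the market approaches saturation
```

The "sudden shift to widespread adoption" corresponds to the steep middle of the curve, where the derivative of the cumulative function peaks.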
Of course one cannot adopt even the most advantageous product or idea unless one hears about it first. Thus networks of agents are thought to facilitate communications about the innovation, and hence the adoption process (Valente, 1995). Information flows quickly to the extent that the network is tightly coupled, meaning that the density of interactions among agents is high. The downside, however, is that tightly coupled networks eventually run out of fresh ideas because all agents acquire all the ideas. Loosely coupled networks transmit information more slowly, but because they are more diffuse they have access to more sources of novelty, and thus have more to report when needed (Frantz & Carley, 2009).
There is a tendency in the literature on diffusion of innovation to assume that the innovation will diffuse and should diffuse, even though many innovations fail, and that somehow something is wrong with the people or organizations that do not adopt the innovation. Failures may instead reflect the availability of better alternatives, or good reasons for resistance to a particular innovation, cost being one of them. Thus Jacobsen and Guastello (2007) proposed a cusp catastrophe model (Figure 36.8) to describe when a particular agent will adopt an innovation. On the one hand the model still reflects the S-shape, but it is the result of a more complex process with two control parameters. Positive expectations about the innovation, which are predicated on seeking information and actually finding it, lead the agent to the inflection point where adoption could occur. A resistance factor, however, separates those who adopt at that point from those who let it go by. Adoption of an innovation in the face of strong resistance forces is likely to result in a stable adoption; the agents must really want it. Weak resistance might seem favorable to adoption at first blush, but it actually permits what was unthinkable in prior models: buying the product and not using it, or adopting it and exchanging it for something else. Thus the adoption dynamic is, in principle, reversible.
Figure 36.8 Cusp catastrophe model for the diffusion of innovations. Reprinted from Jacobsen and Guastello (2007, p. 503) with permission of the Society for Chaos Theory in Psychology & Life Sciences.
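The cusp model has a standard deterministic form; the assignment of the control parameters to expectations and resistance follows the description above.

```latex
% Standard cusp catastrophe dynamics; parameter interpretation follows the text.
\frac{dy}{dt} = -\frac{\partial V}{\partial y}
             = -\left( y^{3} - b\,y - a \right),
\qquad
V(y) = \tfrac{1}{4}y^{4} - \tfrac{b}{2}y^{2} - a\,y
```

Here y is the adoption response, the asymmetry parameter a corresponds to positive expectations built from information seeking, and the bifurcation parameter b corresponds to resistance. When b is small, y changes smoothly and reversibly with a; when b is large, two stable states (adopt, not adopt) coexist, separated by a saddle, and movement between them is sudden.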
In their assessment of the adoption behavior of 13 energy-saving innovations by large commercial or governmental facilities, Jacobsen and Guastello (2007) found that seven fit the cusp (p. 551) model remarkably well, five were more consistent with a linear model of attitude and behavior, and one fit a power-law distribution better than the other alternatives. The cusp model was most apparent for innovations that had longer amounts of time between their first introduction to the market and the time of the survey. The model was least descriptive of innovations that were either very new, and thus had not had sufficient time to diffuse, or were simply drowned out by more attractive alternatives.
An organization co-evolves with a changing environment. The organization itself emerges as a means of channeling energy into the production of products and services and fitting them to potential markets. Numerous decisions need to be made concerning the nature and scope of the market, possible product features, pricing, advertising strategies, and so forth. (Ergonomics are, of course, very important features of product design.) There is a learning process involved in isolating the most profitable combinations.
Allen (2009) reported a simulation study that examined four learning strategies and the relative effectiveness of each: (a) Darwinian learning, where the organizations start with a random strategy, organizations with good strategies survive, and organizations with poor strategies go bankrupt and are replaced by new organizations with random strategies; (b) imitate the winner, which means that organizations copy others in the environment that have apparently functional strategies; (c) trial and error, where organizations explore possible strategies, try some, observe results, re-evaluate, and perhaps try something else while continuing to consider new options; and (d) mixed strategies, where all of the previous three exist in the organizational environment. Results showed that Darwinian learning produced the worst results for the industry as a whole with the largest proportion of bankruptcies. Imitating the winner worked much better, although it was subject to large fluctuations in profitability levels; it involved imitating the winner’s limitations too. Overall the greatest success was recorded for the trial-and-error strategy, where agents learn from their mistakes and continually seek out possible improvements by exploring what Allen (2009) characterized as their landscape of opportunities.
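The contrast between the two most instructive strategies can be sketched in miniature. The toy comparison below is my own construction, not Allen's simulation: firms search a random "landscape of opportunities" for profitable strategies, with imitators capped at the best strategy anyone happens to hold, while trial-and-error firms keep exploring.

```python
import random

# Toy sketch (assumptions mine, not Allen's model): strategies 0..99 have
# fixed, randomly assigned profitability on a landscape of opportunities.
random.seed(42)
LANDSCAPE = {s: random.random() for s in range(100)}  # strategy -> profitability

def trial_and_error(start, steps=50):
    """Keep the best strategy found so far; each step, sample a new option
    and switch only if it improves on the incumbent."""
    best = start
    for _ in range(steps):
        candidate = random.randrange(100)
        if LANDSCAPE[candidate] > LANDSCAPE[best]:
            best = candidate
    return LANDSCAPE[best]

def imitate_winner(firms):
    """Every firm jumps to the most profitable strategy currently in use;
    search stops at the best strategy anyone happened to start with,
    limitations included."""
    winner = max(firms, key=LANDSCAPE.get)
    return LANDSCAPE[winner]

explorers = trial_and_error(start=0)
imitators = imitate_winner(firms=[0, 1, 2, 3])
```

The mechanism, not the particular numbers, carries Allen's point: imitation converges on whatever the current winner already does, while continual exploration keeps improving and can escape that ceiling.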
Summary of Common Themes
The cognitive functions that are usually associated with individuals all have counterparts at the group and organizational system levels. Schemata that emerge at the group or organizational levels result from a combination of high entropy and interaction among agents, thereby facilitating the dominant schemata within the collective. Once a dominant schema emerges, it has a downward influence that directs or limits the schemata of individuals within the system.
Symmetry breaking is also possible, however, and it often occurs in response to adaptive pressures. Creative problem solving is a concerted attempt to develop new schemata. There are two flows of ideas within the group that self-organize into a solution to a problem. The capability of a person, group, or organization to break symmetry and self-organize a response is an expected feature of a healthy CAS.
Coordination itself is a learning process wherein the group members entrain their behaviors to each other. This facet of emergence goes beyond the simple interaction processes that are usually encapsulated in agent-based computer modeling programs. Furthermore, there are several different coordination processes relevant to collective group behavior, not just one as commonly assumed.
ER efforts by or within organizations involve several classes of NDS processes simultaneously. Situation awareness, sensemaking, creative problem solving, coordination, and dynamic decisions all contribute to the final result. The first four involve self-organizing dynamics. All involve sensitivity to initial conditions, which is a hallmark feature of chaos. It is unlikely that all agents will have a firm grasp of the entire situation or action plan at all times simultaneously, but if they do so at the collective level, the ER efforts should be as successful as the situation allows.
At the organizational level, successful adaptation requires information flows outside the organization’s boundaries to other organizations within the industry, organizations in other industries, and the customer bases. Flows consolidate in the form of networks. Diffusion of innovation is predicated on information flow on the one hand, and resistance forces on the other. In the classic view the two forces are oppositional. In the NDS view, however, they play separate roles that affect the stability of adoption and screen the innovations that could be adopted.
There are different possible processes of learning and evolution in organizations. Simulation studies show that the most effective type of learning involves continual exploration of new ideas, trial and error, and subsequent improvement. In other words, (p. 552) organizations need to function as a CAS instead of simply imitating the winner.
Future Directions

There are numerous opportunities for future research on emergence phenomena in organizations, and it would be helpful to consider them in categories. First, there is a problem that actually emanates from basic cognitive theory: the extent to which human thought is representational (Dietrich & Markman, 2000) or computational (Gluck & Pew, 2005) in nature. According to Sulis (2009), collective intelligence in ants is computational and not representational, yet human thought processes consist of both. We can then ask how the two principles might balance in human collective intelligence and how different dynamics might ensue from different tasks with different proportions of each type of thinking.
Second, it is well known that leaders emerge from leaderless groups as the group works together for a while. Although the literature on the nonlinear dynamics of leadership emergence is substantial and growing (Guastello, 2007, 2009c, 2010b, 2011; Hazy, 2008), the topic was not included in this chapter because the cognitive features of the process have not been sufficiently specified. It would be reasonable to anticipate, however, that the cognitive processes that are part of the emergence of leaders, and the other types of emergence that have been considered here, would be substantially connected by similarities in the nonlinear dynamic processes involved. For instance, we might ask how the role of the executive function in individual cognitive processes corresponds to the contribution of leadership roles in groups, or executives in organizations more broadly.
The third class of problems in nonlinear dynamics in cognition pertains to cognitive workload, fatigue, and stress. Although some viable models have been developed empirically, the range of tasks and situations is limited; the current status of the work has stood unaltered for quite some time (Guastello, 2003, 2006), but has recently resumed, as reported here. There are thus plenty of opportunities for new research on these models. What combinations of tasks and environmental constraints induce fatigue or compensate for it beyond what we already know about resource allocation (Wickens, 2002)? How can the degrees of freedom principle suggest improvements for task design, task allocation, or task switching? Does anything emerge in these situations at the collective level? What constitutes the capacity for resilience or elasticity, and is it the same thing in every circumstance?
Fourth, Dooley (2009) observed that empirical studies on emergence in organizations (rather than simply at the group level of analysis) are in very short supply relative to the expansiveness of the theoretical works on the topic. Greater reliance on simulation strategies, such as those found in Allen (2009) concerning learning dynamics or Frantz and Carley (2009) concerning network dynamics, could produce some important new developments. Another strategy would utilize communication analysis techniques to decipher the process by which meaning is made in organizations, and how situational conditions could affect the development of meaning (Dooley & Corman, 2004; Dooley, Corman, McPhee, & Kuhn, 2003).
Finally, the concept of dynamic decisions and its experimental platform present considerable opportunities for NDS analysis. The one study on the subject (Guastello, 2010a) isolated NDS concepts and an experimental design that could inform a wide range of new studies. It is probable that the methodological issues that are currently encountered in dynamic decision research could be resolved by incorporating NDS principles and analysis.
References

Abbott A., Button C., Pepping G.-J., & Collins D. (2005). Unnatural selection: Talent identification and development in sport. Nonlinear Dynamics, Psychology, and Life Sciences, 9, 61–88.
Ackerman P. L. (2011). Cognitive fatigue. Washington, DC: American Psychological Association.
Allen P. A. (2009). Complexity, evolution, and organizational behavior. In S. J. Guastello, M. Koopmans, & D. Pincus (Eds.), Chaos and complexity in psychology: Theory of nonlinear dynamical systems (pp. 452–474). New York, NY: Cambridge University Press.
Andriani P., & McKelvey B. (2011). From skew distributions to power-law science. In P. Allen, S. Maguire, & B. McKelvey (Eds.), The Sage handbook of complexity and management (pp. 254–273). Thousand Oaks, CA: Sage.
Ashby W. R. (1956). Introduction to cybernetics. New York, NY: Wiley.
Axelrod R. (1984). The evolution of cooperation. New York, NY: Basic Books.
Bak P. (1996). How nature works: The science of self-organized criticality. New York, NY: Springer-Verlag/Copernicus.
Bernstein N. (1967). The coordination and regulation of movements. Oxford, England: Pergamon.
Brehmer B. (1987). Development of mental models for decision in technological systems. In J. Rasmussen, K. Duncan, & J. Leplat (Eds.), New technology and human error (pp. 111–120). New York, NY: Wiley.
Brehmer B. (2005). Micro-worlds and the circular relation between people and their environment. Theoretical Issues in Ergonomics Science, 6, 73–94. (p. 553)
Burke C. S., Stagl K. C., Salas E., Pierce L., & Kendall D. (2006). Understanding team adaptation: A conceptual analysis and model. Journal of Applied Psychology, 91, 1189–1207.
Campion M. A., Papper E. M., & Medsker G. J. (1996). Relations between work team characteristics and effectiveness: A replication and extension. Personnel Psychology, 49, 429–452.
Cigler B. A. (2007). The “big questions” of Katrina and the 2005 great flood of New Orleans. Public Administration Review, 67, 64–76.
Comfort L. (1996). Self-organization in disaster response: Global strategies to support local action. In G. Koehler (Ed.), What disaster response management can learn from chaos theory (pp. 94–112). Sacramento, CA: California Research Bureau, California State Library.
Comfort L. (1999). Nonlinear dynamics in disaster response: The Northridge, California earthquake, January 17, 1994. In E. Elliott & L. D. Kiel (Eds.), Nonlinear dynamics, complexity, and public policy (pp. 139–152). Commack, NY: Nova Science.
Cooke N. J., & Shope S. M. (2004). Designing a synthetic task environment. In S. G. Schiflett, L. R. Elliott, E. Salas, & M. D. Coovert (Eds.), Scaled worlds: Development, validation, and applications (pp. 263–296). Burlington, VT: Ashgate.
DeGreene K. B. (1991). Emergent complexity and person-machine systems. International Journal of Man-Machine Studies, 35, 219–234.
Dennis A. R., & Valacich J. S. (1993). Computer brainstorms: More heads are better than one. Journal of Applied Psychology, 78, 531–537.
Derthick M. (2007). Where federalism didn’t fail. Public Administration Review, 67, 36–47.
Dietrich E., & Markman A. B. (2000). Cognitive dynamics: Conceptual and representational change in humans and machines. Mahwah, NJ: Erlbaum.
Dooley K. J. (1997). A complex adaptive systems model of organization change. Nonlinear Dynamics, Psychology, and Life Sciences, 1, 69–97.
Dooley K. J. (2009). Organizational psychology. In S. J. Guastello, M. Koopmans, & D. Pincus (Eds.), Chaos and complexity in psychology: Theory of nonlinear dynamical systems (pp. 434–451). New York, NY: Cambridge University Press.
Dooley K. J., & Corman S. (2004). Dynamic analysis of news streams: Institutional versus environmental effects. Nonlinear Dynamics, Psychology, and Life Sciences, 8, 403–428.
Dooley K. J., Corman S., McPhee R. D., & Kuhn T. (2003). Modeling high-resolution broadband discourse in complex adaptive systems. Nonlinear Dynamics, Psychology, and Life Sciences, 8, 403–428.
Dore M. H. I. (2009). The impact of Edward Lorenz: An introductory overview. Nonlinear Dynamics, Psychology, and Life Sciences, 13, 243–247.
Durso F. T., & Sethumadhavan A. (2008). Situation awareness: Understanding dynamic environments. Human Factors, 50, 442–448.
Elg F. (2005). Leveraging intelligence for high performance in complex dynamic systems requires balanced goals. Theoretical Issues in Ergonomics Science, 6, 63–72.
Endsley M. R., Bolte B., & Jones D. G. (2003). Designing for situation awareness. Philadelphia, PA: Taylor & Francis.
Farazmand A. (2007). Learning from the Katrina crisis: A global and international perspective with implications for future crisis management. Public Administration Review, 67, 149–159.
Frantz T. L., & Carley K. M. (2009). Agent-based modeling within a dynamic network. In S. J. Guastello, M. Koopmans, & D. Pincus (Eds.), Chaos and complexity in psychology: Theory of nonlinear dynamical systems (pp. 475–505). New York, NY: Cambridge University Press.
Gluck K. A., & Pew R. W. (2005). Modeling human behavior with integrated cognitive architectures. Mahwah, NJ: Erlbaum.
Goldberger A. L., Amaral L. A. N., Hausdorff J. M., Ivanov P. C., Peng C. K., & Stanley H. E. (2002). Fractal dynamics in physiology: Alterations with disease and aging. Proceedings of the National Academy of Sciences, 99, 2466–2472.
Goldstein J. (2011). Emergence in complex systems. In P. Allen, S. Maguire, & B. McKelvey (Eds.), The Sage handbook of complexity and management (pp. 65–78). Thousand Oaks, CA: Sage.
Gregson R. A. M. (1992). n-Dimensional nonlinear psychophysics. Mahwah, NJ: Erlbaum.
Gregson R. A. M. (2009). Psychophysics. In S. J. Guastello, M. Koopmans, & D. Pincus (Eds.), Chaos and complexity in psychology: Theory of nonlinear dynamical systems (pp. 108–131). New York, NY: Cambridge University Press.
Guastello S. J. (1985). Euler buckling in a wheelbarrow obstacle course: A catastrophe with complex lag. Behavioral Science, 30, 204–212.
Guastello S. J. (1995). Chaos, catastrophe, and human affairs. Mahwah, NJ: Erlbaum.
Guastello S. J. (1998). Creative problem solving groups at the edge of chaos. Journal of Creative Behavior, 32, 38–57.
Guastello S. J. (2002). Managing emergent phenomena: Nonlinear dynamics in work organizations. Mahwah, NJ: Erlbaum.
Guastello S. J. (2003). Nonlinear dynamics, complex systems, and occupational accidents. Human Factors in Manufacturing, 13, 293–304.
Guastello S. J. (2005a). Nonlinear models for the social sciences. In S. A. Whelan (Ed.), The handbook of group research and practice (pp. 251–272). Thousand Oaks, CA: Sage.
Guastello S. J. (2005b). Statistical distributions and self-organizing phenomena: What conclusions should be drawn? Nonlinear Dynamics, Psychology, and Life Sciences, 9, 463–478.
Guastello S. J. (2006). Human factors engineering and ergonomics: A systems approach. Mahwah, NJ: Erlbaum.
Guastello S. J. (2007). Nonlinear dynamics and leadership emergence. Leadership Quarterly, 18, 357–369.
Guastello S. J. (2009a). Chaos as a psychological construct: Historical roots, principal findings, and current growth directions. Nonlinear Dynamics, Psychology, and Life Sciences, 13, 289–310.
Guastello S. J. (2009b). Chaos and conflict: Recognizing patterns. Emergence: Complexity in Organizations, 10(4), 1–9.
Guastello S. J. (2009c). Group dynamics: Adaptability, coordination, and leadership emergence. In S. J. Guastello, M. Koopmans, & D. Pincus (Eds.), Chaos and complexity in psychology: Theory of nonlinear dynamical systems (pp. 402–433). New York, NY: Cambridge University Press.
Guastello S. J. (2010a). Nonlinear dynamics of team performance and adaptability in emergency response. Human Factors, 52, 162–172. (p. 554)
Guastello S. J. (2010b). Self-organization and leadership emergence in emergency response teams. Nonlinear Dynamics, Psychology, and Life Sciences, 14, 179–204.
Guastello S. J. (2011). Leadership emergence in engineering design teams. Nonlinear Dynamics, Psychology, and Life Sciences, 15, 87–104.
Guastello S. J. (in press). Modeling illness and recovery with nonlinear dynamics. In J. Sturmberg & C. M. Martin (Eds.), Handbook on complexity in health. New York, NY: Springer.
Guastello S. J., Bock B., Caldwell P., & Bond R. W., Jr. (2005). Origins of group coordination: Nonlinear dynamics and the role of verbalization. Nonlinear Dynamics, Psychology, and Life Sciences, 9, 175–208.
Guastello S. J., Boeh H., Schimmels M., Gorin H., Huschen S., Davis E., … Poston K. (2012). Cusp catastrophe models for cognitive workload and fatigue in a verbally cued pictorial memory task. Human Factors.
Guastello S. J., Boeh H., Shumaker C., & Schimmels M. (2012). Catastrophe models for cognitive workload and fatigue. Theoretical Issues in Ergonomics Science, 13, 586–602.
Guastello S. J., & Bond R. W., Jr. (2004). Coordination in Stag Hunt games with application to emergency management. Nonlinear Dynamics, Psychology, and Life Sciences, 8, 345–374.
Guastello S. J., & Bond R. W., Jr. (2007). The emergence of leadership in coordination-intensive groups. Nonlinear Dynamics, Psychology, and Life Sciences, 11, 91–117.
Guastello S. J., Gorin H., Huschen S., Peters N. E., Fabisch M., & Poston K. (2012). New paradigm for task switching strategies while performing multiple tasks: Entropy and symbolic dynamics analysis of voluntary patterns. Nonlinear Dynamics, Psychology, and Life Sciences, 16, 471–497.
Guastello S. J., & Gregson R. A. M. (Eds.). (2011). Nonlinear dynamical systems analysis for the behavioral sciences using real data. Boca Raton, FL: CRC Press/Taylor & Francis.
Guastello S. J., & Guastello D. D. (1998). Origins of coordination and team effectiveness: A perspective from game theory and nonlinear dynamics. Journal of Applied Psychology, 83, 423–437.
Guastello S. J., Koehler G., Koch B., Koyen J., Lilly A., Stake C., & Wozniczka J. (2008). Risk perception when the tsunami arrived. Theoretical Issues in Ergonomic Science, 9, 95–114.Find this resource:
Guastello S. J., & Liebovitch L. S. (2009). Introduction to nonlinear dynamics and complexity. In S. J. Guastello, M. Koopmans, & D. Pincus (Eds.), Chaos and complexity in psychology: Theory of nonlinear dynamical systems (pp. 1–40). New York, NY: Cambridge University Press.Find this resource:
Guastello S. J., & McGee D. W. (1987). Mathematical modeling of fatigue in physically demanding jobs. Journal of Mathematical Psychology, 31, 248–269.Find this resource:
Guastello S. J., & Philippe P. (1997). Dynamics in the development of large problem solving groups and virtual communities. Nonlinear Dynamics, Psychology, and Life Sciences, 1, 123–149.Find this resource:
Gureckis T. M., & Goldstone R. L. (2006). Thinking in groups. Pragmatics & Cognition, 14, 293–311.
Haken H. (1984). The science of structure: Synergetics. New York, NY: Van Nostrand Reinhold.
Hancock P. A., & Desmond P. A. (Eds.). (2001). Stress, workload, and fatigue. Mahwah, NJ: Erlbaum.
Hardy C. (1998). Networks of meaning: A bridge between mind and matter. Westport, CT: Praeger.
Hazy J. K. (2008). Toward a theory of leadership in complex systems: Computational modeling explorations. Nonlinear Dynamics, Psychology, and Life Sciences, 12, 281–310.
Heath R. A. (2002). Can people predict chaotic sequences? Nonlinear Dynamics, Psychology, and Life Sciences, 6, 37–54.
Holland J. H. (1995). Hidden order: How adaptation builds complexity. Cambridge, MA: Perseus.
Hollis G., Kloos H., & Van Orden G. C. (2009). Origins of order in cognitive activity. In S. J. Guastello, M. Koopmans, & D. Pincus (Eds.), Chaos and complexity in psychology: Theory of nonlinear dynamical systems (pp. 206–241). New York, NY: Cambridge University Press.
Hollnagel E., Woods D. D., & Leveson N. (Eds.). (2006). Resilience engineering. Burlington, VT: Ashgate.
Hong S. L. (2010). The entropy conservation principle: Applications in ergonomics and human factors. Nonlinear Dynamics, Psychology, and Life Sciences, 14, 291–315.
Ioteyko J. (1920). La fatigue [Fatigue] (2nd ed.). Paris, France: Flammarion.
Jacobsen J. J., & Guastello S. J. (2007). Nonlinear models for the adoption and diffusion of innovations for industrial energy conservation. Nonlinear Dynamics, Psychology, and Life Sciences, 11, 499–520.
Jagacinski R. J., & Flach J. M. (2003). Control theory for humans: Quantitative approaches to modeling performance. Mahwah, NJ: Erlbaum.
Jobidon M.-E., Rousseau R., & Breton R. (2005). The effect of variability in temporal information on the control of a dynamic task. Theoretical Issues in Ergonomics Science, 6, 49–62.
Kantowitz B. H. (1985). Channels and stages in human information processing: A limited analysis of theory and methodology. Journal of Mathematical Psychology, 29, 135–174.
Kauffman S. A. (1993). Origins of order: Self-organization and selection in evolution. New York, NY: Oxford University Press.
Kauffman S. A. (1995). At home in the universe: The search for laws of self-organization and complexity. New York, NY: Oxford University Press.
Kelly K. (1994). Out of control: The new biology of machines, social systems, and the economic world. Reading, MA: Addison-Wesley.
Koehler G. (1995). Fractals and path-dependent processes: A theoretical approach for characterizing emergency medical responses to major disasters. In R. Robertson & A. Combs (Eds.), Chaos theory in psychology and the life sciences (pp. 199–216). Hillsdale, NJ: Erlbaum.
Koehler G. (1996). What disaster response management can learn from chaos theory. In G. Koehler (Ed.), What disaster response management can learn from chaos theory (pp. 2–41). Sacramento, CA: California Research Bureau, California State Library.
Koehler G. (1999). The time compacted globe and the high tech primitive at the millennium. In E. Elliott & L. D. Kiel (Eds.), Nonlinear dynamics, complexity, and public policy (pp. 153–174). Commack, NY: Nova Science.
Kohonen T. (1989). Self-organization and associative memory (3rd ed.). New York, NY: Springer-Verlag.
Laughlin P. R. (1996). Group decision making and collective induction. In E. Witte & J. H. Davis (Eds.), Understanding group behavior: Consensual action by small groups (pp. 61–80). Mahwah, NJ: Erlbaum.
Lorenz E. N. (1963). Deterministic nonperiodic flow. Journal of the Atmospheric Sciences, 20, 130–141.
Marken R. S. (1991). Degrees of freedom in behavior. Psychological Science, 2, 86–91.
May R. M. (1976). Simple mathematical models with very complex dynamics. Nature, 261, 459–467.
Mayer-Kress G., Newell K. M., & Liu Y.-T. (2009). Nonlinear dynamics of motor learning. Nonlinear Dynamics, Psychology, and Life Sciences, 13, 3–26.
Maynard Smith J. (1982). Evolution and the theory of games. Cambridge, England: Cambridge University Press.
McDaniel R. R., Jr., & Driebe D. J. (Eds.). (2005). Uncertainty and surprise in complex systems. New York, NY: Springer.
McKelvey B., & Lichtenstein B. B. (2007). Leadership in the four stages of emergence. In J. K. Hazy, B. B. Lichtenstein, & J. Goldstein (Eds.), Complex systems leadership theory (pp. 93–107). Litchfield Park, AZ: ISCE.
Meister D. (1977). Implications of the system concept for human factors research methodology. Proceedings of the Human Factors Society, 21, 453–456.
Morris C. (1906). The San Francisco calamity by earthquake and fire. City Unknown: W. E. Scull.
Morris J. C., Morris E. D., & Jones D. M. (2007). Reaching for the philosopher’s stone: Contingent coordination and the military’s response to Hurricane Katrina. Public Administration Review, 67, 94–106.
Newell K. M. (1991). Motor skill acquisition. Annual Review of Psychology, 42, 213–237.
Omodei M. M., McLennan J., & Wearing A. J. (2005). How expertise is applied in real-world decision environments: Head-mounted video and cued recall as a methodology for studying routines of decision making. In T. Betsch & S. Haberstroh (Eds.), The routines of decision making (pp. 271–288). Mahwah, NJ: Erlbaum.
Pauchant T. C., & Mitroff I. I. (1992). Transforming the crisis-prone organization. San Francisco, CA: Jossey-Bass.
Prigogine I., & Stengers I. (1984). Order out of chaos: Man’s new dialog with nature. New York, NY: Bantam.
Reason J. (1997). Managing the risks of organizational accidents. Brookfield, VT: Ashgate.
Renaud P., Chartier S., & Albert G. (2009). Embodied and embedded: The dynamics of extracting perceptual visual invariants. In S. J. Guastello, M. Koopmans, & D. Pincus (Eds.), Chaos and complexity in psychology: Theory of nonlinear dynamical systems (pp. 177–205). New York, NY: Cambridge University Press.
Riley J. M., Endsley M. R., Bolstad C. A., & Cuevas H. M. (2006). Collaborative planning and situation awareness in Army command and control. Ergonomics, 49, 1139–1153.
Rogers E. M. (1962). The diffusion of innovations. New York, NY: Free Press.
Rosenbaum D. A., Slotta J. D., Vaughn J., & Plamondon R. (1991). Optimal movement selection. Psychological Science, 2, 92–101.
Sawyer R. K. (2005). Social emergence: Societies as complex systems. New York, NY: Cambridge University Press.
Schiflett S. G., Elliott L. R., Salas E., & Coovert M. D. (Eds.). (2004). Scaled worlds: Development, validation, and applications. Burlington, VT: Ashgate.
Sellnow T. L., Seeger M. W., & Ulmer R. R. (2002). Chaos theory, informational needs, and natural disasters. Journal of Applied Communication Research, 30, 269–292.
Seo M.-G., Putnam L. L., & Bartunek J. M. (2004). Dualities and tensions of planned organizational change. In M. S. Poole & A. H. Van de Ven (Eds.), Handbook of organizational change and innovation (pp. 73–107). New York, NY: Oxford University Press.
Sheridan T. B. (2008). Risk, human error, and system resilience: Fundamental ideas. Human Factors, 50, 418–426.
Simonton D. K. (1988). Creativity, leadership, and change. In R. J. Sternberg (Ed.), The nature of creativity: Contemporary psychological perspectives (pp. 286–426). Cambridge, MA: MIT Press.
Sprott J. C. (2003). Chaos and time-series analysis. New York, NY: Oxford University Press.
Strogatz S. (2003). Sync: The emerging science of spontaneous order. New York, NY: Hyperion.
Sulis W. (1997). Fundamental concepts of collective intelligence. Nonlinear Dynamics, Psychology, and Life Sciences, 1, 35–54.
Sulis W. (2008). Stochastic phase decoupling in dynamical networks. Nonlinear Dynamics, Psychology, and Life Sciences, 12, 327–358.
Sulis W. (2009). Collective intelligence: Observations and models. In S. J. Guastello, M. Koopmans, & D. Pincus (Eds.), Chaos and complexity in psychology: Theory of nonlinear dynamical systems (pp. 41–72). New York, NY: Cambridge University Press.
Thom R. (1975). Structural stability and morphogenesis. New York, NY: Benjamin-Addison-Wesley.
Thompson H. L. (2010). The stress effect: Why smart leaders make dumb decisions—and what to do about it. San Francisco, CA: Jossey-Bass.
Townsend J. T., & Wenger M. J. (2004). A theory of interactive parallel processing: New capacity measures and predictions for a response time inequality series. Psychological Review, 30, 708–719.
Trianni V. (2008). Evolutionary swarm robotics: Evolving self-organizing behaviors in groups of autonomous robots. Berlin, Germany: Springer.
Trofimova I. (2002). Sociability, diversity and compatibility in developing systems: EVS approach. In J. Nation, I. Trofimova, J. Rand, & W. Sulis (Eds.), Formal descriptions of developing systems (pp. 231–248). Dordrecht, The Netherlands: Kluwer.
Turvey M. T. (1990). Coordination. American Psychologist, 45, 938–953.
Valente T. (1995). Network models of the diffusion of innovations. Cresskill, NJ: Hampton Press.
Van de Ven A. H., & Hargrave T. J. (2004). Social, technical and institutional change: A literature review and synthesis. In M. S. Poole & A. H. Van de Ven (Eds.), Handbook of organizational change and innovation (pp. 259–303). New York, NY: Oxford University Press.
van Heerden I. L. I. (2007). The failure of the New Orleans levee system following Hurricane Katrina and the pathway forward. Public Administration Review, 67, 24–35.
Waldrop M. M. (1992). Complexity: The emerging science at the edge of order and chaos. New York, NY: Simon & Schuster.
Ward L. M., & West R. L. (1998). Modeling human chaotic behavior: Nonlinear forecasting analysis of logistic iteration. Nonlinear Dynamics, Psychology, and Life Sciences, 2, 261–282.
Weick K. E. (2005). Managing the unexpected: Complexity as distributed sensemaking. In R. R. McDaniel, Jr. & D. J. Driebe (Eds.), Uncertainty and surprise in complex systems (pp. 51–65). New York, NY: Springer.
West B. J., & Deering B. (1995). The lure of modern science: Fractal thinking. Singapore: World Scientific.
Wickens C. D. (2008). Situation awareness: Review of Mica Endsley’s 1995 articles on situation awareness theory and measurement. Human Factors, 50, 397–403.
Stephen J. Guastello, Department of Psychology, Marquette University, Milwaukee, WI