Emergence in Organizations and Human Collective Intelligence


Stephen J. Guastello

The Oxford Handbook of Cognitive Engineering

Edited by John D. Lee and Alex Kirlik

Print Publication Date: Feb 2013
Subject: Psychology, Cognitive Psychology
Online Publication Date: May 2013
DOI: 10.1093/oxfordhb/9780199757183.013.0037

Abstract and Keywords

The traditional approach to human factors engineering focuses on interactions at the person-machine interface. When multiple person-machine interfaces are involved, however, emergent phenomena are produced by a combination of myriad interactions among agents within a system, far-from-equilibrium conditions, and processes of self-organization. Emergent phenomena vary in complexity from sandpile avalanches to phase shifts to hierarchical structures with or without top-down supervenience effects. They build on elementary nonlinear dynamics such as attractors, bifurcations, chaos, and catastrophes. Individuals, groups, and organizations all exhibit analogous dynamical cognitive processes, although events at the collective levels cannot be reduced to simply the sum of the individual parts. This chapter describes emergent phenomena that are most closely concerned with cognitive processes: collective cognition and action, networked systems, creative problem solving, team coordination, emergency response and sensemaking, dynamic decisions, diffusion of innovation, and organizational learning strategies. The characteristics of complex adaptive systems are inherent in each case.

Keywords: self-organization, complex adaptive system, emergence, chaos, nonlinear dynamics, group or team dynamics

The cover image on Kelly’s (1994) book Out of Control depicted a stylized office building designed as a grid with large windows and huge bees flying in and out. The implicit message was that work processes resemble a swarm of insects more closely than a machine that is designed for exact reproductions of objects. The queen bee does not give orders to the worker bees. The worker bees figure out their own work routines, which we observe as a swarm, based on one-on-one interactions with each other.

Emergent phenomena should be anticipated whenever multiple person-machine systems (PMSs) are interacting. They cannot be readily decomposed into more elementary precursors or causes. They often occur suddenly, hence the word “emergency,” but suddenness is not a necessary feature of emergence. The earliest concept of emergence dates back to a philosophical work by Lewes in 1875 (Goldstein, 2011). It crossed into social science in the early 20th century, when Durkheim wanted to identify sociological phenomena that could not be reduced to the psychology of individuals (Sawyer, 2005). Groups, organizations, and social institutions are examples; bilateral interactions among individuals eventually give rise to norms, deeper patterns, and other forms of superordinate structure. In the famous dictum of the Gestalt psychologists, “The whole is greater than the sum of its parts.” Thus some of the ideas described in this chapter are more than a century old, but it was not until the 1980s that social scientists began to acquire the analytic tools to exploit them fully (Guastello, 2009a).

There are many examples of emergent phenomena in organizations, and they vary in complexity. One of the simpler processes is the sandpile avalanche (Bak, 1996): If we drizzle more sand on top of an existing sandpile, the sandpile will become larger until a critical point when it avalanches and settles into a distribution of large and small piles. This chapter is necessarily constrained to those that have a significant cognitive component. After introducing a few more constructs from nonlinear dynamical systems (NDS) theory that are intrinsic to explaining how emergent events occur, this chapter expands on collective intelligence, creative problem solving, the dynamics of multiple PMSs, team coordination, emergency response (ER), and organizational learning.

Principles of Nonlinear Dynamical Systems

Elementary Dynamics

NDS theory is a general systems theory that explains processes of change in nonliving systems and in living systems ranging from microbiological to macroeconomic phenomena. Most of its important principles can be found in extended (Sprott, 2003) or concise form (Guastello & Liebovitch, 2009) elsewhere, and there are numerous connections among them. A compendium of techniques for testing hypotheses about NDS processes in the behavioral sciences can be found in Guastello and Gregson (2011). Principles that are most proximally related to emergent phenomena are described next.

An attractor is a piece of topological space wherein an object enters and does not leave unless an exceptional force is applied to the object. The simplest attractor is the fixed point. An illustrative example is the attraction of metal filings to a magnet. The asymptote at the end of a learning curve is another example. There are different mathematical functions that describe the varieties of the movement of objects into and within an attractor; fixed points, oscillators, and chaos are the most notable. Behaviorally, we can observe the temporal pattern of events and make statistical associations between the data and the descriptive equations (Guastello, 2002, 2005a, 2009a; Guastello & Gregson, 2011).

A bifurcation involves the change in a dynamical field, such as the splitting of the field into two or more attractor regions or the changing of an attractor from one type to another. The logistic map (Figure 36.1) is one of the more famous bifurcations in NDS and was studied extensively by May (1976). It represents the transition from a fixed point attractor to a limit cycle, and from a limit cycle into chaos, all through one equation:

Y2 = BY1(1 – Y1), 0 < Y1 < 1,

Figure 36.1 The logistic map bifurcation.

where Y is the order parameter and B is the control parameter. In the region of Figure 36.1 labeled as Period 1, the system is globally stable with dynamics that are characteristic of a fixed point attractor. The bifurcation point marks the transition from Period 1 to Period 2 where the attractor becomes a limit cycle as B increases in value.

Toward the end of Period 2, where B has increased further in value, the bifurcation pattern bifurcates again, dividing the system into four smaller regions. Here one observes cycles within cycles, or period doubling. When B continues to increase, the system bifurcates yet again, so that there are oscillations within oscillations within oscillations.

The system enters Period 3 when the control parameter B approaches a value of 4.0. Period 3 is full-scale chaos. The behavior of Y bears little resemblance to the relative order of Periods 1 or 2. Of further interest are the windows of relative order that striate the chaotic region. The windows contain ordered trajectories that create a path, with additional bifurcations, from one chaotic period to another.
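
Because the logistic map is a single equation, these regimes can be verified directly by iteration. The following sketch (plain Python; the particular values of B are illustrative choices, not taken from the text) prints the long-run behavior in each regime: a single repeating value in Period 1, an alternating pair and then a four-cycle in Period 2, and a nonrepeating sequence in the chaotic regime.

```python
# Iterate the logistic map Y2 = B*Y1*(1 - Y1) and report the values it
# settles into after a transient period.
def logistic_tail(b, y=0.3, transient=500, keep=8):
    for _ in range(transient):
        y = b * y * (1.0 - y)
    tail = []
    for _ in range(keep):
        y = b * y * (1.0 - y)
        tail.append(round(y, 4))
    return tail

# B = 2.8: fixed point; B = 3.2: two-cycle; B = 3.52: four-cycle;
# B = 3.9: chaos (illustrative parameter choices)
for b in (2.8, 3.2, 3.52, 3.9):
    print(f"B = {b}: {logistic_tail(b)}")
```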

The logistic map can be expanded into a cubic form to reflect dynamics of even greater complexity:

Y2 = BY1(1 – Y1)(1 – Y1).

Gregson (1992, 2009) used the cubic logistic structure as the basis for developing a series of models for nonlinear psychophysical processes where the signal can have multidimensional properties and vary continuously over space and time. Alternatively, the perceiver could be allowed to move around instead of being confined to sitting in a chair watching experimental stimuli on a screen.

Bifurcations are prominent in catastrophe models that describe and explain events that involve discontinuous change. A few examples are considered later in this chapter.

Chaos is perhaps the trademark of NDS, where seemingly random events are actually describable by simple equations or systems of simple equations. One of its central properties is sensitivity to initial conditions wherein small deviations in the initial states of two objects become progressively larger as the dynamics of the system unfold after many iterations of the basic underlying function (Dore, 2009; Lorenz, 1963). Two other prominent properties of chaos are its boundedness and the nonrepetition of values over iterations. Boundedness and nonrepetition are matters of degree, but the essential ideas are that, for all the apparent randomness, the values of the critical behavior stay within confined limits and the pattern of numbers does not repeat even though there might appear to be some rough repetition in the numeric time series.
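
Sensitivity to initial conditions can be demonstrated with the logistic map at B = 4, a conventional choice for full-scale chaos (the numeric details below are illustrative, not from the chapter). Two trajectories that start 10^-8 apart diverge to differences on the order of the attractor itself within a few dozen iterations, while both remain bounded between 0 and 1:

```python
# Two nearly identical initial states iterated under the same chaotic map.
b = 4.0
y1, y2 = 0.3, 0.3 + 1e-8
for step in range(1, 41):
    y1 = b * y1 * (1.0 - y1)
    y2 = b * y2 * (1.0 - y2)
    if step % 10 == 0:
        # the gap roughly doubles each step until it saturates,
        # yet both trajectories always stay inside (0, 1)
        print(f"step {step:2d}: gap = {abs(y1 - y2):.8f}")
```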

There are about five dozen varieties of chaotic behavior known mathematically (and catalogued in Sprott, 2003), although the behavioral science applications are more concentrated on the presence of chaos overall and the level of turbulence involved in the chaotic system (Guastello, 2009a). Turbulence can be quantified by metrics such as the fractal dimension, the Lyapunov exponent, the Hurst exponent, and various definitions of entropy.
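
As a concrete instance of one such metric, the largest Lyapunov exponent of the logistic map can be estimated as the long-run average of ln|B(1 − 2Y)|, the log of the map's local stretching rate; this is a standard textbook formula rather than one given in this chapter. The estimate is negative in the fixed-point and oscillatory regimes and positive (approaching ln 2 ≈ 0.693 at B = 4) in the chaotic regime:

```python
import math

def lyapunov_logistic(b, y=0.3, n=100_000, transient=1_000):
    # average the log of the local stretching rate |f'(y)| = |B*(1 - 2y)|
    for _ in range(transient):
        y = b * y * (1.0 - y)
    total = 0.0
    for _ in range(n):
        y = b * y * (1.0 - y)
        total += math.log(abs(b * (1.0 - 2.0 * y)))
    return total / n

for b in (2.8, 3.2, 4.0):  # fixed point, limit cycle, chaos
    print(f"B = {b}: Lyapunov exponent = {lyapunov_logistic(b):+.3f}")
```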

Complex Dynamics

Self-organization is a phenomenon in which a system in a state of chaos, high entropy, or far-from-equilibrium conditions shapes its own structure, usually by building feedback loops among the subsystems. The feedback loops control and stabilize the system in a state of lower entropy. Positive feedback loops facilitate growth, development, or radical change in the extreme. Negative feedback loops have the net impact of inhibiting change. The spontaneous assumption of order occurs without the assistance of outside agents, or managers in organizations, and has been dubbed “order for free” (Kauffman, 1993, 1995).

There are a few different processes of self-organization; three of the most prominent varieties can be summarized briefly as follows: In the rugged landscape scenario, (biological) entities disperse across a landscape with the intention of finding suitable ecological niches (Kauffman, 1993, 1995). The distribution of entities depends on the number of traits the entities require to live successfully in a niche; there would be a greater quantity of entities living in niches that require only one trait, whereas there would be progressively fewer living in niches that require two, three, or more such traits. The ruggedness of the landscape is the result of the complexity of interaction of entities within a niche. More complex interactions produce a more rugged local landscape and a greater barrier to entry, whereas a low density of interactions results in a shallow local landscape and greater ease of entry. Entities that have found a relatively suitable niche often explore other possible niches using different search strategies.

In the sandpile scenario, if we drizzle more sand on top of an existing sandpile, the sandpile will become larger until it reaches a critical point when it avalanches. At that point it produces a distribution of large and small piles, such that there is a greater number of small piles and a small number of large piles. The distribution of pile sizes conforms to a power-law distribution:

Freq(X) = aX^B,

where X is the size of the pile and B is the shape parameter of the distribution. B is negative and falls within the range of 1 < |B| < 2 for self-organizing phenomena. Not all self-organizing phenomena take the form of power law distributions (Guastello, 2005b), although a good many phenomena encountered in cognitive, organizational, and other societal processes do so (Andriani & McKelvey, 2011; Hollis, Kloos, & Van Orden, 2009; Bak, 1996; West & Deering, 1995).
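
A quick way to check observed frequency data against this form is to regress log frequency on log size: the slope is an estimate of the shape parameter B. The sketch below applies that to synthetic heavy-tailed “pile sizes” (an assumed data set for illustration only; the naive regression is biased in the sparse tail, and serious applications use maximum-likelihood estimators instead):

```python
import math
import random
from collections import Counter

random.seed(1)
# synthetic heavy-tailed "pile sizes"; real applications would use
# observed event sizes here
sizes = [int(random.paretovariate(0.8)) for _ in range(5000)]
counts = Counter(sizes)

xs = [math.log(s) for s in sorted(counts)]           # log size
ys = [math.log(counts[s]) for s in sorted(counts)]   # log frequency

# ordinary least-squares slope of log-frequency on log-size
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print(f"estimated shape parameter B = {slope:.2f}")  # negative slope
```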

The third viewpoint is centered on the nature of dissipative systems (Haken, 1984; Prigogine & Stengers, 1984). Having arrived at a high level of entropy, one would think that the system would suffer from heat death by dissipating its energy. What happens instead, however, is that the system reorganizes into a new form that preserves the system’s life and restores relative equilibrium conditions. In doing so, the system forms a hierarchical structure whereby some subsystems act more like drivers, and others like slaves. Drivers initiate and send information according to their own temporal dynamics, and slaves react to drivers’ input and dynamics while attempting to carry out their own functions.

Figure 36.2 Coupled oscillators.

As an illustrative example, consider the case of three work processes, each done by a different person or group of people, organized in a series as shown in Figure 36.2. Imagine that each one in isolation would function as an oscillator or pendulum. When Pendulum 1 oscillates, the middle one moves faster, and its motion pattern becomes more complex than strictly periodic; as a further result, the third swings chaotically. Opportunities for conflict can arise as Pendulum 3 does not like being jerked around, and probably cannot function well with all the entropy or unpredictability associated with the motion of the system it is experiencing. In human terms, the uncertainty associated with entropy is equivalent to the experience of risk, which the people or groups that reside later in the chain would like to control (Guastello, 2009b).
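
The driver-slave chain in Figure 36.2 can be imitated numerically. In the sketch below the first unit is a pure periodic oscillator and each downstream unit is a damped Duffing oscillator forced by the position of the unit ahead of it; the Duffing form and all parameter values are assumptions chosen to sit in a well-known chaotic forcing regime, not details from the chapter. The first column of output repeats periodically while the downstream columns wander irregularly:

```python
import math

dt = 0.005  # Euler integration step (a rough but adequate sketch)

def duffing_step(x, v, drive):
    # damped double-well Duffing oscillator forced by the upstream unit
    a = -0.3 * v + x - x ** 3 + 0.5 * drive
    return x + v * dt, v + a * dt

x2, v2 = 0.1, 0.0   # Pendulum 2 state
x3, v3 = 0.1, 0.0   # Pendulum 3 state
for n in range(200_001):
    t = n * dt
    x1 = math.cos(1.2 * t)              # Pendulum 1: strictly periodic
    if n % 25_000 == 0:
        print(f"t = {t:6.0f}  x1 = {x1:+.3f}  x2 = {x2:+.3f}  x3 = {x3:+.3f}")
    x2, v2 = duffing_step(x2, v2, x1)   # Pendulum 2: driven by 1
    x3, v3 = duffing_step(x3, v3, x2)   # Pendulum 3: driven by 2
```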

An important connection between emergence and other NDS principles was formed when Holland’s (1995) computer simulations illustrated how the local interaction among agents gave rise to self-organized structure. In fact, “complexity theory” got its name from the central idea that the number of interactions among many agents in a large system was too numerous to calculate individually and that simulation programs were needed that would consider the possibilities, calculate interactions among agents, and produce a final picture of the interaction results. In stronger cases of emergent order, the supervenience principle engages, whereby the superordinate structure maintains a downward causality on the behaviors and interactions among the lower-level agents (Sawyer, 2005). The top-down effect is essentially a driver-slave relationship.

Whereas self-organization is a process, emergence is a result of the process. McKelvey and Lichtenstein (2007) outlined four different types of emergence. The simplest was the avalanche with the power-law distribution. The second was the phase shift. The internal organization that would be required for a species to differentiate into more specific types is more complex than breaking into little pieces; phase shifts also occur when the organisms hop from one niche to another suddenly. More generally, a phase shift is a reorganization of the internal structure of the system. Still it is not necessary for a hierarchical internal structure to occur.

The third level of emergence is the formation of a hierarchical relationship among the parts of the system. Driver-slave relationships are examples. A different type of example occurs when a person or group collects information about its environment and forms a mental model of the situation. The mental model does not exhibit the top-down supervenient force until people start acting on the model and the model persists after some of the original group members have been replaced. The presence of an active top-down component reflects the fourth level of complexity for emergent events. Arguably, the dynamics of bottom-up and top-down influences are matters of degree and relative balance.

Goldstein (2011) indicated that emergent phenomena could be still more complicated. First, the automatic internal organization that characterizes Kauffman’s (1993) “order for free” might have been overemphasized by some writers. Boundary conditions shape emergent behavior too. Second, in a complex hierarchical system, there can be numerous subnetworks of agents that are drawn from different hierarchical levels. The subnetworks can be connected horizontally and interact in all sorts of combinations.

Degrees of Freedom

Self-organizing dynamics are typically observed as information flows among the subsystems. The concept of degrees of freedom was first introduced in conjunction with psychomotor movements (Bernstein, 1967; Marken, 1991; Rosenbaum, Slotta, Vaughn, & Plamondon, 1991; Turvey, 1990), and it also explains how fixed and variable upper limits to cognitive channel capacity are both viable (Guastello, Boeh, Shumaker, & Schimmels, 2012; Guastello, Boeh et al., in press; Guastello, Gorin et al., 2012). In any particular complex movement, each limb of the body is capable of moving in a limited number of ways, and the movements made by one limb restrict or facilitate movement by other limbs. For this reason, we do not walk by stepping both feet forward simultaneously, for instance. More generically, degrees of freedom are the number of component parts, such as muscles or neural networks, that could function differently to produce the final performance result.

The notion of internally connected nodes of movement is substantially simpler and more efficient than assuming that all elements of movement are controlled by a central executive function. An individual would explore several possible combinations of movement elements when learning a movement for the first time. Once learning sets in, however, the movement combinations gravitate toward conserving degrees of freedom, which is in essence a path of least resistance (Hong, 2010). The gravitation is actually a self-organization dynamic that is associated with phase shifts. Some variability in the movement still persists, however, which facilitates new adaptive responses. Sufficiently large changes in goals or demands produce a phase shift in the motor movements, which are observed as discontinuous changes.

Cognitive behaviors are thought to operate on more or less the same principle with regard to the early and later stages of schematic development, the role of executive functions, and the principle of conserving degrees of freedom (Guastello, Gorin et al., 2012; Hollis, Kloos, & Van Orden, 2009). Furthermore, cognition is tied to action in many cases, so that the entire array of relevant degrees of freedom now pertains to the perception-action sequences (Renaud, Chartier, & Albert, 2009).

In the case of work overload resulting from a fixed upper limit of human cognitive channel capacity (cf. Kantowitz, 1985), the discontinuity in performance would be the simple result of hitting a barrier. As such there would be little room for the kind of elasticity associated with variable upper limits. If variable upper limits were operating, however, the principle of conserving degrees of freedom would have a few implications; in the case of adding tasks or other demands to existing tasks, a change in one cognitive-behavioral motion would impact the other motions in the system or sequence. If it were possible to conserve degrees of freedom further, a phase shift in the cognition-action sequence would result. For example, an increased demand for visual search could result in a shift from an exhaustive search to an optimized first-terminating strategy (Townsend & Wenger, 2004).

Catastrophe Models and Phase Shifts

Phase shifts, which have been mentioned already in other contexts, can be represented as mathematical models that involve two control parameters. Catastrophe theory itself was introduced by Thom (1975) to describe discontinuous changes of events, where the system of events can vary in complexity. The cusp model is one of the simpler models and one that has seen voluminous uses in many disciplines over the years. For some broader background on the applications and analysis, see Guastello and Gregson (2011). Applications in this chapter include cognitive workload, fatigue, resilience, and diffusion of innovation.

The response surface for the cusp model is three-dimensional and describes two stable states of behavior (Figure 36.3). Change between the two states is a function of two control parameters, asymmetry (a) and bifurcation (b). At low values of b, change is smooth, that is, y is a continuous and monotonic function of a. At high values of b, the relationship between a and y is potentially discontinuous, depending on the values of a. At the lower end of the a scale, the a-y relationship depends on the level of b such that the a-y relationship gets increasingly positive as b decreases. This is the traditional interaction effect. Something similar occurs at the upper end of the a scale. In the middle of the a scale, however, y is a continuous function of a only when b is low. When b is high, y changes suddenly (i.e., catastrophically) as a function of a. Said another way, at low values of a when b is high, changes occur around the lower mode and are relatively small in size. At middle values of a, changes occur between modes and are relatively large, assuming b is also large. At high values of a, changes occur around the upper mode and are again small.

The cusp response surface is the set of points where

df(y)/dy = dy/dt = y^3 – by – a. (1)

Figure 36.3 The cusp catastrophe model with labeling for cognitive workload and performance.

Figure 36.4 The cusp catastrophe potential function associated with phase shifts.

Change in behavior is denoted by the path of a control point over time. The point begins on the upper sheet denoting behavior of one type and is observed in that behavioral modality for a period of time. During that time, its coordinates on a and b are changing when suddenly it reaches a fold line and drops to the lower value of the behavior, which is qualitatively different and where it remains. Reversing direction, the point is observed in the lower mode until coordinates change to a critical pair of values; at that moment the point jumps back to the upper mode. There are two thresholds for behavior change, one ascending and one descending. The phenomenon of hysteresis simultaneously refers to relatively frequent changes between the two behavioral states and the two different thresholds for change.

The shaded area of the surface is the region of inaccessibility in which very few points fall. Whereas the stable states are attractors, the inaccessible region is a repeller: Points are deflected in any direction if they veer too close to the repeller region. Statistically, one would observe an anti-mode between the two stable states that would correspond to the shaded region of the surface.

Figure 36.4 illustrates the potential energy function for phase shifts as they have been defined thus far. The potential function shown in Figure 36.4 depicts the case where the bifurcation effect is strong, and the two low-entropy wells are separated. A control point that is situated in the middle is trying to move from one well (attractor) to another. Its entropy level, afforded by the high-bifurcation conditions, is sufficiently high that it needs only a tiny push from the asymmetry variable to land in one of the two attractor states.
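
The two thresholds and the jump between modes can be reproduced directly from Equation (1). In this sketch (plain Python; the gradient-descent settling scheme and the particular parameter values are illustrative assumptions), the bifurcation parameter is held at a high value, b = 3, while the asymmetry parameter is swept upward and then back down. The control point jumps to the upper mode near a = +2 on the ascending sweep but does not fall back to the lower mode until a passes −2 on the descending sweep, which is the hysteresis described above:

```python
def settle(y, a, b, steps=4000, dt=0.005):
    # relax to a nearby stable state; the stable sheets of the cusp
    # satisfy y**3 - b*y - a = 0
    for _ in range(steps):
        y -= dt * (y ** 3 - b * y - a)
    return y

b = 3.0                      # high bifurcation value: two stable modes
y = settle(-2.0, -3.0, b)    # start the control point on the lower sheet
for tenth in range(-30, 31, 5):          # sweep a upward: jump near +2
    a = tenth / 10.0
    y = settle(y, a, b)
    print(f"ascending  a = {a:+.1f}  y = {y:+.2f}")
for tenth in range(30, -31, -5):         # sweep a downward: jump near -2
    a = tenth / 10.0
    y = settle(y, a, b)
    print(f"descending a = {a:+.1f}  y = {y:+.2f}")
```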

Complex Adaptive Systems

A complex adaptive system (CAS) is a living system that maintains a far-from-equilibrium status even though its behavior is stable overall (Waldrop, 1992). Although the system might have self-organized its resources to engage in a strategy for survival, it is ready to adapt to environmental or internal stimuli at a moment’s notice. When it adapts, it reorganizes its communication, feedback, or work flow patterns to respond to the new situation and engage in a pertinent response. We would observe relatively greater levels of entropy in the behavior of a healthy CAS, and less entropy, or more rigidity and stereotypic behavior, in a less functional system (Goldberger et al., 2002).

The entropy in the system is observable in subtle ways. When we repeat a motion, such as a hand gesture, the gesture does not turn out exactly the same way each time. The repetitions are similar enough for all intents and purposes, but the residual variation is a tell-tale sign that the system is capable of modifying the motion to meet variations in circumstances (Hollis et al., 2009).

Cognitive and motor processes are both embedded and embodied. Closed systems exhibit embodied processes: Once the behavior sequence is set in motion, it continues according to its intrinsic dynamics. An embedded system, in contrast, is open to environmental influences, hence the adaptation. The environmental influences might be assimilated with only small variations in the system’s behavior, or they might require accommodation whereby the system reorganizes itself in some way to execute the task. In human cognition, the executive function is probably operating to a much greater extent in the cases of accommodation, whereas automaticity prevails in the cases of assimilation.

The perceptual, cognitive, and psychomotor aspects of an automatic process often do not begin as an automatic process. The perceptual, cognitive, and psychomotor parts of the CAS interact with each other, shape each other, exchange information, and are not replaceable or removable without fundamentally altering the dynamics of the system as a whole. With repetition and practice, the individual parts of the behavior organize into one flowing unit, which we recognize as part of the learning process. Trial and error at both the neurological and behavioral levels gives way to a self-organized combination of neural networks and events. Thus “stable” does not mean “without variability.” An element of variability is necessary if it will ever be possible for the person, group, or organization to attain greater levels of performance (Abbott, Button, Pepping, & Collins, 2005; Mayer-Kress, Newell, & Liu, 2009).

Attempts to correct flaws or otherwise change a part of the CAS often do not succeed because the parts adapt in such a way as to protect the system from intrusions from outside the system. A CAS can thus be considered resilient to the extent that the self-repairing feature is operating (Hollnagel, Woods, & Leveson, 2006; Sheridan, 2008).

A comparison between the propositions for a theory of adaptation in organizations that were advanced by Burke et al. (2006) and those of the theory of the complex adaptive system for organizations advanced nearly a decade earlier by Dooley (1997) appeared elsewhere recently (Guastello, 2009c). The aspects of the CAS’s adaptive capability that are more germane to cognitive processes are considered next.

Group members scan the environment and develop schemata (Dooley, 1997). A schema is essentially the same as a mental model, although it places some additional emphasis on the actions that could be taken in response to the requirements of the mental model (Newell, 1991). Schemata define rules of interaction with other agents that exist within the work group or outside the work group boundaries (Dooley, 1997). A group’s schemata are often built from existing building blocks, and new ones are inevitably brought into the group when members arrive. They often take the form of particular individual differences in job knowledge and experience.

When schemata change, requisite variety, robustness, and reliability are ideally enhanced (Dooley, 1997). Reliability denotes error-free action in the usual sense. Robustness denotes the ability of the system to withstand unpredictable shock from the environment. Requisite variety refers to Ashby’s (1956) Law: For the effective control of a system, the complexity of the controller must be at least equal to the complexity of the system that is being controlled. Complexity in this context refers to the number of system states, which are typically conceptualized as discrete outcomes.

The dynamics of agent interaction and problem solving give rise to the development of schemata. Once adopted, they are expected to have a supervenience effect on the further actions of the agents. A schema that is deployed or changed against a context that contains little history or precedent, as in a group’s early stages of life, might have a different impact if it were deployed in a context where a supervenience effect was occurring.

Cognitive Workload, Resilience, and Fatigue

One of the chronic problems with research in cognitive workload and fatigue is that it is generally difficult to separate the impact of workload, fatigue, other forms of stress, and practice within the conventional experimental paradigms (Hancock & Desmond, 2001; Ackerman, 2011). A viable solution, however, is afforded by the use of two catastrophe models, one for workload effects and one for fatigue (Guastello, 2003, 2006; Guastello, Boeh, Schimmels, et al., 2012; Guastello, Boeh, Shumaker, et al., 2011). They have similar structures but derive from different underlying dynamics. Although the primary research has been conducted at the individual level of analysis, the possibility of collective implications has been noted.

Cognitive Workload

The cognitive workload model is analogous to Euler buckling of an elastic beam as larger loads are placed on the beam. A more elastic beam will waffle when heavily loaded, but an inelastic beam will snap under the same conditions. The application that produced the model (Guastello, 1985) was based on a study of physical labor in a wheelbarrow obstacle course. Employees in a steel manufacturing facility completed the obstacle course three times with increasing loads in their wheelbarrows. The addition of weights had the effect of separating people who displayed no slowing in their response times as loads were added from people who exhibited a sharp increase in their response times under the same conditions. The amount of change in response time was governed by a group of physiological variables, which, when taken together, indicated a condition comparable to elasticity in materials science. In the buckling model, the amount of weight on the proverbial beam was the asymmetry parameter, and the elasticity measurement was the bifurcation parameter.

For load stress, the asymmetry parameter is once again the load amount, and the bifurcation parameter is the elasticity variable, which takes the form of “coping strategies” psychologically (Guastello, Boeh, Schimmels, et al., 2012; Guastello, Boeh, Shumaker, et al., 2012; Figure 36.3). The role of coping strategies or elasticity as the bifurcation factor, which could vary across individuals and perhaps situations, explains why both variable upper limits and fixed upper limits have been reported in the experimental literature. In one recent experiment, trait anxiety acted as the bifurcation variable in a memory task where the load was augmented by competition and incentive conditions; highly anxious participants were less flexible (Guastello, Boeh, Schimmels, et al., 2012).

Thompson (2010) applied essentially the same cusp model to the possible results of high-impact decisions that are made under conditions of stress. He observed that otherwise capable leaders sometimes make disastrous decisions. Any of the load or environmental stressors that are known in stress research could be part of the asymmetry parameter. He recommended emotional intelligence as the primary variable that captures the elasticity that is needed to respond to the load demands.

Resilience

The notion of coping strategies in the face of severe stress has also been interpreted as resilience in socio-technical systems (Hollnagel, Woods, & Leveson, 2006; Seligman & Matthews, 2011; Sheridan, 2008), and the connection between catastrophes and the idea of resilience is now crossing over into clinical psychology and medicine (Guastello, in press; Pincus, 2010).

Resilience in socio-technical systems poses questions such as: How well can a system rebound from a threat or assault? Can it detect critical situations before they fully arrive? Importantly, several chapters in Hollnagel et al. (2006) described “emergence” scenarios where subcritical events combined to produce precisely the situations that required adaptation. Although the PMS was not explicitly characterized as a CAS, that was apparently the intended meaning. Many of the initial examples of resilience in organizations were post-hoc interpretations of events, as acknowledged by their authors, but it now appears that NDS theory can make them analytical.

Fatigue

Fatigue is defined as the loss of work capacity over time for both cognitive and physical labor (Ackerman, 2011; Guastello, 1995; Guastello & McGee, 1987). Depletion of work capacity is typically assessed by a work curve that plots performance over time; there is a sharp drop in performance when fatigue sets in that is also coupled with a higher level of performance variability over time. Not everyone experiences a decline as a result of the same expenditures, however. Some show an increase in physical strength akin to “just getting warmed up,” while others show stably high or lower performance levels for the duration of the work period. Thus Ioteyko (1920) introduced a cubic polynomial function to account for the classic and more common work curve as well as all the other variations.

Figure 36.5 The cusp catastrophe model for cognitive fatigue.

The cubic function was essentially the structure of the cusp catastrophe model for fatigue (Guastello & McGee, 1987; Guastello, Boeh, Schimmels, et al., in press; Guastello, Boeh, Shumaker, et al., 2012), shown in Figure 36.5. The fatigue model has the same cusp structure as the buckling model for workload, but the variables that contribute to the control parameters are different. Work capacity is the dependent measure that displays two stable states. Capacity and performance at a single point in time are not always easy to distinguish, but in principle it is the capacity that is subject to dramatic or small change over time. The total quantity of work done would be the main contributor to the bifurcation parameter; if the individual did not accomplish much in a fixed amount of time, there would be little fatigue in the sense of work capacity.

The asymmetry parameter would be a compensatory strength measure. For instance, in the original example (Guastello & McGee, 1987), laborers displayed differences in arm strength as a result of about two hours’ worth of standard mill labor tasks, which were primarily demanding on arm strength. They were measured on isometric arm strength and leg strength using a dynamometer before and after the work session. Leg strength showed little change after the work session, which was not surprising, but it did act as a compensation factor for arm strength; those with greater leg strength experienced less fatigue in their arms, all other things (such as total work accomplished) being equal.

In a study of cognitive fatigue, Guastello, Boeh, Shumaker, et al. (2012) found that arithmetic ability showed a compensatory effect on fatigue in an episodic memory task. A later experiment showed that episodic memory showed a compensatory effect on a pictorial memory task (Guastello, Boeh, Schimmels, et al., 2012).

The principle of degrees of freedom is thought to operate in fatigue dynamics as well. Not only does performance drop precipitously in the classic work curve, but it becomes more variable as well. Hong (2010) suggested that the increase in performance variability during the low-production period arises from an internal search for a possible reconfiguration of degrees of freedom. There are two plausible scenarios: According to the redistribution principle, the individual is searching for a lower-entropy means of accomplishing the same task or goal. If a loss of total entropy was occurring, however, the individual would be not only trying to regroup internal resources but also reducing responsiveness to the total complexity of task situations and demands, gravitating toward what amounts to the easier task options or situations, or to simpler tasks.

Collective Intelligence and Collective Action

The concept of collective intelligence originated with studies of social insects, particularly ants (Sulis, 1997, 2009). The concept crossed over to human cognitive phenomena when it became apparent that decentralized networks of people produce ideas, plans, and coordinated actions without being present in the same physical location simultaneously. The interaction among people is greatly facilitated by computer-based systems such as standard email, listservers, and web-based technologies (Guastello & Philippe, 1997; also see Bockelman Morrow & Fiore, this handbook). The growth of virtual communities gravitates to an attractor that represents a stable population. The study of collective communication patterns in technology-driven systems, which often facilitate easy tracking of specific statements and their temporal sequencing, has led to a rethinking of human interactions in real time as well (Gureckis & Goldstone, 2006). The same phenomena are sometimes known as distributed cognition.

One should bear in mind that the boundaries usually associated with “organization” are semipermeable, meaning that a great deal of information flows across organizational boundaries and might not be centralized within an organization at all. This phenomenon, together with analogies to insect colonies, was the underlying theme in Kelly (1994). With decentralized or network-based communication and decision patterns, the notion of controlling a human system necessarily changes. Consistent with the idea behind ant colonies, the top-down control that is usually inherent in organizational structures simply does not operate well any longer: Events self-organize from the bottom up. The next section of this chapter considers some selected themes: basic principles of collective intelligence, creative problem solving, team coordination, and the learning organization.

Principles of Collective Intelligence

An ant colony displays some highly organized activities such as foraging, nest building and maintenance, and travel. At the same time, each ant does not have a big plan in its little head. Rather, each ant is equipped with elementary schemata that synchronize with those of other ants when larger-scale events occur. Sulis (2009) identified several principles of ant collective intelligence from which it is possible to extrapolate analogous functions in human systems. The first two are interactive determinism and self-organization, which were described in general systems form already: The interaction among individuals gives rise to the final collective result. The final result stabilizes in interesting ways and without external intervention; in other words, the queen ant or bee is not barking (buzzing) orders to the others. The stabilization of a collective action pattern is a phase transition.

Recalling the counterpoint made earlier about embedded and embodied cognition, the embodied portion operates automatically, assimilating nuances in the environment. The embedded portion is aware of the nuances in the environment and permits adaptations or accommodations to be made. Environmental nuances, nonetheless, have an impact on the final collective result; the phenomenon is known as stochastic determinism.

Probability structures that underlie collective outcomes remain stable over time, however, and are regarded as nondispersive. They remain as such until a substantial adaptation is needed, such as when some regions of the environment become favored or disfavored. This broken ergodicity occurs at the collective level. Similar disruptions occur at the individual level as an experience of one agent impacts the behavior of another, thereby amplifying the feedback to a third. With enough uncontrolled fluctuation, the foraging path can shift suddenly. Hence broken symmetry is possible as well.

Some further principles are likely to take different forms in human contexts than in insect contexts. One is salience: The environmental cues that could induce broken symmetry are likely to work to the extent that they are sufficiently visible compared to other cues. In human contexts salience is complicated by meaning, which can be operationally defined as the connection to other ideas, personal intentions, and system goals (Kohonen, 1989). Ants do not appear to express much variety in personal intention, but humans do so regularly. Humans and human systems often have several competing goals.

Also important is that computational experiments assume that individual agents are destined to interact. Sometimes that is the case, of course, but people also reserve the choice to interact or not. Whatever rules they invoke to decide whether to interact are likely to play out in emergent patterns of social organization eventually (Sulis, 2008, 2009; Trofimova, 2002).

Multiple Person-Machine Systems

DeGreene (1991) made three points that are relevant to the present exposition. First, the early person-machine systems were conceptualized at the interface level, where bilateral interactions between one person and one machine occur. Information that was considered relevant was predominately atheoretical in nature and mostly geared toward defining critical numeric values for one function or another.

The newer technologies have enabled a greater number of functions to be controlled by multiple person-machine systems. As depicted in Figure 36.6, information flows between the person and machine pretty much as usual, but there are also flows between machines and between humans. The machines are linked into networks. People can communicate either in real space-time or through networks via machines. Information from one PMS affects another, and delays in communication can result in various forms of uncoordinated action. The information that transfers is often far more complex in nature.

Figure 36.6 The extended person-machine system.

Second, he observed that although the concept of the system had been introduced at an earlier time (Meister, 1977), the full implications of the system concept had not been fully explored. The concepts of self-organization, the complex adaptive system, and others considered in this chapter have been possible and practicable only in the last 15 years or so. One of the more exotic applications of both the extended person-machine system and NDS is inherent in the problem of how to control a fleet of robots (Trianni, 2008). Each is an autonomous agent, which is itself complex with respect to cognitive and psychomotor components. (Not all concepts of robotic devices involve humanoid appearance or functionality.) Self-organizing properties require information loops between the units, and the current challenge is to develop sensors and response structures that keep the cluster of robots functioning even if one of them should become impaired. A human controller is still involved, especially where the goals of the system’s actions need to be defined for a given purpose, but how much control could be, or should be, allocated to the human “executive”? How much of the behavior of the system is going to be self-organized? How much broken symmetry can be tolerated? Several science fiction movies have centered on this theme, with nothing good happening to the humans.

Third, chaotic sequences of events do occur. Sources of chaos are potentially inherent in two places. One is in the information flow that is conveyed in the machine’s displays. The other is in the output of the PMS.

Stimuli arrive over time in many cases. Humans are highly variable in their ability to interpret patterns arising from chaotic or simpler nonlinear sources (Guastello, 2002; Heath, 2002; Ward & West, 1998). Historical and predictive displays have been in use for decades, with growing levels of sophistication. Some thought has been given to the use of chaotic controllers, which could take two basic forms (Guastello, 2006; Jagacinski & Flach, 2003). One form might be designed to regularize the flow of information to the operator. The other would recognize and interpret patterns and identify options to the user. There do not appear to be known cases of chaotic controllers of either type in a PMS in operation at this time, however.

Self-organization occurs when any multiple PMS increases its rate of interaction. The chaos in its function over time would take the form of unstable patterns that form and reform, and would characterize a learning process. Self-organization would set in as a stable coordinated pattern coalesces (Guastello, Bock, Caldwell, & Bond, 2005). Synchronicity can be produced even in nonliving systems with only minimum requirements—two coupled oscillators, a feedback channel between them, and a control parameter that speeds up the oscillations (Strogatz, 2003). The oscillators synchronize when they speed up fast enough. The principle also has been demonstrated with electrical circuits and mechanical clocks.
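
Strogatz's minimum requirements can be illustrated with two mutually coupled phase oscillators. The sketch below uses a Kuramoto-style phase model, which is assumed here as a generic stand-in rather than the specific system Strogatz described: when the coupling constant K is below the threshold set by the frequency mismatch, each unit keeps its own rhythm, and once K exceeds the threshold the two lock onto a shared frequency:

```python
import math

def effective_frequencies(K, w1=1.0, w2=1.3, dt=0.01, steps=100_000):
    # two phase oscillators, each nudged by the sine of the phase gap
    th1, th2 = 0.0, 2.0
    for _ in range(steps):
        d1 = w1 + K * math.sin(th2 - th1)
        d2 = w2 + K * math.sin(th1 - th2)
        th1 += d1 * dt
        th2 += d2 * dt
    T = steps * dt
    return th1 / T, (th2 - 2.0) / T   # average rotation rates

# for this pair the locking threshold is K = |w2 - w1| / 2 = 0.15
for K in (0.05, 0.10, 0.20, 0.40):
    f1, f2 = effective_frequencies(K)
    print(f"K = {K:.2f}: frequencies {f1:.3f} and {f2:.3f}")
```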

Complex systems that require decision making commonly involve intentional behavior and feedback loops between the person, machine, and physical environment that is ultimately the objective of control. Depending on the sensitivity of the real system to small control motions, we could end up with chaotic control motions. Simply driving a stable system into oscillation might be enough of a cause for alarm, and possibly damage the equipment. For reasons like these a form of resistance is often built into control systems to dampen the velocity, acceleration, or jerky movements that might have been induced by the operator.

Creative Problem Solving

Creativity is a complex phenomenon involving divergent thinking skills, some personality traits that are commonly associated with creative individuals across many professions, an environment rich in substantive and interpersonal resources, and cognitive style. Cognitive style is a combination of personality and cognition; it refers to how people might use their talents rather than the quantity of such talents. According to an early version of chance-configuration theory (Simonton, 1988), creative products are the result of a random idea generation process. Greater quantities of ideas are generated from enriched personal and professional environments. Idea elements recombine into configurations during the idea generation process. When the creative thinker latches on to a new configuration and explores it as a possible solution to a problem, self-organization of the idea elements takes place, producing the experience of insight.

In the context of NDS, however, the generation and recombination of idea elements is chaotic rather than random (Guastello, 2002). The self-organization of idea elements is largely a response to a chaotic system state. The idea elements, meanwhile, are generated by deterministic human systems, whether individually or in groups. The individuals filter out some ideas and attract others depending on their goals for problem solving. They also organize idea elements according to their own unique mental organization and experience; some of these mental organizations, or semantic lattices (Hardy, 1998), are shared with other people in the society or with other problem solvers in the group, whereas other mental organizations are more unique. The process of idea generation activates and retraces the paths that the individuals have mentally created already among idea elements, prior to any one particular problem-solving event (Guastello, 1995, 1998).

The dynamics of creative problem solving in groups that were working together in a real-time experiment were explained by a six-dimensional mushroom catastrophe model; the model featured two dependent measures (order parameters) that exhibited discontinuous change, and four control variables that governed the actual change (Guastello, 1995). In the experimental task, the participants were organized into groups of eight people who were told that they were influential personages from a hypothetical island nation. Their task was to organize a plan for developing the island’s commercial and social service infrastructure and to allocate an impending budget. At different times during their discussion, the groups received “news bulletins” of events occurring on the island that, in principle, could wreak havoc with their partially formed plans and compel an adaptive response. Participants completed a normal range personality test prior to the discussion. After the discussion they completed a questionnaire in which each person in the group rated each other person in the group on a number of variables related to communication patterns during the discussion.

The order parameters were two simultaneous and interacting clusters of social interaction patterns, which were isolated through factor analysis of the post-game questionnaire. General Participation included information giving, asking questions, and statements of agreement with other people’s ideas; it was a variable that exhibited two stable states. Especially Creative Participation included statements that initiated courses of action for the group, elaboration of ideas, and rectifying intellectual conflicts; it displayed one stable state with instability at the high contribution end of the scale. Two of the four system control parameters, both of which brought the system closer to critical points where discontinuous changes in activity levels could occur, were occupied by personality traits. One cluster of traits distinguished high-production participants from low-production participants on the factor for general contributions. Assertiveness distinguished participants who most often gave especially creative responses from others.

The other two control parameters were bifurcation variables. One bifurcation variable was the overall group activity level, which could be considered a social dynamic by itself, that affected the level of especially creative behaviors. The other bifurcation effect was captured by the effect of the “news bulletins.” News bulletins promoted different levels of general participation, but not especially creative participation; in other words, they generated more talk, but not necessarily more action.

Other studies have also explored whether computer-facilitated communication can enhance the group’s overall level of production compared to the production of a collection of non-interacting individuals, so long as the group is large enough to produce a critical mass of ideas. Computer-based media can facilitate chaotic levels of idea production. In this situation, chaotic refers to bursts of high and low idea production over time on the part of either individuals or groups. Larger changes in production by individuals are associated with greater quantities of ideas that are produced by other group members in between two successive inputs from a particular person. The packets of production by an individual were variably long or short, but the overall trend was increasing.

At the group level of analysis, greater productivity is associated with a relatively complex problem task, where the task can be broken down into subtopics. In an illustrative analysis, group members, who were working on genuine business problems, could work on any subtopic in any order they chose, define them as they chose, go back and forth among the subtopics, and so on (Guastello, 1998). The number of active topics increased and decreased over time in a periodic fashion. The level of output by the group was chaotic overall, but it also showed periodic rises and drops in activation level in accordance with the change in the number of active topics. Thus the result, in the thinking of synergetics (Haken, 1984), is a coupled dynamic consisting of a periodic driver and a chaotic slave. Separate nonlinear equations that characterized the driver and the slave were derived statistically. The driver was the number of active threads or subthemes in the discussion at a particular interval of time. The slave was the overall level of output, in which the driver was one of two control variables. The second control variable was the particular problem-solving conversation in play; there were three conversations studied simultaneously.

One conclusion from this line of research was that a critical mass of ideas was necessary to generate sufficient entropy, which in turn facilitated the production of new ideas. The second conclusion was that the problem should be unpacked into subthemes. If the problem itself involves how to operate on an already-complex system, it would stand to reason that the puzzle pieces of the discussion would need to self-organize as well.

Team Coordination

Coordination occurs when group members make the same or compatible responses at the right time for optimal production. According to game theory, and contrary to conventional thinking, there is more than one type of coordination. In game-theoretical scenarios, individuals make decisions based on the utilities associated with their options. The Prisoner’s Dilemma game involves choices between cooperation and competition (Axelrod, 1984). Games in experiments can be played iteratively over time and can include large numbers of people interacting according to the same rules. Eventually, long-run patterns of cooperation and competition emerge, along with meta-rules by which players respond to defectors. The meta-rules serve to restrain symmetry breaking to varying extents (Maynard-Smith, 1982). The important point for present purposes is that cooperation emerges as a dominant strategy to the extent that players make cooperative responses simultaneously.
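
The emergence of cooperation from repeated play can be sketched in a few lines, assuming the standard payoff values from Axelrod's tournaments (T = 5, R = 3, P = 1, S = 0); the two strategies and the twenty-round length are illustrative choices. Tit-for-tat sustains mutual cooperation against itself but, after being exploited once, punishes an unconditional defector for the rest of the game:

```python
# payoffs indexed by (my move, partner's move); C = cooperate, D = defect
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(partner_history):
    # cooperate on the first move, then mirror the partner's last move
    return "C" if not partner_history else partner_history[-1]

def always_defect(partner_history):
    return "D"

def play(strategy1, strategy2, rounds=20):
    moves1, moves2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = strategy1(moves2), strategy2(moves1)
        moves1.append(m1)
        moves2.append(m2)
        p1, p2 = PAYOFF[(m1, m2)]
        score1, score2 = score1 + p1, score2 + p2
    return score1, score2

print(play(tit_for_tat, tit_for_tat))    # (60, 60): stable mutual cooperation
print(play(tit_for_tat, always_defect))  # (19, 24): exploited once, then mutual defection
```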

Other important games are strictly cooperative and do not involve competitive responses between the players as options. Two strictly cooperative games, Intersection and Stag Hunt, have received some attention in the human performance research. Intersection is considered next. Stag Hunt is described in conjunction with emergency response in a later section of this chapter.

The Intersection game requires group members to take the correct actions in the correct sequence, and to figure out the correct sequence, similar to what occurs in a four-way stop intersection. If the drivers correctly perceive the turn-taking system adopted by the preceding drivers and follow the sequence, then all cars pass through the intersection in a minimum amount of time with the lowest odds of a collision. In a real-life intersection, any of several possible rule systems could be adopted by the drivers, and each driver approaching the intersection needs to observe the strategy that is actually in effect, and then make the correct move. If a car tries to go through the intersection out of turn, then an accident could occur, or, in the more common occurrences, other players would need to revert to ad lib turn-taking to untangle the confusion at the intersection.

The process of group coordination involves the development of nonverbal communication links among the participants. These links evolve with repeated practice with each other. The evolution of the links is essentially a self-organization process. Furthermore, the basic process of coordination is non-hierarchical, meaning that a leader, who might usually contribute task-structuring activities of some sort, is not required. This state of affairs is not unlike flocks of birds, herds of beasts, or schools of fish, which operate without leaders.

Experimental Intersection games have involved card games instead of crashing cars. Participants are required to figure out and implement a coordination rule in order for the group to acquire performance points. Note the contrast between the intersection approach and conventional thinking about shared mental models. Although shared mental models are still implicit in a successful example of intersection coordination, the mental models are acquired on the fly by the group rather than by having the model handed to them by a discussion leader.

The results of Intersection game experiments to date show that if the experimental task is not excessively difficult, the group will display a coordination learning curve (Guastello & Guastello, 1998). The coordination acquired during one task session will transfer to the learning and performance curve of a second task. If the task is too difficult, self-organization will not be complete, and the time series of coordination data will be chaotic. The acquisition of coordination is a form of synchronization wherein the group members entrain their behaviors to the others’ in the group. Psychologically, it is a form of implicit learning; participants are learning a process or procedure for interacting with each other while they are trying to figure out the solution to an explicit problem.

A coordinated group can withstand changes in personnel up to a point before coordination breaks down (Guastello et al., 2005). Verbalization enhances performance to some extent, but not necessarily the level to which leaders emerge from the social unit (Guastello & Bond, 2007).

Group Size

Group size acts as a control parameter in ant collective intelligence (Sulis, 2009). A critical mass of ants is required to produce a sufficient momentum of interactions to get a nest-building project going. The same principle appears to resonate in research with human creative problem-solving groups. Groups outperform the best-qualified individuals if the groups are large enough to produce a critical mass of ideas (Dennis & Valacich, 1993). Groups also have the potential for outperforming individuals because they can review the available information more thoroughly, retrieve errors more reliably, and rely on the skills and knowledge bases of more people to formulate a solution to a problem (Laughlin, 1996); here the critical mass of people probably varies with the complexity of the information processing task.

Campion, Papper, and Medsker (1996) observed that groups need to be large enough to contain enough resources to get their work accomplished, but not so large as to induce coordination difficulties. Social loafing is more likely in larger groups, however. Loafers or free riders would find utility in joining the group with the expectation that someone in the group would get the job done, and all members would benefit. Hierarchical group structures can introduce more opportunities for inefficiency (Guastello, 2002).

By using a research hypothesis concerning group size, one can assess the potential trade-off between critical mass and deficits in coordination (Guastello, 2010a). If there is a group emergence effect at all, there would be an optimal group size associated with the teams’ performance. If larger groups perform better, a group dynamic is operating that would be consistent with the critical mass principle. If mid-size groups perform better, critical mass would be associated with the mid-size groups and coordination loss with the larger groups. If smaller groups perform better, the group dynamics would reflect widespread loafing. If there were no effect for group size, then the teams’ task was carried out by the most competent individuals; it would then be debatable whether the others were loafing or just not competent at the task.
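In practice this trade-off can be read off a regression of performance on group size with linear and quadratic terms. The sketch below uses made-up scores purely to show how the coefficient pattern maps onto the interpretations just listed.

```python
# Hypothetical team scores by group size; a quadratic fit separates the cases.
import numpy as np

sizes = np.array([4, 4, 6, 6, 9, 9, 12, 12], dtype=float)
perf  = np.array([10, 12, 18, 20, 24, 23, 19, 17], dtype=float)  # invented data

X = np.column_stack([np.ones_like(sizes), sizes, sizes**2])
(b0, b1, b2), *_ = np.linalg.lstsq(X, perf, rcond=None)
# b1 > 0, b2 ~ 0: bigger is better (critical mass).
# b1 > 0, b2 < 0: mid-size optimum (critical mass vs. coordination loss).
# b1 < 0:         smaller is better (widespread loafing).
print(f"linear {b1:.2f}, quadratic {b2:.2f}, peak near {-b1 / (2 * b2):.1f} members")
```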

Emergency Response

Emergency situations, by definition, involve serious time urgency. Natural disasters, terrorist attacks, and some military operations are characterized by sudden onset, fast-changing situations, and unplanned physical focus points and times of day. Highly coordinated and adaptive responses by first responder teams are absolutely necessary to mobilize resources to maximize the number of lives saved and minimize damage to property or to the emergency response (ER) resources themselves (Comfort, 1999; Koehler, 1995, 1996). Three sets of principles that bear directly on the ER issues, in addition to team coordination, are considered next: time ecologies, situational awareness, and dynamic decisions.

Time Ecologies

ER systems, like other types of public policies, operate on multiple time horizons, or time ecologies (Koehler, 1999). At the slowest time horizon, something akin to senior management is identifying and interpreting risks of an outbreak of a natural or other type of disaster. The time horizon is occupied by foresight and action planning over a period of time that could extend for many years. Organizations or other socio-political systems that fail at this level are seriously impaired when an actual disaster strikes and the focus of attention shifts to the more immediate time horizons (Comfort, 1996; Pauchant & Mitroff, 1992; Reason, 1997), as when hurricane Katrina struck New Orleans in 2005 (Cigler, 2007; Derthick, 2007; van Heerden, 2007).

The mid-range time horizon initiates when the disaster actually strikes. According to Comfort (1996), the horizon for rescuing people from an earthquake region is about five days. Most survivors are rescued within the first two days, and the odds of survival given rescue decay sharply afterward. Meanwhile, the shock elements of unplanned physical locations, time of day, availability or impairment of medical or transportation resources, and fires and explosions are generally fast-changing situations that require instant adaptive responses.

The situation is chaotic in the literal sense, as the flows of goods, services, fuel, and communication are seriously disrupted. Sensitivity to initial conditions figures prominently in the unfolding of events (Koehler, 1995, 1996). According to Farazmand (2007), “Crises are borne out of short chains of events, often unpredicted and unexpected, but they develop with dynamic and unfolding events over months, days, hours, or even minutes. They disrupt the routine events of life and governance, disturb established systems, and cause severe anxieties; they produce dynamics that no one can predict or control” (p. 150). Many, sometimes hundreds, of formal and informal organizations and citizen groups mobilize and coordinate (self-organize) their resources and capabilities over the short time horizon (Comfort, 1996; Morris, 1906; Morris, Morris, & Jones, 2007). Furthermore, a complex socio-technical system that is suddenly placed in a state of high entropy can produce surprises of its own, with collateral demands for quick and effective adaptive responses (McDaniel & Driebe, 2005; Sellnow, Seeger, & Ulmer, 2002). Although the skill for managing chaos is thought to be in short supply in the population of management personnel (Guastello, 2002), Morris et al. (2007) cited many specific examples where the U.S. Coast Guard and U.S. Air Force coordinated their actions in the Katrina disaster very effectively.

Events occurring at the micro-level time horizon operate at the scale of hours and minutes. Koehler (1996) emphasized the critical and problematic nature of timing at this level of activity. For instance, one decision maker can ascertain that a hospital emergency room has a certain amount of carrying capacity at a particular moment, and then dispatch some casualties to that hospital. By the time the batch of casualties arrives, other decision makers may have had the same idea and dispatched more casualties to the same location, thus producing a bottleneck. Other critical events are connected to the discovery of new casualties or the prevention of concomitant disasters, such as fires in the wake of an earthquake, or the change in the path of a forest fire caused by a sudden shift in the winds. Human communication and the physical movement of people and equipment are not always fast enough to compensate. Koehler (1996) also observed that the psychological representation of time by disaster respondents and victims is strongly constricted to the needs of the present moment. The ability to see the future, even in the short horizon of a disaster response, is greatly impaired.

Situation Awareness and Sensemaking

Situation awareness and sensemaking are two collective cognitive processes that are critical in both emergency and normal times of operation. Situation awareness research in human factors engineering is centered on the design and use of computer interfaces and information systems that might be used by operatives in dispersed physical locations (Endsley, Bolte, & Jones, 2003; Riley, Endsley, Bolstad, & Cuevas, 2006; also see Endsley, this handbook). Situation awareness is usually regarded as a process that can be assisted by technology, rather than a particular outcome (Wickens, 2008). Dynamic situations are of particular interest (Durso & Sethumadhavan, 2008), although geographic position systems in use a decade ago were notably effective in mitigating the damages of an earthquake (Comfort, 1996). Effective situation awareness requires the right information at the right time. The computer equipment that is typically involved is essentially augmenting basic human perceptual processes.

Sensemaking (Weick, 2005) is an aspect of situation awareness that places joint emphasis on the process of gathering relevant information and the cognitive integration process that occurs shortly afterward. Expectations affect the information that one seeks. Expectations that are based on what is already known produce some automatic actions that might not have the desired effect if the interpretation of the situation is wrong. Preparedness for the unknown, surprising, or emergent events could produce a more advantageous result. Weick used the Centers for Disease Control’s (CDC) initial diagnosis of what turned out to be West Nile virus as an illustrative example. The correct diagnosis was obtained once the CDC became aware of lab tests that did not fit the original hypothesis and new information about West Nile virus that was not previously on record. The West Nile virus was not known in the Western Hemisphere up until that point. Arrival at the correct diagnosis was facilitated by coordinated communications among the responding agents.

The CDC’s experience raised the issue of how best to prepare for an emergent disease epidemic or bioterrorist attack. Preparedness for the unknown is the hallmark of a CAS. One does not prepare for the new disease exactly, according to Weick (2005). Rather, one prepares a reasonable strategy for finding out what it is and formulating an appropriate response, which includes coordinated actions.

Sometimes the situation report is accurate, but making sense out of it is an independent challenge. In the case of the floods that swamped the Red River Valley in 1997, the National Weather Service provided accurate reports and forecasts of river water levels and when they were expected to overflow the dam. Prompt sensemaking was required to respond to power outages and fires, evacuate a hospital, and combat poisonings from household chemicals (Sellnow et al., 2002), although the efforts were not entirely successful. Sellnow et al. emphasized the importance of sensemakers’ ability to think through the intricacies of a complex socio-technical system.

Sometimes the deficits in sensemaking do not reside with the ER teams, but rather with the populations that they try to serve. The tsunami that occurred in Southeast Asia in 2004 produced a dilemma in risk perception for many people. In the early stages of the event, water receded from the shore, exposing coral and other underwater attractions. People were attracted to the shoreline to gawk. Then, too late for many, they noticed the wall of water arriving and eventually interpreted the situation as dangerous (Guastello et al., 2008), although the published photographs showed that some people still did not get the message when the rushing water was imminent (pp. 115–116). Guastello et al. developed a cusp catastrophe model (an NDS process involving two attractors, a saddle, a bifurcation, and two control parameters) for risk perception that was based on previously known catastrophe models for approach and avoidance behavior and the perception of ambiguous stimuli. Other principles from the social psychology of group dynamics, notably social comparison theory, were also relevant: The sheer quantity of people making the wrong choice could be enough to induce additional casualties from more wrong choices. The social dynamics of risk perception indicated the importance of an intervention at the time and location where group decisions were being made (p. 121).

Dynamic Decisions

Dynamic decisions involve a series of decisions that are not independent of each other, a problem situation that changes either autonomously or by virtue of person-system interaction, and decisions that are made in real time (Brehmer, 2005, p. 77). The time-phased inflow of information induces dynamics that increase the complexity of the decision situation. Currently we know that time pressure, feedback delays, and reliability of incoming information place demands on the human operator that affect his or her performance (Brehmer, 1987; Jobidon, Rousseau, & Breton, 2005; Omodei, McLennan, & Wearing, 2005).

The computer programs that are typically used to generate scenarios for the study of dynamic decisions are alternatively known as scaled worlds or low-fidelity simulations (Schiflett, Elliott, Salas, & Coovert, 2004). As such there is a reduced concern for the realism of the peripheral features of the scenarios and a strong emphasis on the psychological constructs that the experimenter wants to assess. Realism is thus regarded as relative to the research objectives (Cooke & Shope, 2004). The systems lend themselves to reprogramming for desired experimental conditions. The game used in the Stag Hunt experiments, which was also used in the ER study, was essentially a low-fidelity simulation, but one that is operable without a computer system or the need to reprogram one. It also allowed for more natural interaction among the team players.

There has been some concern, however, that the unreliability of the performance measures used in research on dynamic decisions, which are typically a single number at the end of the simulation, is undermining attempts to test conventional hypotheses such as the relationship between general intelligence and performance (Brehmer, 2005; Elg, 2005). NDS theory would suggest here that the apparent unreliability of simulator performance measures could be related to the time-phased nature of the task and might not be a psychometric problem at all. As with other forms of individual and group learning, chaotic behavior occurs before the self-organization and stabilization at the levels of neural networks, individual behavior, and group work performance (Guastello et al., 2005). Unlike the typical learning experiments, however, the specific decisions within dynamic decision sets are not independent of each other. Choices made in an early stage can affect options and utilities of options later on. Thus a dynamic decision set is not subgame perfect, and is less so in situations where the natural disaster or human attackers are not adopting a dominant strategy in response to the ER team.

The interactions among the ER team members are not subgame perfect either. The natural disaster itself, however, does adopt a dominant strategy, although it is one of total indifference to the humans. As the situation becomes more degrees removed from subgame perfection and players delay longer in adopting a dominant strategy, the final results of the scenario become less predictable from information about utilities, options, and strategies available early in the scenario. The qualitative change in the dynamics of the learning system suggests further that the performance measures that are generated under a regime of instability or chaos are qualitatively different from those generated from a regime of self-organized stability.

Stag Hunt

Stag Hunt is a strictly cooperative game in which players, in essence, choose between joining the group (analogous to hunting stag) and going off on their own (analogous to hunting rabbits). Players adopt a dominant strategy that depends on how they perceive the efficacy of the group compared to their own individual efforts. A potential negative outcome in Stag Hunt is social loafing, or the free rider syndrome, where participants join teams that are likely to be successful with the intention of letting the others do the work. The syndrome tends to become stronger from moment to moment, or from decision to decision, when the group receives feedback that its performance is taking a downturn (Guastello & Bond, 2004).
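The choice structure can be summarized in a standard Stag Hunt payoff matrix (textbook values, not those used in the cited studies). Hunting stag pays best only when the other player also commits, so the best reply tracks what a player expects the group to do, and both all-stag and all-rabbit outcomes are self-reinforcing:

```python
# Stag Hunt in miniature: (row payoff, column payoff) for each move pair.
PAYOFF = {('stag', 'stag'): (4, 4), ('stag', 'rabbit'): (0, 3),
          ('rabbit', 'stag'): (3, 0), ('rabbit', 'rabbit'): (3, 3)}

def best_reply(expected_opponent_move):
    return max(('stag', 'rabbit'),
               key=lambda my_move: PAYOFF[(my_move, expected_opponent_move)][0])

print(best_reply('stag'))    # 'stag':   join the group if you expect commitment
print(best_reply('rabbit'))  # 'rabbit': go solo if you expect defection
```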

The dynamics of Stag Hunt games are prominent in ER. The group’s results will be optimal if everyone in the group pulls together on each part of the job. As mentioned earlier, the efficacy of the group can become challenged in the face of negative turns of events. A recent study (Guastello, 2010a) examined the impact of team size and performance feedback on the adaptation levels and performance of ER teams. Performance was measured in an experimental dynamic decision task in which ER teams of different sizes worked against an attacker who was trying to destroy a city. The complexity of the teams’ and attackers’ adaptation strategies and the role of the opponents’ performance were assessed by nonlinear regression analysis; the analysis featured the Lyapunov exponent (a measure of turbulence in a time series) associated with the performance trends. The results showed that teams were more readily influenced by the attackers’ performance than vice versa. Teams of 9 or 12 participants were more likely to prevail against the attacker than teams of 4 or 6 participants; only teams of 12 people, however, were effective at dampening the adaptive responses of the attacker. In all cases, when the attackers scored points, the teams’ performance on the next move declined. Overall, the attackers’ performance patterns showed greater levels of turbulence, which was interpreted as adaptability, than the teams’.
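The estimation step can be sketched with synthetic data. One regression form used in this research program treats the series as y(t+1) = a·exp(b·y(t)) + c, with the exponent b read as a Lyapunov-exponent analogue (positive values indicating turbulence); the exact model specification in the cited study may differ.

```python
# Fit y(t+1) = a*exp(b*y(t)) + c and recover the exponent b (synthetic data).
import numpy as np
from scipy.optimize import curve_fit

def model(y_t, a, b, c):
    return a * np.exp(b * y_t) + c

rng = np.random.default_rng(0)
y_t = rng.uniform(0.0, 1.0, 200)                                   # scores at time t
y_next = 0.5 * np.exp(0.8 * y_t) + 0.1 + rng.normal(0, 0.02, 200)  # scores at t + 1

(a, b, c), _ = curve_fit(model, y_t, y_next, p0=(1.0, 0.1, 0.0))
print(f"estimated exponent b = {b:.2f}")  # close to the generating value, 0.8
```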

Learning Organizations

Finally for this chapter we consider the situation in which the organization acts as a whole unit. Any teams or work groups that are involved could be coordinated within a larger hierarchy of activities. The notion of a learning organization became a fashionable view of an organization shortly before the notion of the organization as a complex adaptive system took hold (Seo, Putnam, & Bartunek, 2004). In its earlier manifestations, a learning organization was one that had evolved processes or structures analogous to individual perception, cognition, memory, and adaptation processes. In later work, learning processes in organizations are seen to promote self-organization of dominant strategies or schemata from the bottom up. Individuals and teams adopt processes that produce ideas, schemata, mental models, and meanings that are eventually shared with other teams until some become dominant enough in the organization to shape new schemata for newcomers or new responses to new challenges (Van de Ven & Hargrave, 2004).

The perception, situation awareness, or sensemaking processes in organizational contexts require information exchange networks that extend outside the organization to other organizations in the same industry, other organizations in different industries, and of course customers. Van de Ven and Hargrave (2004) gave an example of a successful use of bottom-up development of wind turbine technology. Danish industries started with relatively simple technology and, through close interaction with customers and their needs, shaped a premier technology that is financially successful for the organizations involved. Would-be competitors from the United States, however, took an isolationist strategy and attempted to leapfrog the stages of development by developing an advanced technology quickly. They maintained little communication with customers and were generally unsuccessful in their efforts.

Diffusion of Innovation

The creative products alluded to earlier in this chapter will diffuse if they are successful. Diffusion usually takes the form of buying and adopting a product, but it could also mean adopting an idea in some other way. The most widely cited model for diffusion of innovation is the S-curve model that was introduced by Rogers (1962) and developed through numerous editions over the years. The idea is depicted in Figure 36.7. When viewed over time, there are early adopters who respond quickly to the idea. Then there is the bulk of the population that responds more gradually at first, but with a sudden shift to widespread adoption. The sudden shift is thought to be inherent in the shape of a normal distribution viewed as a cumulative function. Finally there are those who are slowest to adopt; they join in around the time the market for the product is more or less saturated.

Figure 36.7 S-curve for innovation diffusion.
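The S-curve is often summarized with a logistic function; the text describes a cumulative normal, for which the logistic is a nearly indistinguishable stand-in. A minimal sketch with arbitrary parameters:

```python
# Cumulative adoption A(t) = K / (1 + exp(-r*(t - t0))): K is market saturation,
# r the adoption rate, and t0 the takeoff (inflection) point. Values are arbitrary.
import math

def cumulative_adoption(t, K=1.0, r=0.8, t0=10.0):
    return K / (1 + math.exp(-r * (t - t0)))

for t in range(0, 21, 4):
    level = cumulative_adoption(t)
    print(f"t={t:2d}  {level:5.2f}  {'#' * int(40 * level)}")
```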

Of course one cannot adopt even the most advantageous product or idea unless one hears about it first. Thus networks of agents are thought to facilitate communications about the innovation, and hence the adoption process (Valente, 1995). Information flows quickly to the extent that the network is tightly coupled, meaning that the density of interactions among agents is high. The downside, however, is that tightly coupled networks eventually run out of fresh ideas because all agents acquire all the ideas. Loosely coupled networks transmit information less quickly, but because they are more diffuse they have access to more sources of novelty, and thus have more to report when needed (Frantz & Carley, 2009).
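A few lines of simulation make the density argument concrete (schematic, with arbitrary parameters): an idea seeded with one agent spreads along random links, denser networks saturate in fewer steps, and sparse ones spread more slowly and may leave some agents unreached.

```python
# Spread of an idea over a random directed network of n agents.
import random

def spread(n=100, density=0.1, seed=3):
    rng = random.Random(seed)
    links = [[j for j in range(n) if j != i and rng.random() < density]
             for i in range(n)]
    informed, steps = {0}, 0
    while len(informed) < n:
        new = {j for i in informed for j in links[i]} - informed
        if not new:
            break                    # remaining agents are unreachable
        informed |= new
        steps += 1
    return steps, len(informed)      # (steps to saturation, agents reached)

print("tightly coupled (p = 0.20):", spread(density=0.20))
print("loosely coupled (p = 0.02):", spread(density=0.02))
```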

There is a tendency in the literature on diffusion of innovation to assume that the innovation will diffuse and should diffuse, even though many innovations fail, and that somehow something is wrong with the people or organizations that do not adopt the innovation. Failures may be related instead to better alternatives being available, or to good reasons for resistance to a particular innovation, cost being one of them. Thus Jacobsen and Guastello (2007) proposed a cusp catastrophe model (Figure 36.8) to describe when a particular agent will adopt an innovation. On the one hand, the model still reflects the S-shape, but it is the result of a more complex process with two control parameters. Positive expectations about the innovation, which are predicated on seeking information and actually finding it, lead the agent to the inflection point where the adoption could occur. A resistance factor, however, separates those who adopt at that point from those who let it go by. Adoption of an innovation in the face of strong resistance forces is likely to result in a stable adoption; the agents must really want it. Weak resistance might seem favorable to adoption at first blush, but it actually permits what was unthinkable in prior models: buying the innovation and not using it, or adopting it and exchanging it for something else. Thus the adoption dynamic is, in principle, reversible.

Figure 36.8 Cusp catastrophe model for the diffusion of innovations. Reprinted from Jacobsen and Guastello (2007, p. 503) with permission of the Society for Chaos Theory in Psychology & Life Sciences.
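The geometry of Figure 36.8 follows from the cusp equilibrium equation. In the usual parameterization, stable states satisfy y³ − by − a = 0, with a read here as positive expectations (the asymmetry parameter) and b as resistance (the bifurcation parameter); the published model’s variable scaling may differ. The sketch below shows that strong resistance produces two coexisting stable states (hence hysteresis and stable adoption), whereas weak resistance yields a single, smoothly reversible state.

```python
# Stable equilibria of the cusp: minima of V(y) = y^4/4 - b*y^2/2 - a*y.
import numpy as np

def stable_states(a, b):
    roots = np.roots([1.0, 0.0, -b, -a])                 # solves y^3 - b*y - a = 0
    real = roots[np.abs(roots.imag) < 1e-9].real
    return sorted(y for y in real if 3 * y**2 - b > 0)   # keep minima only

for a in (-1.0, -0.2, 0.2, 1.0):
    print(f"strong resistance b=1.5, a={a:+.1f}:", stable_states(a, 1.5))
print("weak resistance b=-1.0, a=+0.2:", stable_states(0.2, -1.0))
```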

In their assessment of the adoption behavior of 13 energy-saving innovations by large commercial or governmental facilities, Jacobsen and Guastello (2007) found that seven fit the cusp model remarkably well, five were more consistent with a linear model of attitude and behavior, and one fit a power-law distribution better than the other alternatives. The cusp model was most apparent for innovations that had longer amounts of time between their first introduction to the markets and the time of the survey. The model was least descriptive of innovations that were either very new, and thus did not have sufficient time to diffuse, or simply drowned out by more attractive alternatives.

Learning Strategies

An organization co-evolves with a changing environment. The organization itself emerges as a means of channeling energy into the production of products and services and fitting them to potential markets. Numerous decisions need to be made concerning the nature and scope of the market, possible product features, pricing, advertising strategies, and so forth. (Ergonomics are, of course, very important features of product design.) There is a learning process involved in isolating the most profitable combinations.

Allen (2009) reported a simulation study that examined four learning strategies and the relative effectiveness of each: (a) Darwinian learning, where the organizations start with a random strategy, organizations with good strategies survive, and organizations with poor strategies go bankrupt and are replaced by new organizations with random strategies; (b) imitate the winner, which means that organizations copy others in the environment that have apparently functional strategies; (c) trial and error, where organizations explore possible strategies, try some, observe results, re-evaluate, and perhaps try something else while continuing to consider new options; and (d) mixed strategies, where all of the previous three exist in the organizational environment. Results showed that Darwinian learning produced the worst results for the industry as a whole with the largest proportion of bankruptcies. Imitating the winner worked much better, although it was subject to large fluctuations in profitability levels; it involved imitating the winner’s limitations too. Overall the greatest success was recorded for the trial-and-error strategy, where agents learn from their mistakes and continually seek out possible improvements by exploring what Allen (2009) characterized as their landscape of opportunities.
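The flavor of this comparison can be reproduced with a deliberately simple agent model; the sketch below runs on assumptions of our own (a one-dimensional landscape, a noisy profit signal, hypothetical thresholds) and is not Allen’s simulation. Poor performers react according to their strategy, while trial-and-error firms continually test small changes and keep only improvements; across runs, trial and error typically ends highest and Darwinian replacement lowest, echoing the reported ordering.

```python
# Schematic comparison of Darwinian, imitate-the-winner, and trial-and-error learning.
import random
random.seed(2)

def true_profit(x):
    return max(0.0, 1.0 - 8.0 * (x - 0.62) ** 2)   # best niche near x = 0.62

def observed_profit(x):
    return true_profit(x) + 0.3 * random.random()  # noisy signal firms act on

def simulate(strategy, n_firms=20, steps=300):
    xs = [random.random() for _ in range(n_firms)]
    for _ in range(steps):
        profits = [observed_profit(x) for x in xs]
        best = xs[profits.index(max(profits))]
        for i in range(n_firms):
            if strategy == 'trial_and_error':
                cand = min(1.0, max(0.0, xs[i] + random.gauss(0, 0.1)))
                if observed_profit(cand) > profits[i]:
                    xs[i] = cand                   # keep improvements only
            elif profits[i] < 0.5:                 # poor performer reacts
                if strategy == 'darwinian':
                    xs[i] = random.random()        # bankruptcy, random entrant
                elif strategy == 'imitate':
                    xs[i] = best                   # copy the apparent winner
    return sum(true_profit(x) for x in xs) / n_firms

for s in ('darwinian', 'imitate', 'trial_and_error'):
    print(s, round(simulate(s), 2))
```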

Summary of Common Themes

The cognitive functions that are usually associated with individuals all have counterparts at the group and organizational system levels. Schemata that emerge at the group or organizational levels result from a combination of high entropy and interaction among agents, which together facilitate the emergence of dominant schemata within the collective. Once a dominant schema emerges, it has a downward influence that directs or limits the schemata of individuals within the system.

Symmetry breaking is also possible, however, and it often occurs in response to adaptive pressures. Creative problem solving is a concerted attempt to develop new schemata. There are two flows of ideas within the group that self-organize into a solution to a problem. The capability of a person, group, or organization to break symmetry and self-organize a response is an expected feature of a healthy CAS.

Coordination itself is a learning process wherein the group members entrain their behaviors to each other. This facet of emergence goes beyond the simple interaction processes that are usually encapsulated in agent-based computer modeling programs. Furthermore, there are several different coordination processes relevant to collective group behavior, not just one as commonly assumed.

ER efforts by or within organizations involve several classes of NDS processes simultaneously. Situation awareness, sensemaking, creative problem solving, coordination, and dynamic decisions all contribute to the final result. The first four involve self-organizing dynamics. All involve sensitivity to initial conditions, which is a hallmark feature of chaos. It is unlikely that all agents will have a firm grasp of the entire situation or action plan at all times simultaneously, but if they do so at the collective level, the ER efforts should be as successful as the situation allows.

At the organizational level, successful adaptation requires information flows outside the organization’s boundaries to other organizations within the industry, organizations in other industries, and the customer bases. Flows consolidate in the form of networks. Diffusion of innovation is predicated on information flow on the one hand, and resistance forces on the other. In the classic view the two forces are oppositional. In the NDS view, however, they play separate roles that affect the stability of adoption and screen the innovations that could be adopted.

There are different possible processes of learning and evolution in organizations. Simulation studies show that the most effective type of learning involves continual exploration of new ideas, trial and error, and subsequent improvement. In other words, organizations need to function as a CAS instead of simply imitating the winner.

Future Directions

There are numerous opportunities for future research on emergent phenomena in organizations, and it would be helpful to consider them in categories. First, there is a problem that actually emanates from basic cognitive theory, which is the extent to which human thought is representational (Dietrich & Markman, 2000) or computational (Gluck & Pew, 2005) in nature. According to Sulis (2009), collective intelligence in ants is computational and not representational, yet human thought processes consist of both. We can then ask how the two principles might balance in human collective intelligence and how different dynamics might ensue from different tasks with different proportions of each type of thinking.

Second, it is well known that leaders emerge from leaderless groups as the group works together for a while. Although the literature on the nonlinear dynamics of leadership emergence is substantial and growing (Guastello, 2007, 2009c, 2010b, 2011; Hazy, 2008), the topic was not included in this chapter because the cognitive features of the process have not been sufficiently specified. It would be reasonable to anticipate, however, that the cognitive processes that are part of the emergence of leaders, and the other types of emergence that have been considered here, would be substantially connected by similarities in the nonlinear dynamic processes involved. For instance, we might ask how the role of the executive function in individual cognitive processes corresponds to the contribution of leadership roles in groups, or executives in organizations more broadly.

The third class of problems in nonlinear dynamics in cognition pertains to cognitive workload, fatigue, and stress. Although some viable models have been developed empirically, the range of tasks and situations is limited; the current status of the work stood unaltered for quite some time (Guastello, 2003, 2006), but has recently resumed, as reported here. There are thus plenty of opportunities for new research on these models. What combinations of tasks and environmental constraints induce fatigue or compensate for it beyond what we already know about resource allocation (Wickens, 2002)? How can the degrees of freedom principle suggest improvements for task design, task allocation, or task switching? Does anything emerge in these situations at the collective level? What constitutes the capacity for resilience or elasticity, and is it the same thing in every circumstance?

Fourth, Dooley (2009) observed that empirical studies on emergence in organizations (rather than simply at the group level of analysis) are in very short supply relative to the expansiveness of the theoretical works on the topic. Greater reliance on simulation strategies, such as those found in Allen (2009) concerning learning dynamics or Frantz and Carley (2009) concerning network dynamics, could produce some important new developments. Another strategy would utilize communication analysis techniques to decipher the process by which meaning is made in organizations, and how situational conditions could affect the development of meaning (Dooley & Corman, 2004; Dooley, Corman, McPhee, & Kuhn, 2003).

Finally, the concept of dynamic decisions and its experimental platform present considerable opportunities for NDS analysis. The one study on the subject (Guastello, 2010a) isolated NDS concepts and an experimental design that could inform a wide range of new studies. It is probable that the methodological issues that are currently encountered in dynamic decision research could be resolved by incorporating NDS principles and analysis.

References

Abbott A., Button C., Pepping G. -J., & Collins D. (2005). Unnatural selection: Talent identification and development in sport. Nonlinear Dynamics, Psychology, and Life Sciences, 9, 61–88.

Ackerman P. L. (2011). Cognitive fatigue. Washington, DC: American Psychological Association.

Allen P. A. (2009). Complexity, evolution, and organizational behavior. In S. J. Guastello, M. Koopmans, & D. Pincus (Eds.), Chaos and complexity in psychology: Theory of nonlinear dynamical systems (pp. 452–474). New York, NY: Cambridge University Press.

Ashby W. R. (1956). Introduction to cybernetics. New York, NY: Wiley.

Andriani P., & McKelvey B. (2011). From skew distributions to power-law science. In P. Allen, S. Maguire, & B. McKelvey (Eds.), The Sage handbook of complexity and management (pp. 254–273). Thousand Oaks, CA: Sage.

Axelrod R. (1984). The evolution of cooperation. New York, NY: Basic Books.

Bak P. (1996). How nature works: The science of self-organized criticality. New York, NY: Springer-Verlag/Copernicus.

Bernstein N. (1967). The coordination and regulation of movements. Oxford, England: Pergamon.

Brehmer B. (1987). Development of mental models for decision in technological systems. In J. Rasmussen, K. Duncan, & J. Leplat (Eds.), New technology and human error (pp. 111–120). New York, NY: Wiley.

Brehmer B. (2005). Micro-worlds and the circular relation between people and their environment. Theoretical Issues in Ergonomics Science, 6, 73–94.

Burke C. S., Stagl K. C., Salas E., Pierce L., & Kendall D. (2006). Understanding team adaptation: A conceptual analysis and model. Journal of Applied Psychology, 91, 1189–1207.

Campion M. A., Papper E. M., & Medsker G. J. (1996). Relations between work team characteristics and effectiveness: A replication and extension. Personnel Psychology, 49, 429–452.

Cigler B. A. (2007). The “big questions” of Katrina and the 2005 great flood of New Orleans. Public Administration Review, 67, 64–76.

Comfort L. (1996). Self-organization in disaster response: Global strategies to support local action. In G. Koehler (Ed.), What disaster response management can learn from chaos theory (pp. 94–112). Sacramento, CA: California Research Bureau, California State Library.

Comfort L. (1999). Nonlinear dynamics in disaster response: The Northridge California earthquake, January 17, 1994. In E. Elliott & L. D. Kiel (Eds.), Nonlinear dynamics, complexity, and public policy (pp. 139–152). Commack, NY: Nova Science.

Cooke N. J., & Shope S. M. (2004). Designing a synthetic task environment. In S. G. Schiflett, L. R. Elliott, E. Salas, & M. D. Coovert (Eds.), Scaled worlds: Development, validation, and applications (pp. 263–296). Burlington, VT: Ashgate.

DeGreene K. B. (1991). Emergent complexity and person-machine systems. International Journal of Man-Machine Studies, 35, 219–234.

Derthick M. (2007). Where federalism didn’t fail. Public Administration Review, 67, 36–47.

Dennis A. R., & Valacich J. S. (1993). Computer brainstorms: More heads are better than one. Journal of Applied Psychology, 78, 531–537.

Dietrich E., & Markman A. B. (2000). Cognitive dynamics: Conceptual and representational change in humans and machines. Mahwah, NJ: Erlbaum.

Dooley K. J. (1997). A complex adaptive systems model of organization change. Nonlinear Dynamics, Psychology, and Life Sciences, 1, 69–97.

Dooley K. J. (2009). Organizational psychology. In S. J. Guastello, M. Koopmans, & D. Pincus (Eds.), Chaos and complexity in psychology: Theory of nonlinear dynamical systems (pp. 434–451). New York, NY: Cambridge University Press.

Dooley K. J., & Corman S. (2004). Dynamic analysis of news streams: Institutional versus environmental effects. Nonlinear Dynamics, Psychology, and Life Sciences, 8, 403–428.

Dooley K. J., Corman S., McPhee R. D., & Kuhn T. (2003). Modeling high-resolution broadband discourse in complex adaptive systems. Nonlinear Dynamics, Psychology, and Life Sciences, 8, 403–428.

Dore M. H. I. (2009). The impact of Edward Lorenz: An introductory overview. Nonlinear Dynamics, Psychology, and Life Sciences, 13, 243–247.

Durso F. T., & Sethumadhavan A. (2008). Situation awareness: Understanding dynamic environments. Human Factors, 50, 442–448.

Elg F. (2005). Leveraging intelligence for high performance in complex dynamic systems requires balanced goals. Theoretical Issues in Ergonomics Science, 6, 63–72.

Endsley M. R., Bolte B., & Jones D. G. (2003). Designing for situation awareness. Philadelphia, PA: Taylor & Francis.

Farazmand A. (2007). Learning from the Katrina crisis: A global and international perspective with implications for future crisis management. Public Administration Review, 67, 149–159.

Frantz T. L., & Carley K. M. (2009). Agent-based modeling within a dynamic network. In S. J. Guastello, M. Koopmans, & D. Pincus (Eds.), Chaos and complexity in psychology: Theory of nonlinear dynamical systems (pp. 475–505). New York, NY: Cambridge University Press.

Gluck K. A., & Pew R. W. (2005). Modeling human behavior with integrated cognitive architectures. Mahwah, NJ: Erlbaum.

Goldberger A. L., Amaral L. A. N., Hausdorff J. M., Ivanov P. C., Peng C. K., & Stanley H. E. (2002). Fractal dynamics in physiology: Alterations with disease and aging. Proceedings of the National Academy of Sciences, 99, 2466–2472.

Goldstein J. (2011). Emergence in complex systems. In P. Allen, S. Maguire, & B. McKelvey (Eds.), The Sage handbook of complexity and management (pp. 65–78). Thousand Oaks, CA: Sage.

Gregson R. A. M. (1992). n-Dimensional nonlinear psychophysics. Mahwah, NJ: Erlbaum.

Gregson R. A. M. (2009). Psychophysics. In S. J. Guastello, M. Koopmans, & D. Pincus (Eds.), Chaos and complexity in psychology: Theory of nonlinear dynamical systems (pp. 108–131). New York, NY: Cambridge University Press.

Guastello S. J. (1985). Euler buckling in a wheelbarrow obstacle course: A catastrophe with complex lag. Behavioral Science, 30, 204–212.

Guastello S. J. (1995). Chaos, catastrophe, and human affairs. Mahwah, NJ: Erlbaum.

Guastello S. J. (1998). Creative problem solving groups at the edge of chaos. Journal of Creative Behavior, 32, 38–57.

Guastello S. J. (2002). Managing emergent phenomena: Nonlinear dynamics in work organizations. Mahwah, NJ: Erlbaum.

Guastello S. J. (2003). Nonlinear dynamics, complex systems, and occupational accidents. Human Factors in Manufacturing, 13, 293–304.

Guastello S. J. (2005a). Nonlinear models for the social sciences. In S. A. Whelan (Ed.), The handbook of group research and practice (pp. 251–272). Thousand Oaks, CA: Sage.

Guastello S. J. (2005b). Statistical distributions and self-organizing phenomena: What conclusions should be drawn? Nonlinear Dynamics, Psychology, and Life Sciences, 9, 463–478.

Guastello S. J. (2006). Human factors engineering and ergonomics: A systems approach. Mahwah, NJ: Erlbaum.

Guastello S. J. (2007). Nonlinear dynamics and leadership emergence. Leadership Quarterly, 18, 357–369.

Guastello S. J. (2009a). Chaos as a psychological construct: Historical roots, principal findings, and current growth directions. Nonlinear Dynamics, Psychology, and Life Sciences, 13, 289–310.

Guastello S. J. (2009b). Chaos and conflict: Recognizing patterns. Emergence: Complexity and Organization, 10(4), 1–9.

Guastello S. J. (2009c). Group dynamics: Adaptability, coordination, and leadership emergence. In S. J. Guastello, M. Koopmans, & D. Pincus (Eds.), Chaos and complexity in psychology: Theory of nonlinear dynamical systems (pp. 402–433). New York, NY: Cambridge University Press.

Guastello S. J. (2010a). Nonlinear dynamics of team performance and adaptability in emergency response. Human Factors, 52, 162–172.

Guastello S. J. (2010b). Self-organization and leadership emergence in emergency response teams. Nonlinear Dynamics, Psychology, and Life Sciences, 14, 179–204.

Guastello S. J. (2011). Leadership emergence in engineering design teams. Nonlinear Dynamics, Psychology, and Life Sciences, 15, 87–104.

Guastello S. J. (in press). Modeling illness and recovery with nonlinear dynamics. In J. Sturmberg & C. M. Martin (Eds.), Handbook on complexity in health. New York, NY: Springer.

Guastello S. J., Bock B., Caldwell P., & Bond R. W., Jr. (2005). Origins of group coordination: Nonlinear dynamics and the role of verbalization. Nonlinear Dynamics, Psychology, and Life Sciences, 9, 175–208.

Guastello S. J., Boeh H., Schimmels M., Gorin H., Huschen S., Davis E., … Poston K. (2012). Cusp catastrophe models for cognitive workload and fatigue in a verbally cued pictorial memory task. Human Factors.

Guastello S. J., Boeh H., Shumaker C., & Schimmels M. (2012). Catastrophe models for cognitive workload and fatigue. Theoretical Issues in Ergonomics Science, 13, 586–602.

Guastello S. J., & Bond R. W., Jr. (2004). Coordination in Stag Hunt games with application to emergency management. Nonlinear Dynamics, Psychology, and Life Sciences, 8, 345–374.

Guastello S. J., & Bond R. W., Jr. (2007). The emergence of leadership in coordination-intensive groups. Nonlinear Dynamics, Psychology, and Life Sciences, 11, 91–117.

Guastello S. J., Gorin H., Huschen S., Peters N. E., Fabisch M., & Poston K. (2012). New paradigm for task switching strategies while performing multiple tasks: Entropy and symbolic dynamics analysis of voluntary patterns. Nonlinear Dynamics, Psychology, and Life Sciences, 16, 471–497.

Guastello S. J., & Gregson R. A. M. (Eds.). (2011). Nonlinear dynamical systems analysis for the behavioral sciences using real data. Boca Raton, FL: CRC Press/Taylor & Francis.

Guastello S. J., & Guastello D. D. (1998). Origins of coordination and team effectiveness: A perspective from game theory and nonlinear dynamics. Journal of Applied Psychology, 83, 423–437.

Guastello S. J., Koehler G., Koch B., Koyen J., Lilly A., Stake C., & Wozniczka J. (2008). Risk perception when the tsunami arrived. Theoretical Issues in Ergonomics Science, 9, 95–114.

Guastello S. J., & Liebovitch L. S. (2009). Introduction to nonlinear dynamics and complexity. In S. J. Guastello, M. Koopmans, & D. Pincus (Eds.), Chaos and complexity in psychology: Theory of nonlinear dynamical systems (pp. 1–40). New York, NY: Cambridge University Press.

Guastello S. J., & McGee D. W. (1987). Mathematical modeling of fatigue in physically demanding jobs. Journal of Mathematical Psychology, 31, 248–269.

Guastello S. J., & Philippe P. (1997). Dynamics in the development of large problem solving groups and virtual communities. Nonlinear Dynamics, Psychology, and Life Sciences, 1, 123–149.

Gureckis T. M., & Goldstone R. L. (2006). Thinking in groups. Pragmatics & Cognition, 14, 293–311.

Haken H. (1984). The science of structure: Synergetics. New York, NY: Van Nostrand Reinhold.

Hancock P. A., & Desmond P. A. (Eds.). (2001). Stress, workload, and fatigue. Mahwah, NJ: Erlbaum.

Hardy C. (1998). Networks of meaning: A bridge between mind and matter. Westport, CT: Praeger.

Hazy J. K. (2008). Toward a theory of leadership in complex systems: Computational modeling explorations. Nonlinear Dynamics, Psychology, and Life Sciences, 12, 281–310.

Heath R. A. (2002). Can people predict chaotic sequences? Nonlinear Dynamics, Psychology, and Life Sciences, 6, 37–54.

Holland J. H. (1995). Hidden order: How adaptation builds complexity. Cambridge, MA: Perseus.

Hollis G., Kloos H., & Van Orden G. C. (2009). Origins of order in cognitive activity. In S. J. Guastello, M. Koopmans, & D. Pincus (Eds.), Chaos and complexity in psychology: Theory of nonlinear dynamical systems (pp. 206–241). New York, NY: Cambridge University Press.

Hollnagel E., Woods D. D., & Leveson N. (Eds.). (2006). Resilience engineering. Burlington, VT: Ashgate.

Hong S. L. (2010). The entropy conservation principle: Applications in ergonomics and human factors. Nonlinear Dynamics, Psychology, and Life Sciences, 14, 291–315.

Ioteyko J. (1920). La fatigue [Fatigue] (2nd ed.). Paris, France: Flammarion.

Jacobsen J. J., & Guastello S. J. (2007). Nonlinear models for the adoption and diffusion of innovations for industrial energy conservation. Nonlinear Dynamics, Psychology, and Life Sciences, 11, 499–520.

Jagacinski R. J., & Flach J. M. (2003). Control theory for humans: Quantitative approaches to modeling performance. Mahwah, NJ: Erlbaum.

Jobidon M. -E., Rousseau R., & Breton R. (2005). The effect of variability in temporal information on the control of a dynamic task. Theoretical Issues in Ergonomics Science, 6, 49–62.

Kantowitz B. H. (1985). Channels and stages in human information processing: A limited analysis of theory and methodology. Journal of Mathematical Psychology, 29, 135–174.

Kauffman S. A. (1993). Origins of order: Self-organization and selection in evolution. New York, NY: Oxford University Press.

Kauffman S. A. (1995). At home in the universe: The search for laws of self-organization and complexity. New York, NY: Oxford University Press.

Kelly K. (1994). Out of control: The new biology of machines, social systems, and the economic world. Reading, MA: Addison-Wesley.

Koehler G. (1995). Fractals and path-dependent processes: A theoretical approach for characterizing emergency medical responses to major disasters. In R. Robertson & A. Combs (Eds.), Chaos theory in psychology and the life sciences (pp. 199–216). Hillsdale, NJ: Erlbaum.

Koehler G. (1996). What disaster response management can learn from chaos theory. In G. Koehler (Ed.), What disaster response management can learn from chaos theory (pp. 2–41). Sacramento, CA: California Research Bureau, California State Library.

Koehler G. (1999). The time compacted globe and the high tech primitive at the millennium. In E. Elliott & L. D. Kiel (Eds.), Nonlinear dynamics, complexity, and public policy (pp. 153–174). Commack, NY: Nova Science.

Kohonen T. (1989). Self-organization and associative memory (3rd ed.). New York, NY: Springer-Verlag.

Laughlin P. R. (1996). Group decision making and collective induction. In E. Witte & J. H. Davis (Eds.), Understanding group behavior: Consensual action by small groups (pp. 61–80). Mahwah, NJ: Erlbaum.

Lorenz E. N. (1963). Deterministic nonperiodic flow. Journal of the Atmospheric Sciences, 20, 130–141.

Mayer-Kress G., Newell K. M., & Liu Y-T. (2009). Nonlinear dynamics of motor learning. Nonlinear Dynamics, Psychology, and Life Sciences, 13, 3–26.

McDaniel R. R., Jr., & Driebe D. J. (Eds.). (2005). Uncertainty and surprise in complex systems. New York, NY: Springer.

McKelvey B., & Lichtenstein B. B. (2007). Leadership in the four stages of emergence. In J. K. Hazy, B. B. Lichtenstein, & J. Goldstein (Eds.), Complex systems leadership theory (pp. 93–107). Litchfield Park, AZ: ISCE.

Marken R. S. (1991). Degrees of freedom in behavior. Psychological Science, 2, 86–91.

May R. M. (1976). Simple mathematical models with very complex dynamics. Nature, 261, 459–467.

Maynard-Smith J. (1982). Evolution and the theory of games. Cambridge, England: Cambridge University Press.

Meister D. (1977). Implications of the system concept for human factors research methodology. Proceedings of the Human Factors Society, 21, 453–456.

Morris C. (1906). The San Francisco calamity by earthquake and fire. City unknown: W. E. Scull.

Morris J. C., Morris E. D., & Jones D. M. (2007). Reaching for the philosopher’s stone: Contingent coordination and the military’s response to hurricane Katrina. Public Administration Review, 67, 94–106.

Newell K. M. (1991). Motor skill acquisition. Annual Review of Psychology, 42, 213–237.

Omodei M. M., McLennan J., & Wearing A. J. (2005). How expertise is applied in real-world decision environments: Head-mounted video and cued recall as a methodology for studying routines of decision making. In T. Betsch & S. Haberstroh (Eds.), The routines of decision making (pp. 271–288). Mahwah, NJ: Erlbaum.

Pauchant T. C., & Mitroff I. I. (1992). Transforming the crisis-prone organization. San Francisco, CA: Jossey-Bass.

Prigogine I., & Stengers I. (1984). Order out of chaos: Man’s new dialog with nature. New York, NY: Bantam.

Reason J. (1997). Managing the risks of organizational accidents. Brookfield, VT: Ashgate.

Renaud P., Chartier S., & Albert G. (2009). Embodied and embedded: The dynamics of extracting perceptual visual invariants. In S. J. Guastello, M. Koopmans, & D. Pincus (Eds.), Chaos and complexity in psychology: Theory of nonlinear dynamical systems (pp. 177–205). New York, NY: Cambridge University Press.

Riley J. M., Endsley M. R., Bolstad C. A., & Cuevas H. M. (2006). Collaborative planning and situation awareness in Army command and control. Ergonomics, 49, 1139–1153.

Rogers E. M. (1962). The diffusion of innovations. New York, NY: Free Press.

Rosenbaum D. A., Slotta J. D., Vaughn J., & Plamondon R. (1991). Optimal movement selection. Psychological Science, 2, 92–101.

Sawyer R. K. (2005). Social emergence: Societies as complex systems. New York, NY: Cambridge University Press.

Schiflett S. G., Elliott L. R., Salas E., & Coovert M. D. (Eds.). (2004). Scaled worlds: Development, validation, and applications. Burlington, VT: Ashgate.

Sellnow T. L., Seeger M. W., & Ulmer R. R. (2002). Chaos theory, informational needs, and natural disasters. Journal of Applied Communications Research, 30, 269–292.

Seo M.-G., Putnam L. L., & Bartunek J. M. (2004). Dualities and tensions of planned organizational change. In M. S. Poole & A. H. Van de Ven (Eds.), Handbook of organizational change and innovation (pp. 73–107). New York, NY: Oxford University Press.

Sheridan T. B. (2008). Risk, human error, and system resilience: Fundamental ideas. Human Factors, 50, 418–426.

Simonton D. K. (1988). Creativity, leadership, and change. In R. J. Sternberg (Ed.), The nature of creativity: Contemporary psychological perspectives (pp. 286–426). Cambridge, MA: MIT Press.

Sprott J. C. (2003). Chaos and time-series analysis. New York, NY: Oxford University Press.

Strogatz S. (2003). Sync: The emerging science of spontaneous order. New York, NY: Hyperion.

Sulis W. (1997). Fundamental concepts of collective intelligence. Nonlinear Dynamics, Psychology, and Life Sciences, 1, 35–54.

Sulis W. (2008). Stochastic phase decoupling in dynamical networks. Nonlinear Dynamics, Psychology, and Life Sciences, 12, 327–358.

Sulis W. (2009). Collective intelligence: Observations and models. In S. J. Guastello, M. Koopmans, & D. Pincus (Eds.), Chaos and complexity in psychology: Theory of nonlinear dynamical systems (pp. 41–72). New York, NY: Cambridge University Press.

Thom R. (1975). Structural stability and morphogenesis. New York, NY: Benjamin-Addison-Wesley.

Thompson H. L. (2010). The stress effect: Why smart leaders make dumb decisions—and what to do about it. San Francisco, CA: Jossey-Bass.

Townsend J. T., & Wenger M. J. (2004). A theory of interactive parallel processing: New capacity measures and predictions for a response time inequality series. Psychological Review, 30, 708–719.

Trianni V. (2008). Evolutionary swarm robotics: Evolving self-organizing behaviors in groups of autonomous robots. Berlin, Germany: Springer.

Trofimova I. (2002). Sociability, diversity and compatibility in developing systems: EVS approach. In J. Nation, I. Trofimova, J. Rand, & W. Sulis (Eds.), Formal descriptions of developing systems (pp. 231–248). Dordrecht, The Netherlands: Kluwer.

Turvey M. T. (1990). Coordination. American Psychologist, 45, 938–953.

Valente T. (1995). Network models of the diffusion of innovations. Cresskill, NJ: Hampton Press.

Van de Ven A. H., & Hargrave T. J. (2004). Social, technical and institutional change: A literature review and synthesis. In M. S. Poole & A. H. Van de Ven (Eds.), Handbook of organizational change and innovation (pp. 259–303). New York, NY: Oxford University Press.

van Heerden I. L. I. (2007). The failure of the New Orleans levee system following hurricane Katrina and the pathway forward. Public Administration Review, 67, 24–35.

Waldrop M. M. (1992). Complexity: The emerging science at the edge of order and chaos. New York, NY: Simon & Schuster.

Ward L. M., & West R. L. (1998). Modeling human chaotic behavior: Nonlinear forecasting analysis of logistic iteration. Nonlinear Dynamics, Psychology, and Life Sciences, 2, 261–282.

West B. J., & Deering B. (1995). The lure of modern science: Fractal thinking. Singapore: World Scientific.

Weick K. E. (2005). Managing the unexpected: Complexity as distributed sensemaking. In R. R. McDaniel, Jr. & D. J. Driebe (Eds.), Uncertainty and surprise in complex systems (pp. 51–65). New York, NY: Springer.

Wickens C. D. (2008). Situation awareness: Review of Mica Endsley’s 1995 articles on situation awareness theory and measurement. Human Factors, 50, 397–403.

Stephen J. Guastello, Department of Psychology, Marquette University, Milwaukee, WI


Team Cognition: Coordination across Individuals and Machines


Patricia Bockelman Morrow and Stephen M. Fiore

The Oxford Handbook of Cognitive Engineering

Edited by John D. Lee and Alex Kirlik

Print Publication Date: Feb 2013Subject: Psychology, Cognitive PsychologyOnline Publication Date: May 2013DOI: 10.1093/oxfordhb/9780199757183.013.0012


Abstract and Keywords

Team cognition emerges as both the process and product of effective collaboration and coordination. This chapter examines the essential vocabulary for framing team cognition as an interdisciplinary endeavor. A historical review of major contributors to contemporary team cognition theories is provided to establish the field’s place in the larger stories of psychology, computer intelligence, and learning. Mental model representation methodologies provide bridges between theory and practice.

Keywords: team cognition, mental models, collaboration, interdisciplinarity, cognitive engineering

Introduction to Team Cognition

Cognitive science has had an important influence on a number of fields, including engineering and systems design, thus making an impact on our understanding of how to improve organizational and human performance. Importantly, this application has produced a bidirectional influence in the theories that cognitive scientists have produced to understand complex cognition and human performance in team contexts. This influence has helped to move the field such that a melding of human and machine cognition is emerging—an important theoretical blend that is producing a more holistic understanding of cognition in context. In this chapter, we review some of the foundational elements of team cognition theory and discuss the historical basis from which they developed. We discuss these in the context of a theory of team cognition as both process and product of interaction. We first establish a working vocabulary with which the topics may be discussed. The interdisciplinary roots of cognitive science make it especially important to clarify terms that varied fields may use with other intended meanings. We then review some of the theoretical shifts during the 20th century that set in motion the current trends in team cognition. We follow this with a discussion of a team cognitive tool that we suggest provides an example of how to bridge theory and practice. Our overarching goal is to discuss a sampling of the historical antecedents to team cognition as well as discuss important developments in the fields addressing complex collaborative cognition. Through all of this is the theme of the importance of understanding the relation between coordinated cognition and context, whether that context consists of additional humans or machines.

For the cognitive engineer, this means a challenge to design systems for nested intelligences. If teams of people create an emergent cognitive entity, and teams can be found within the context of larger teams, then a cognitive engineer will design for the human and for the layers of cognitive entities found at each team level. In this chapter, we will describe the historical context of team cognition, with one goal being the articulation of team cognition as a distinct form of cognition, one that demands its (p. 201) own tools and techniques from system designers and managers. It is a type of cognition that is inseparable from the humans who compose a team, but distinct in its own cognitive demands. To illustrate the challenges of cognitive engineering for teams, we describe the processes and products related to shared mental models (SMM). While SMM are by no means the only facet of team cognition without a counterpart in individual cognition, they pose distinct theoretical and design challenges for cognitive engineers. We will close with a look at the future directions and research questions that directly relate to cognitive engineering.

Foundations of Team Cognition

The interdisciplinary nature of cognitive science requires caution as we use terms that some of the adjacent and complementary disciplines may apply with different implications. Therefore, we first clarify some of the nomenclature framing team cognition theory. We then review key historical contributions in basic and applied research that have brought us to the contemporary models for understanding the factors that contribute to collaborative cognitive processes and outcomes.

Coordinating the Nomenclature

As team cognition emerges as an increasingly distinct area of inquiry unto itself, it is imperative that the nomenclature becomes more established in its application. One important distinction that has arisen through academic refinement is that between group and team. Though there are a multitude of team types, and studies will often focus on specific aspects of teams, there are well-established components of teams that distinguish them from groups. A “team” is made up of interdependent individuals who are viewed collectively, share responsibility for performance and achievement, and are embedded in organizational contexts (Cohen & Bailey, 1997; Mathieu, Heffner, Goodwin, Salas, & Cannon-Bowers, 2000; Hackman et al., 2000; Katzenbach & Smith, 1993; Salas & Fiore, 2004). Conversely, the term “group” may be used more generically, implying a commonality but not necessarily identifying shared constructs—which becomes vitally consequential when the term “cognition” is added to “team.” Cognitive processes arise during communications and interactions, becoming both meaningful and measurable in the forms of process and product (Fiore & Salas, 2004). Although there are numerous cognitive constructs at play in groups, the individuals are still largely independent. Team members, however, depend upon one another and, by definition, are interdependent (see Saavedra, Earley, & Van Dyne, 1993).

These distinctions led to the succinct definition of teams as “interdependent collections of individuals who share responsibility for specific outcomes for their organizations” (Sundstrom, DeMeuse, & Futrell, 1990, p. 120), or as “two or more people who interact dynamically, interdependently and adaptively toward” a shared goal (Salas, Dickinson, Converse, & Tannenbaum, 1992, p. 4). We embrace these definitions, noting the particular importance of interdependence, shared goals, and collective adaptivity, as it is from these factors that we can see how the construct of team cognition can help us better explore distinct theoretical principles.

The cognitive and social sciences have also identified an important quandary unique to the problems of team cognition. Decades ago, attempting to address coordination, researchers identified a phenomenon that Steiner (1972) referred to as process loss—coordination decrements that led to performance below team potential. If failure to support coordination results in team failure, it stands to reason that coordination lies at the heart of team success. The factors that enhance coordination—such as communication, shared knowledge, and team member awareness—are tied together in team cognition in ways that Fiore and Salas (2004) likened to the binding problem. In neuroscience, the binding problem speaks to the conceptualization of the myriad coordinated neurological impulses that must coalesce to generate synchronized processes. Fiore and Salas (2004) suggested that we can similarly conceptualize team cognition as the mechanism that “fuses the multiple inputs of a team into its own functional entity” (p. 237). Just as neural firings synchronize, successful team performance requires analogously coordinated actions from team members. Thus, cognitive and behavioral components must bind to produce desired outcomes in ways that can be recognized and assessed via coordination terms. Consequently, the binding problem becomes representative of both the neurocognitive and the team cognitive. Further expanding on the critical importance of coordination, Fiore and Salas (2006) argued for a more foundational understanding of team coordination, asserting that it is unique from “collaboration” or “cooperation,” terms sometimes used interchangeably in theories of team cognition. They note that collaboration and cooperation simply mean to “work together,” but the concept of coordination most cogently captures (p. 202) what we mean by effective teamwork. Considering this in light of the literature on teamwork, perhaps the most faithful definition of coordination can be found in theorizing by Marks and colleagues. Specifically, they view team coordination as “orchestrating the sequence and timing of interdependent actions” (Marks, Mathieu, & Zaccaro, 2001, p. 363). The etymological origins of coordination show that it was derived from three distinct concepts (i.e., “arrange,” “order,” and “together”), and we can see how the Marks et al. definition encompasses these origins and succinctly relates them to teams.
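To make this definition concrete in a simplified way, the sketch below is our illustration rather than anything proposed by Marks and colleagues; the action names and dependencies are hypothetical. It treats coordination, at its barest, as deriving an order in which interdependent actions can execute so that each begins only after the actions it depends on are complete.

    # A minimal sketch of "orchestrating the sequence and timing of
    # interdependent actions": hypothetical actions mapped to the set of
    # actions that must finish before they can begin.
    from graphlib import TopologicalSorter  # standard library, Python 3.9+

    dependencies = {
        "stabilize_patient": set(),
        "order_imaging": {"stabilize_patient"},
        "read_imaging": {"order_imaging"},
        "plan_surgery": {"stabilize_patient", "read_imaging"},
    }

    # static_order() yields each action only after all of its dependencies.
    schedule = list(TopologicalSorter(dependencies).static_order())
    print(schedule)
    # ['stabilize_patient', 'order_imaging', 'read_imaging', 'plan_surgery']

Real coordination layers timing, concurrency, and negotiation on top of mere sequencing, but the dependency structure is the part of teamwork that the Marks et al. definition foregrounds.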

In addition to this foundational connection between team coordination and team cognition, Fiore and Salas (2004) illustrated how conceptualizations of team cognition fit under a general theme of awareness or communication. Specifically, team cognition research focused either on a type of awareness used to bind a team’s actions or on implicit/explicit communication as the means through which team cognition is developed or scaffolded in support of coordination. First, awareness in general, and shared awareness in particular, emerges as a key concept in team cognition research whereby researchers speak of the general need for shared awareness within teams. For example, team metacognition (Hinsz, 2004), mutual understanding of team knowledge and capabilities (e.g., Rentsch & Woehr, 2004), and computer-supported collaborative work (e.g., Gutwin & Greenberg, 2004) all emphasize awareness in the context of team cognition. Second, communication has been described as a “window to team cognition” (Cooke, Salas, Kiekel, & Bell, 2004) and may be the method whereby individual cognitive components actually become integrated within the team. It is through team processes such as communication that teams come to share or make common their awareness (cf. Fussell & Krauss, 1989).

More recently, Elias and Fiore (2012) engaged in a deeper analysis of the distinction between collaboration and coordination. They note that team cognition not only enables and facilitates team social cognition but also provides the context through which we can understand the constraints that scaffold a team’s interaction—a set of constraints that “endows behavior with meaning and purpose, with directedness and aim, and allows for anticipation precisely by narrowing the range of action” (p. 585). Their conceptual analysis suggests a system of frames inside frames in which collaboration occurs within, and because of, the constraints of coordination. Elias and Fiore (2012) argue that autonomy is inherent to collaboration, but that there is still subordination of the “part” to the larger team “whole.” Collaboration, therefore, consists of interaction and interdependence among autonomous individuals. But while coordination provides the constraints for team interaction, collaboration allows for the adaptive response to a team’s interdependencies. Implicit and explicit team processes, then, create the shared awareness necessary for collaboration and coordination. As such, in the context of team cognition, these are neither synonymous nor entirely separable constructs; rather, they are a necessary complement that creates effective teamwork.

This analysis of terminology was meant to clarify the meaning and relation of a set of foundational concepts. With these concepts as our stepping-off point, we next discuss some of the historical antecedents that set the stage for the varied ways in which researchers have studied team cognition through the lenses of coordination, awareness, and communication.

Past as Prelude: Historical Underpinnings of Team Cognition

Team cognition as a distinct area of inquiry has arisen as both a melding and an offshoot. Its history pulls from multidisciplinary considerations and collaborations that speak to this merger of complementary concepts. However, it also is distinct unto itself, contributing methods and philosophies that have afforded researchers the opportunities to consider problems of biological and artificial cognition. To better understand the basis for the approaches to team cognition, we provide a brief history of developments in applied psychology and related fields. Although this early work is only tangentially related to teams, we suggest that it set the stage for the field of team cognition by demonstrating the value to theory and practice of connecting context to studies of cognition.

Though applied psychology may trace its roots to the early 20th century, it would take decades to move the field to a point where it truly examined contextually situated cognition. From observations made in the courts of law, Hugo Münsterberg called for psychology to be applied in real-world contexts (Benjamin, 2000). In other words, he encouraged his field to move beyond the theoretical findings in controlled laboratory settings to observe psychological phenomena in the natural settings of human interaction. Throughout the subsequent decades, opinions toward basic versus applied psychology ebbed and flowed. However, globally significant events, primarily World Wars I and II, would usher in an era where applied psychology not (p. 203) only was respected but also contributed to theory and practice in human performance.

The large-scale wars unfolded in an industrial age, when the interactions of team members and machinery required strategic consideration. From this, researchers transcended the artificial barriers between basic and applied science, and effectively melded theory and practice in human performance (for a discussion, see Fiore, Salas, & Pavlas, 2009). In particular, World War I ushered in a complex industrialization of combat and, consequentially, new approaches to thinking about human performance. A new type of war fighter emerged, the pilot, inseparable in task from the machinery he operated, and coordinated in complex task goals with other pilots and ground troops. Aviators, facing unique task and perceptual stressors, became subjects of military study as researchers sought to understand and improve their performance (e.g., Hoffman & Deffenbacher, 1992; Meister, 1999).

Psychological measures were also being developed and applied for selection and assignment during World War I, and this continued into World War II (Katzell & Austin, 1992)—a practice that continues today. Although not described in the modern terminology of human cognition, this early work enhanced our understanding of cognition by setting the stage for how to train “skills” and their relation to “aptitudes.” Further, this work began to illustrate the importance of conceptualizing the relation between the job or task context and cognitive processes.

The interaction of humans with each other and with machines continued to gain attention from researchers following World War II as computers became important tools in not only military strategy but also domestic industry. Understanding how teams would work in context became a central concern, and the trends in the West would begin to converge with task theories developing in other countries. For example, researchers in the Soviet Union were developing complementary approaches to understanding humans in complex situations. Activity theory informed research on humans in their work environments by providing tools for looking at a person’s activity at both micro and macro levels (see Nardi, 1996). To activity theorists, context plays a central role, applying to internal goals and objects and simultaneously to external factors, thereby requiring multidisciplinary input. Artifacts, the connections between humans and their experiences, are at once anthropological, historical, and sociological. These views set the stage for influential theories in the cognitive sciences, where ideas about “situated” and “distributed” cognition argued for the importance of context and collaboration to both human cognition as well as human-machine cognition (e.g., Clancey, 1997; Hutchins, 1995).

The final decades of the 20th century brought conspicuous shifts in cognitive research as the integration of technologies became the norm in everyday life. Researchers reassessed theories and frameworks, observing human interaction with sophisticated new equipment in complex environments, the likes of which had simply not existed in earlier times (e.g., aircraft cockpits, power plants, control systems). User-inspired engineering followed the newer foci, adjusting work theories into more holistic and useful cognitive task paradigms. From this, some theorists and practitioners began to realize the futility of trying to parse human from machine, recognizing that cognition had to be viewed across humans and their machines.

The evolution of the “task analysis” concept illustrates these developments. Earlier versions of task analysis, easily transposed to flowcharts where chains of events and consequences could be shown and anticipated, failed to fully capture the real-world mental and physical activities involved in decision making, especially in technology-rich and high-risk collaborative contexts. Cognitive task analysis (CTA) and team cognitive task analysis were developed as methods and techniques to inform decision making in these dynamic environments (see Rasmussen, 1985; Crandall, Klein, & Hoffman, 2006). CTA methods depended heavily on in-depth observations and interviews with experts (Crandall et al., 2006; Klein & Militello, 2001; Militello & Hutton, 1998). Expertise holds an important place in CTA, the assumption being that experts can provide copious insights into the essential knowledge, skills, and processes foundational to optimal performance. Though much of the theoretical literature arose simultaneously, it came from across the globe as more people in more places needed to work in harmony with advanced technologies and each other.

In sum, this brief review illustrates how a blend of basic and applied science supported the development of theories foundational to examination of human performance. These arose from a careful analysis of the work of not only individuals but also teams, carrying out complex responsibilities with and through sophisticated technologies. Understanding how humans interacted with each other and with their systems helped set the stage for theories of (p. 204) cognition in context. Important methods and theories for human performance testing (e.g., aptitude tests) and for system design (e.g., activity theory and CTA methods) arose from this work. But we turn next to the theoretical view that had the largest impact on research in cognition—the information processing approach to human cognition.

Human Information Processing Model of Cognition

The information processing model of cognition reigned as the dominant theoretical approach to cognitive psychology in the post–WWII era. This model facilitated research and engineering because the computer metaphor assigned segments to cognitive processing—input, process, output (see Simon, 1978)—and was productively applied to theories about groups (Hinsz, Tindale, & Vollrath, 1997). In turn, these classifications of activity informed computer engineering by opening a potential “likeness” to the human mind, inspiring programmers to increase human-like interaction by modeling human linguistic and emotional responses as the product of input-based processes on elementary and aggregate levels. A complete review of the information processing approach is beyond the scope of this chapter. We therefore focus on its impact on theories of team cognition and the view of groups as information processors.

As with individuals, the information processing model is meant to capture a significant amount of the cognitive activity observed in groups and teams (e.g., Lord & Maher, 1990; Larson & Christensen, 1993; Levine, Resnick, & Higgins, 1993). Examining team cognition through the information processing model allowed researchers to expand the focus of study to include cognitive processes at the individual level and the group level, treating the group as a unit of cognitive study. This distinction is critical, as it moves beyond traditional examinations of social, contextual, or ecological cues as impacting individual minds and acknowledges the emergent and dynamic processes that occur in the collective (von Cranach, Ochsenbein, & Valach, 1986; Ickes & Gonzalez, 1994; Stasser & Dietz-Uhler, 2001).

In their influential paper that helped to cement the view of cognitive processing at the level of group, Hinsz, Tindale, and Vollrath (1997) applied a model for information processing asserting that “group-level information processing includes information, ideas, and cognitive processes that are shared, in that not only are they common among group members but also that the information, ideas, and cognitive processes are being shared (i.e., exchanged and transferred)” (p. 44). Their work applied that model to the group process by identifying the following components: processing objective, attention, encoding, storage, retrieval, processing workspace, output or response, and feedback. Those components have been explored in numerous studies involving groups and teams and, because of their influence on team cognition, we briefly review a subset of the ideas from Hinsz et al. In addition, for each of these we illustrate the practical relevance of these theoretical contributions.
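As a structural illustration only, the sketch below arranges a subset of these components as stages in a pipeline; the filtering and encoding logic is a hypothetical placeholder, not the Hinsz et al. model itself.

    # A minimal sketch of the group-as-information-processor idea: a
    # processing objective steers attention, attended items are encoded into
    # a shared representation, stored, and then retrieved as a response.
    from dataclasses import dataclass, field

    @dataclass
    class GroupProcessor:
        objective: str                             # processing objective
        storage: set = field(default_factory=set)  # group-level memory

        def attend(self, items):
            # Attention: keep only items relevant to the current objective.
            return [item for item in items if self.objective in item]

        def encode(self, items):
            # Encoding: fold items into the shared representation.
            return {"shared:" + item for item in items}

        def process(self, items):
            self.storage |= self.encode(self.attend(items))  # storage
            return sorted(self.storage)  # retrieval -> output/response

    team = GroupProcessor(objective="budget")
    print(team.process(["budget forecast", "holiday party", "budget risks"]))
    # Feedback, not modeled here, would adjust the objective for the next pass.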

First, the processing objective is the information embedded in a given context. The roles of members, diversity in perspective, procedures and governance, nature of the task, and even the instructions, influence the efficiency and efficacy of processing (Sherif, 1935; Hinsz, Tindale, & Vollrath, 1997; De Dreu, Nijstad, & van Knippenberg, 2008). The complexity of some modern team situations, such as aviation combat and nuclear facility management, demonstrates multiple processing objectives. From a practical standpoint, the consequential cognitive processing load of teams has become so great in those multiple-process situations (e.g., Johnston, Fiore, Paris, & Smith, in press) that research must examine how team cognition can be engineered with awareness of simultaneous or sequential objectives so that, for example, intelligent agents are organically and anticipatorily embedded in teams.

Attention follows naturally from processing objectives. Research has shown a human tendency toward distraction in group settings, and attention problems manifest in numerous ways. For example, group members may distract one another, and individuals may be self-conscious or insecure and consequently self- (rather than task-) focused (Mullen, Chapman, & Peaugh, 1989). Aspects of information distribution among group members and components of interaction also have been shown to influence attention (Stewart & Stasser, 1998). At a practical level, such notions reconnect team cognition to its interdisciplinary roots and illustrate how cognitive engineering needs to be informed by the social sciences.

The group as information processor must also encode, or structure and interpret concepts, schema, and individual representations in shared models. Teams rely on encoding at two distinct yet inseparable levels of cognition. Obviously, team members must be able to represent all of the stages of task accomplishment, but they also need to collectively (p. 205) represent the task overall as well as the individual roles for meeting the goal (e.g., Cannon-Bowers, Salas, & Converse, 1993; Salas, Sims, & Burke, 2005). Encoding is intimately connected to the other stages of information processing, as it both influences, and is influenced by, the processing and attention arising during collaboration.

The notion of storage is well accepted as a computer-based concept, and, within collaborative contexts, it serves the same role. But, rather than manifesting in neatly organized files, humans tap into a variety of memory systems. When measuring simple storage capacity, groups have an advantage over individuals simply because capacity increases as the group size grows. Ideally, groups access this advantage via interpersonal communication to enhance performance and judgment. But a long line of research suggests that, despite a broader memory storage capability, the advantage is not always realized (e.g., Hinsz, 1990; Stasser & Stewart, 1992; Stasser & Titus, 1985, 1987). As an important illustration of the social interacting with the cognitive, when designing or organizing teams, it may be useful to consider the impact of minority voice on storage, as it can contribute to a broader base of options in decision making and avoid the “groupthink” behavior of team members possessing or expressing only common knowledge (Nemeth, 1986; Janis, 1982). Further, in a recent meta-analysis of information sharing within teams, Mesmer-Magnus and DeChurch (2009) found that factors such as task demonstrability fostered sharing, whereas others, like information distribution, inhibited it. As such, technologies that can attenuate the factors impacting information sharing are an important target for cognitive engineering.
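One way to see why the storage advantage is not always realized is a simple sampling argument in the spirit of the hidden-profile studies cited above: if each member independently mentions an item he or she holds with some probability, items held by many members are far more likely to surface in discussion than uniquely held items. The numbers below are hypothetical, and the model is our illustration, not an analysis from the chapter.

    # If each holder mentions an item with probability p, an item held by
    # k members enters the discussion with probability 1 - (1 - p) ** k.
    p = 0.3  # hypothetical per-member probability of mentioning a held item

    for k in (1, 2, 4, 6):
        print(f"held by {k} member(s): discussed with prob {1 - (1 - p) ** k:.2f}")
    # held by 1: 0.30 ... held by 6: 0.88. Widely shared items dominate the
    # discussion, so uniquely held knowledge tends to stay hidden.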

Finally, access is critical to the utilization of information stored across members. In an early discussion of this, Hinsz (1990) asserted an advantageous position for groups in the retrieval phase because of their multiple access points or triggers for memory retrieval. Further, members may recognize errors in another person’s recollection and may collectively be able to construct a more accurate aggregate account. Conversely, groups can create retrieval interference (e.g., Stroebe & Diehl, 1994; Basden et al., 1998). Thus, it is critical, in terms of engineering team cognitive systems, to leverage the benefits while addressing the challenges that develop in group retrieval settings.

This brief review of the groups as information processors model was meant to illustrate that, as engineers and computer scientists design for teams, they must consider the interactive nature of the group as it impacts processing, encoding, and retrieval. By taking into account individual and group level strategies and the positional dynamics emergent in the team setting, technology-mediated collaborative activity can be designed to mitigate process loss (cf. Steiner, 1972) and to produce the kind of coordination that fully leverages the promise of team cognition. We next turn to a discussion of theories of teamwork that have moved beyond the information processing view of team cognition.

From Cognition as Information Processing to Macrocognition

The information processing model would remain the primary means to understand human cognition until the mid-1970s. Despite its strengths as an approach for conceptualizing cognition, the interpretation carried numerous limitations (Hollnagel, 2002). This view considered cognition outside of context and, as such, complex cognitive activity was thought of as complex only because variables within the environment were so, as opposed to acknowledging that the processes themselves were very elaborate. As the information processing model is reductionist in nature, researchers conducted microcognitive research in controlled settings, seeking to reduce the cognitive phenomena to the smallest contributing components. Microcognition studied the mind as functions and processes of the individual, so that group or team interaction was interpreted solely as the result of independent brain activity. The research concentrated on inquiries like serial or parallel attention, puzzle solving, and interpretation errors (Crandall et al., 2006). However, for the purpose of interpreting and understanding natural cognitive activity and team functions, microcognitive approaches are sorely limited.

An important development at this time was theory that distinguished between levels of process control, that is, analysis at the micro- and macro-operational levels (see Schraagen, Klein, & Hoffman, 2008; Woods & Roth, 1986). Out of this, macrocognition was offered as a theoretical lens through which to interpret complex cognition. Researchers in the area of naturalistic decision making adopted this view and expanded upon it to more fully describe complex cognition in natural settings (Hutton, Miller, & Thorsden, 2003; Klein, Klein, & Klein, 2000; Klein et al., 2003; Schraagen, Militello, Ormerod, & Lipshitz, 2008). Macrocognition suggested a way of developing “a framework for studying and understanding cognitive processes as they (p. 206) directly affect performance of natural tasks” [with representative macrocognitive functions described as] “decision-making, situation awareness, planning, problem detection, option generation, mental simulation, attention management, uncertainty management, expertise, and so forth” (Klein et al., 2000, p. 173).

Holding to this broader schema for examining cognition, Hollnagel (2002) identified five aspects that support the macrocognitive approach. First, across natural and artificial cognitive systems, the process and product of cognition will be distributed. Second, cognition is not self-contained and finite, but a continuance of activity. Third, cognition is contextually embedded within a social environment. Fourth, cognitive activity is not static, but dynamic. Last, artifacts aid in nearly every cognitive action. Importantly, these latter notions fit within emerging theories from cognitive science that similarly have argued for the importance of understanding externalized and embedded cognition (e.g., Clark, 2001; Clark & Chalmers, 1998).

Contexts like medical decision making illustrate the value of macrocognitive concepts such as distributed and embedded cognition and the use of cognitive artifacts as a part of the team’s cognition. For example, Nemeth and colleagues (2004, 2006) analyzed the value of externalized cognitive artifacts such as schedules, lists, and display boards in medical decision making. They argued that such artifacts “mediate collective work … as a way to maintain an overview of the total activity … [and] are products of various work activities that are distributed in time and location” (2006, p. 728). Essentially, these forms of externalized cognition support assessment and planning, as well as coordination for contingencies and negotiation of resources. In the broader context of collaborative medical decision making, with its inherent uncertainties, these externalizations serve as “cognitive-aid structures” (e.g., Rao & Turoff, 2000) to reify the decision processes among collaborating experts.

Research in collaborative engineering domains is also illustrative of cognition of the more macrocognitive form. For example, software design and development and system administration all require complex collaborative problem solving. Further, teams ranging from somewhat homogeneous teams, such as in software development, to often heterogeneous teams, such as in systems administration, are all created to develop, manage, and maintain complex technological products or systems. These collaborative tasks consist of dynamic cognitive processes requiring diagnostic interrogation of some system and diagnostic questioning from an oftentimes ad hoc team. Haber (2005) referred to this as “group sense making” when he described problem definition and solution processes in systems administration. In an example illustrative of these macrocognitive processes, he states that a “problem existed due to interactions between the components of a very complicated system, and the experts on the different components needed to work together to understand the cause and find a solution. The overall strategy was a cycle of shared observations of the system in question, developing hypotheses as individuals, small groups, or the group as a whole, and implementing changes to attempt a fix” (p. 3). Maglio et al. (2003) similarly discuss computer systems administration from a perspective that fits within a macrocognitive frame and articulates the complex nature of the collaboration. They describe a requirement for developing common ground and the coordination of attention across a number of team members, ranging from engineers engaged in troubleshooting to technical support personnel to software application developers.

But social-cognitive factors also come into play in macrocognitive contexts. In a study of expert software teams, Sonnentag (2000) showed that experienced problem solvers place a high value on cooperation and engage in more work-related communication. Thus, a crucial factor is an emphasis on cooperation strategies because the work places high cognitive and social demands on system administrators. Specifically, these engineers have to “troubleshoot systems, making sense of millions of log entries by controlling thousands of configuration settings, and performing tasks that take hundreds of steps. The work also places high social demands on practitioners as systems administrators need organizational and interpersonal skills to coordinate tasks and collaborate effectively with others” (Barrett et al., 2004). Related to this, Sonnentag and Lange (2002) found that, among engineering and software development teams, a general knowledge of cooperation strategies, that is, what to do in situations requiring cooperative behavior, is related to better performance. Further, this research showed that cooperation is more valued by the experts than by the moderate-level performers. Because of this, experts engaged in higher amounts of work-related communication, helped their coworkers, and sought out feedback from coworkers (Sonnentag, 2000).

More recently, this notion of collaborative macrocognition has been elaborated upon to specifically (p. 207) address macrocognition in teams as a form of complex and coordinative cognition (Letsky, Warner, Fiore, & Smith, 2008; Warner, Letsky, & Cowen, 2005). Macrocognition in teams is defined as the internalized and externalized high-level mental processes employed by teams to create new knowledge during complex, collaborative problem solving (Letsky, Warner, Fiore, Rosen, & Salas, 2007). High-level, in this setting, encompasses the processes of combining, visualizing, and/or integrating information to resolve ambiguity and in support of the discovery of new knowledge.

In this context, macrocognition in teams is a particular instance of the more general area of team cognition research in that team cognition theory tends to emphasize coordinating actions among individuals. For example, research in team cognition might examine how team members sequence actions in service of meeting a team’s objectives. Macrocognition in teams focuses more on the knowledge work done by a team and how externalized knowledge and the creation of cognitive artifacts support this work (Fiore, Rosen, et al., 2010; Fiore, Smith-Jentsch, Salas, Warner, & Letsky, 2010). In this sense, knowledge work is defined as the transformation of data and informational inputs to build knowledge that enables the team to develop problem representations and candidate solutions for the problem at hand (Fiore, Elias, Salas, Warner, & Letsky, 2010). Although team cognition research does address “knowledge” in teams when discussing shared mental models and related forms of overlapping knowledge structures (e.g., Cannon-Bowers, Salas, & Converse, 1993; Marks, Zaccaro, & Mathieu, 2000; Mathieu, Heffner, Goodwin, Cannon-Bowers, & Salas, 2005; Mathieu, Heffner, Goodwin, Salas, & Cannon-Bowers, 2000; Rentsch & Davenport, 2006; Salas & Fiore, 2004), as noted, its emphasis is more upon coordination processes and executing previously learned task procedures in familiar environments. Macrocognition in teams is distinguished from the broader area of team cognition research in that it does not involve selecting and executing procedures or rules. Rather, the focus is on understanding processes engaged by teams to generate new knowledge to solve particular problems in context (Fiore, Rosen, et al., 2010; Fiore, Smith-Jentsch, et al., 2010). More specifically, it is “the process of transforming internalized knowledge into externalized team knowledge through individual and team knowledge-building processes” (Fiore, Rosen, et al., 2010, pp. 204–205).

In sum, applying macrocognition theory to the study of teams helps researchers and engineers account for the interwoven influences on human performance beyond the individual members’ thoughts, perceptions, or activity. Macrocognitive studies have shifted psychology’s narrower focus from simple tasks with control groups to naturalistic studies that are more qualitative in nature than quantitative (Crandall et al., 2006). This has opened the door for interdisciplinary approaches to thinking about the diverse problems of team cognition, all in the service of improving our understanding of organizational productivity. We next delve deeper into a core element of team cognition, the shared mental model construct. We seek to address the question, “How do we do team cognition?” and we describe “process mapping” to illustrate how team cognition emerges as both process and product during process re-engineering. We describe its components and use it to connect shared mental model theory to theorizing on macrocognition in teams.

Mental Models and Team Cognition

From research on team process and performance, we have come to understand the key success factors for expert teams and the relationship between team cognition and team processes. Based upon a review of the teamwork literature, Salas and colleagues identified what it is that expert teams do best (for elaboration on these, see Salas, Rosen, Burke, Goodwin, & Fiore, 2006), and we next summarize a subset of the elements of expert teamwork most relevant to team cognition. As will be seen, team cognition benefits teams by helping them comprehend and deal with complex phenomena, predict performance, and aid in the production of a course of action (Cannon-Bowers & Salas, 2001).

First and foremost, research suggests that expert teams hold shared or compatible knowledge structures referred to as either shared mental models or transactive memory systems (see DeChurch & Mesmer-Magnus, 2010, for a review). Second, expert teams demonstrate collaborative learning and use that to adapt to changing situations (e.g., Edmondson et al., 2001). Related to learning, expert teams will often engage in preparatory and reflective activities to improve performance. Here they will anticipate performance needs as well as reflect upon their performance episodes (Smith-Jentsch, Zeisig, Acton, & McPherson, 1998). In line with the notion of shared mental models, expert teams manage expectations of their teammates by clearly understanding the roles and responsibilities necessary to (p. 208) meet team goals. For example, air traffic controllers adapt responsibilities during shifts to meet evolving workload conditions (La Porte & Consolini, 1991; Beauchamp, Bray, Eys, & Carron, 2002; Brun et al., 2005; Bliese & Castro, 2000). In further elaboration of shared knowledge, expert teams consist of members who have a clear understanding of their mission, vision, and goals (e.g., Castka, Bamber, Sharp, & Belohoubek, 2001; Pearce & Ensley, 2004). This shared knowledge also supports superior decision making and reduces errors. Communication becomes more efficient when members hold compatible knowledge; specifically, team members give and receive timely information (e.g., Orasanu, 1990; Patel & Arocha, 2001). Along these lines, they use this knowledge to identify relevant teamwork and task-work requirements. In this sense, expert teams balance task characteristics and workload with individual expertise, and work to alter their operating environment to optimize communication and coordination (Schaafstal, Johnston, & Oser, 2001). As can be seen, shared mental models are an important component of expert teamwork, essentially acting as the foundation from which effective team processes can be executed. Given this, we turn next to a more thorough explication of the shared mental model construct.

Shared Mental Models as an Organizing Construct

Shared mental models are clearly one of the more promising advancements in team cognition. We next connect theory and practice through a more thorough review of shared mental model theory and examination of a process re-engineering tool that may contribute to better team performance by both establishing and actually becoming a shared mental model. Further, we illustrate how this supports some of the core ideas within a theory of macrocognition in teams.

The team cognition literature identifies a set of factors that must be present to be considered a shared mental model (e.g., Cannon-Bowers, Salas, & Converse, 1993; Klimoski & Mohammed, 1994; Mathieu et al., 2000), and these facilitate explanation, description, and prediction to aid team performance. In this vein, Fiore and Schooler (2004) used this theoretical approach to argue for the following essential factors in collaborative problem solving: awareness of problem structure, understanding of the roles and skills that teammates contribute as they relate to the task, and awareness that all team members possess that problem structure knowledge. In the following section, we build on their assertion that the development of these components leads to more productive problem conceptualization processes and subsequent solution generation.

To the first component, a shared problem structure, Orasanu and Fischer (1992) propose that, “the degree to which a team establishes a shared mental model for a problem and the degree to which it is made explicit in communication, will determine the team’s effectiveness in coping with the problem” (p. 189). This shared problem structure provides team members with the benefits of overlapping and organized knowledge (Resnick, 1991). Whether declarative or procedural, the knowledge that concerns both the problem and the rules of decision making are included in this body of correlative knowledge (Cannon-Bowers et al., 1993).

The second component, understanding each team member’s skills and roles, helps team members use fully the potential contributions of each other (Fiore & Schooler, 2004). The main assumption of this component is that understanding the endowments and obligations of others decreases erroneous assumptions and directs specific task segments to the members most likely to succeed at them.

These explanatory aspects of shared mental models shape the “predictive” potential for team performance. The more team members share an accurate and clear mental model, the more likely they are to perform successfully (Cannon-Bowers & Salas, 2001). Furthermore, the shared mental model gives members insight that can be used to avoid or fix potential problems. In this respect, the shared model can be self-diagnostic, predicting the outcome and indicating the pieces of task flow that would result in such an end.

In a recent meta-analysis of the team cognition literature, DeChurch and Mesmer-Magnus (2010) addressed the question of cognition’s value to team performance. They hypothesized that team cognition would be positively related to behavioral team process, team motivational states, and team performance. Via an analysis of over 60 experiments and nearly 4,000 teams, this analysis of team cognition research provides a clearer sense of what has been validated empirically and what yet needs to be explored further. They found that the effectiveness of a team is due primarily to “interaction processes and emergent states” that connect input and outcome. “Team cognition is an emergent state that refers to the manner in which knowledge important to team functioning is mentally organized, represented and distributed within the team and allows (p. 209) team members to anticipate and execute actions” (p. 2). They conclude that this emergence in teams can manifest itself in either shared mental models (i.e., knowledge held in common) or as transactive memory systems (i.e., knowledge distributed across members).
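Measurement gives these constructs teeth. One common style of operationalizing sharedness, which we sketch below in simplified form, is to have each member rate the relatedness of the same concept pairs and then average the between-member agreement; the members, concepts, and ratings here are hypothetical.

    # A toy team-level convergence index: the average pairwise correlation
    # between members' relatedness ratings over the same concept pairs.
    from itertools import combinations
    from statistics import correlation  # standard library, Python 3.10+

    ratings = {  # member -> ratings of the same five concept pairs
        "pilot":     [5, 4, 1, 2, 5],
        "copilot":   [5, 3, 1, 3, 4],
        "navigator": [2, 5, 4, 1, 2],
    }

    pairwise = [correlation(ratings[a], ratings[b])
                for a, b in combinations(ratings, 2)]
    print(f"mental model convergence: {sum(pairwise) / len(pairwise):.2f}")
    # 1.0 would mean identical models; values near zero or below suggest
    # that members hold divergent representations of the task.

A transactive memory analysis would instead ask how well members know who holds which non-overlapping knowledge; the same kind of rating data can be scored either way.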

In light of the empirical evidence supporting the critical contributions of mental models and distributed knowledge, cognitive engineers and others who work to support complex collaborative processes must address the challenge of facilitating these emergent constructs to support team success. We turn next to a discussion of an example of an approach from process re-engineering that illustrates processes and products of interaction, along with cognitive emergence.

Process Mapping as Reification of the Shared Mental Model Concept

Process mapping is not a new tool in organizational knowledge management. It, along with various other techniques including flowcharts and diagrams, has been used by managers and engineers alike to visually represent team structure and task components. In application, process mapping helps a team gain a “big picture” for what is happening and what ought to happen within a complex organizational process. But its potential for team cognition is much broader. In the following section, we expand on some of the key ideas originally set forth by Fiore and Schooler (2004), who asserted that process mapping serves as an example of capturing a shared mental model, and, of significance to those who research and design for cognition, the creation of the process map actually facilitates the construction of shared models.

Most simply stated, a process map captures a visual representation of work flow within a given organizational process. Team members contribute from their individual sets of experiences, insights into workflow, and knowledge of task processes as they know them. Collaboratively, the team produces a representation of their process knowledge. Because the map is constructed from unique perspectives, even the best-informed team members will realize that they had gaps in their individual knowledge. This shared understanding comes directly from the act of constructing the process map and allows groups to focus appropriately on problem conceptualization rather than moving straight to solution generation (Fiore & Schooler, 2004).
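The claim that constructing the map surfaces individual knowledge gaps can be illustrated with a toy merge of individual views; every member name, step, and owner below is hypothetical.

    # Merge each member's partial view of the workflow into a shared map,
    # then report what each member did not have in view before the merge.
    individual_views = {
        "analyst":  {"intake": "analyst", "triage": "analyst"},
        "engineer": {"triage": "analyst", "fix": "engineer"},
        "manager":  {"fix": "engineer", "signoff": "manager"},
    }

    shared_map = {}  # the emerging artifact: step -> responsible role
    for view in individual_views.values():
        for step, owner in view.items():
            shared_map.setdefault(step, owner)

    for member, view in individual_views.items():
        gaps = sorted(set(shared_map) - set(view))
        print(f"{member} learns about: {gaps}")
    # Every member discovers at least one step missing from their own view,
    # the gap realization that the process-mapping literature describes.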

Roles and responsibilities are also highlighted in the structuring of the process map, and team members become more aware of the group contributors. This validates the process, as groups often show difficulty identifying accurately who has the knowledge most relevant to the problem at hand (e.g., Serfaty, Entin, & Johnston, 1998). “Process mapping is additionally beneficial because it facilitates information sharing by guiding the transfer of information that takes place during group discussion” (Fiore & Schooler, 2004, p. 143). The team can contribute idiosyncratically and synergistically, thus improving the value of the information brought to the map. In execution, problem-solving teams have found that process-mapping sessions elicit better understanding of member roles, and the natural outgrowth of this understanding is respect (Loew & Hurley, 1995).

Further, in line with our earlier description of macrocognition in teams and the role of cognitive artifacts, Fiore and Schooler (2004) elaborated upon the value of external representations with process mapping. They suggested that “the degree the team-task requires the construction of a shared understanding, external representational tools can act as a scaffolding to facilitate the building of that shared representation” (Fiore & Schooler, 2004, p. 134). The externalizations, that is, the process maps, are tangible artifacts that embody the team’s conceptualization of the problem. They suggest that these artifacts provide the means through which collaborators can visually articulate abstract concepts. In support of team cognition, the members are able to “manipulate these task artifacts as the problem solving process proceeds [and they] act as a scaffolding with which the team can construct a truly shared, and concrete, depiction of the process problem” (p. 144).

Thus, more important than the mutual understanding of skills and roles is the collaborative conceptualization of problems. The expression of knowledge helps the team experience greater clarity, and it promotes and supports a shared problem model. Fiore and Schooler (2004) essentially argue that it is the externalization that helps to mediate the team’s cognitive and collaborative process. But the shared understanding does not just appear during construction of the map, or it would be as simple as fitting pieces of a puzzle together. Instead, metaphorically, many of the puzzle pieces actually need reshaping or, if an assumption is entirely inaccurate, the piece is thrown away. The team members must negotiate and experience a certain level of flexibility as they work to express, reshape, and incorporate ideas (Levine et al., 1993). This approach (p. 210) suggests that a forced negotiation for construction of the map itself develops into the shared mental model that said map represents. “By diagramming the entire flow such that interconnections are clear and all repercussions are noted, process mapping provides a means with which to accurately articulate complicated processes and can overcome limitations normally experienced when teams deal with complex problems” (Fiore & Schooler, 2004, p. 145). Furthermore, later problem-solving stages benefit from the process map because the problem-solving team disposes of inaccurate conceptions and workflow redundancies. As the individual mental models move toward a shared understanding, the initial problem representation (referred to as the “as is” map) reflects the whole group, but from there an idealized map, a picture of what should be, can then be generated (Mason, 1997). From the standpoint of team cognition, process mapping illustrates how teams can find solutions by collaborating to identify, analyze, and accurately conceptualize the problem.

In sum, Fiore and Schooler (2004) approached process mapping as an effective tool for developing shared problem models, and as a tool that provides a number of predictive and evaluative benefits to team cognition. They presented it as a representation of mental models themselves, noting that process maps capture the mental constructions of team members and the visionary constructs of a task as it should be. Beyond that, it shapes the way in which team members think. Specifically, the very creation of the map forces a more clearly shared understanding of both team and task. In this construction, the teams develop a shared problem model for the task by facilitating communication among contributors, and examination of factors within the problem environment produces a shared awareness of the process problem. We further suggest that it provides an important illustration of macrocognition in teams, connecting traditional team cognition theory with the value of externalized cognition in the service of building knowledge to solve problems (cf. Fiore, Elias, et al., 2010).

Future Directions in Cognitive Engineering for Teams

In this final section, we provide a set of questions distilled from our prior discussion. Our goal is to guide cognitive engineering in the context of team cognition. This list is not meant to be exhaustive. Rather, it is meant to be representative of the types of issues that arise when cognition and technology merge. Similarly, they illustrate well the necessity for interdisciplinary collaborations cutting across cognitive science, psychology, engineering, and computer science. Further, they are meant to ensure that the human is central in the design of team cognitive technologies.

  •  How can collaborative systems be engineered to support distributed teams? A key problem that cognitive engineers should continue to examine is the challenge of distributed teams. While programs and tools have been developed to address the challenges of distributed interaction (visual and audio interfaces for globally distributed teams, real-time text-based conversational tools), there are gaps in our ability to support the more socioemotional aspects of teams. This includes essential team needs, like trust, joint decision making, and empathy. The characteristics of effective teams should be brought to the forefront of the engineering tasks, just as the individual needs for attention and prior knowledge are part of the design considerations for one human in a system. It becomes a matter of shifting from the notion that teams are only multiple humans to a decisive recognition that teams are distinct socio-cognitive entities that contain multiple humans.
  •  What technologies are necessary to scaffold the more embodied and enactive components of team cognition? In recent theorizing, we have argued for the notion of embodied cognitive fidelity (ECF) as a construct that captures the emergence of socio-cognitive factors at both individual and team levels (Bockelman et al., 2011). ECF is a form of fidelity “which captures the dynamic, embodied, enactive, and distributed nature of collaborative cognition that is situated within physical and social environments” (p. 1507). As a complement to our above point, we suggest that cognitive engineering must work to develop interaction systems that simulate interaction and prioritize these aspects of cognition (cf. Walmsley, 2008). In particular, on micro- and macrocognitive levels, the confluence of these factors produces the type of social intelligence that enables team effectiveness. Collaboration technologies for distributed cognition must enable the type of social cuing and implicit communication processes that foster collaboration and the development of shared awareness and knowledge within teams.
  •  How can cognitive engineering collaborations produce visualization technologies that scaffold team cognition dealing with more abstract problems? As we have shown in this chapter, research into external (p. 211) problem representations illustrates an important interplay between person and visualizations in problem solving. But we have primarily discussed concrete tasks more readily lending themselves to visualization. Further research is needed in how externalization of reasoning processes can be developed. For example, early work in this area documented the efficacy of diagrammatic presentation to facilitate argument construction (Stenning & Oberlander, 1995; Suthers & Hundhausen, 2001). Others showed how imagery in collaborative problem solving facilitated the generation of alternative interpretations (Grabowski, Litynski, & Wallace, 1997). More recently, cognitive science has explored decision support systems in the service of clinical reasoning and problem solving (e.g., Lu & Lajoie, 2008). But there are many such collaborative tasks rife with abstractions, uncertainties, and complexities (e.g., Balakrishnon, Kiesler, & Fussell, 2008). We suggest that cognitive engineering more fully explore how to develop technologies for visualization. Such technologies need to support collaboration through the development of artifacts that mediate collaborative cognition dealing with both concrete and abstract issues.
  •  How can team cognition support the development of human-robotic agents? One area where engineers can practically apply their skills and methods to team cognition is in the development of productive human-robot teams (Hoffman & Breazeal, 2004). Here we can find the interconnected tiers of cognitive engineering from the intelligent and autonomous robot agent, to interactive systems, to team cognition and beyond. In short, a significant challenge with technologically based “teammates” is understanding the subtle forms of interaction that emerge when humans collaborate with agents or even with robots (Goodrich & Schultz, 2007). This includes scaffolding how agents manage social engagement (Argall et al., 2009; Asada et al., 2009) and social cognitive factors like shared attention (Elias et al., 2011; Streater et al., 2011). From the standpoint of training humans how to engage in this new form of collaboration, the use of advanced capabilities in simulation may support this (Bockelman et al., 2011). This line of inquiry and development is particularly valuable, as it allows for advancement of the artificial agents in conjunction with deeper explorations into the nuances of social cognition.
  •  How might cognitive engineers use their experiences in teams to develop better team tools and products? Finally, cognitive engineering projects rarely, if ever, are taken on by individuals. The practitioners of cognitive engineering are starting with an experiential framework that could provide insight into the field of team cognition and could advance engineering as a whole. We suggest that an important starting point for addressing team cognition challenges is for cognitive engineers to begin looking at how their teams are already dealing with difficulties of an interdisciplinary field. How do neuroscientists, computer scientists, and psychologists develop shared mental models for the engineering tasks they address? From this meta-level approach of self-examination, cognitive engineers may be able to consider engineering and ergonomic solutions to help generate more intelligent team solutions.

Conclusion

In this chapter we have advocated for framing the discussion of team cognition as a distinct and interdisciplinary field developed to study complex collaborative processes. We first noted that, in the design and development of human-technology systems, it is imperative that we distinguish groups from teams. Although groups and teams involve multiple participants, designing for independent thought processes is drastically different from designing to support the natural and complex interdependent processes emerging in team contexts. Furthermore, we argued that coordination is at the core of team cognition, and human-centered technologies should keep this in the forefront of design concepts and frameworks.

We described how team cognition has evolved from early research in social and organizational psychology, which focused on the dynamics of interaction in context, to a broadly applied interdisciplinary science that explores complex cognitive dynamics where human and machine are intimately integrated into team performance. A review of the history of team cognition does not encourage the contemporary scientist to dismiss early theories and approaches; rather, in this field some insights into human cognition have matured while others have been replaced by new concepts. Nonetheless, as in the early days of psychology, the individual mind matters, but we now consider the contributions of the individual in conjunction with the contextual influences of the task, the team, and the technology and how these interrelate to create collective team (p. 212) models. This history has unfolded from the intimate connection between intelligent technologies and the study of intelligence itself.

Further, the development of team cognition has shifted focus from microcognition to macrocognition, informing understanding of coordination with externalized and distributed representations. To this point, we described the value of shared mental models and how these can be externalized and co-constructed in the service of collaborative problem solving. We discussed an example from process re-engineering that provided both means and metaphor for conceptualizing shared mental models through process mapping. Teams gain efficacy as they realize and eliminate inaccuracies associated with process problems. The emphasis here is not so much on the tool itself, but rather on the fact that a systematic approach to communicating and visualizing problem-solving tasks helps to move knowledge out of individual minds. Externalizing this knowledge helps team members to employ it to its fullest extent. For the engineer, this invites exploration of the ways that engineering teams approach design and of designs that support teams.

Across all of this was the connection between team cognition theory and human-technology integration. The increasing interaction and interdependence of humans and machines is reshaping cognitive science, psychology, engineering, and computer science. As the lines between human and machine become less clear, it becomes increasingly important to keep the human central in the design of technologies supporting teamwork and team cognition.

Acknowledgments

The writing of this chapter was partially supported by Grant SES-0915602 from the National Science Foundation and ONR MURI Grant #N000140610446 from the Office of Naval Research Collaboration and Knowledge Interoperability (CKI) Program. The views, opinions, and findings contained in this article are the authors’ and should not be construed as official or as reflecting the views of the University of Central Florida, the National Science Foundation, or the Office of Naval Research.

Patricia Bockelman Morrow, Cognitive Sciences Laboratory, University of Central Florida, Orlando, FL

Stephen M. Fiore holds a joint appointment with the University of Central Florida’s Cognitive Sciences Program in the Department of Philosophy and UCF’s Institute for Simulation and Training and Team Performance Laboratory. He earned his PhD (2000) in cognitive psychology from the University of Pittsburgh, Learning Research and Development Center. He maintains a multidisciplinary research interest that incorporates aspects of the cognitive, organizational, and computational sciences in the investigation of learning and performance in individuals and teams. He is Co‐editor of a recent volume on Distributed Learning as well as a volume on Team Cognition and he has published in the area of learning, memory, and problem solving at the individual and the group level. He has helped to secure and manage over US$6 million in research funding from organizations such as the National Science Foundation, the European Science Foundation, the Office of Naval Research, and the Air Force Office of Scientific Research.

What Is Attention? by Christopher Wickens

Abstract and Keywords

This chapter describes attention in cognitive engineering and in design through two metaphors: the filter, which selects incoming sensory information for perception, and the fuel, which supports all stages of information processing with a limited supply of resources and therefore limits multitasking. We describe applications of the filter to noticing events, alarm design, supervisory control and display layout, display integration, and visual search. We then consider two aspects of multi-task performance: when fuel is available to support concurrent processing, as predicted by a multiple resource model, and when task demands are sufficiently high as to force sequential processing, requiring consideration of task and interruption management strategies. Finally, we consider the role of mental workload in automation and situation awareness. Where relevant, the chapter highlights the role of computational models.

Keywords: attention; multi-tasking; interruption management; multiple resources; time-sharing; display integration; visual scanning; alarms; visual search

Fundamentals

What Is Attention?

Attention may be described as one of a fundamental set of limits on human performance (along with, for example, memory and control precision): a limit on the amount of information that can be processed per unit of time. Of use for the current chapter is the consideration of two metaphors of attention, as a filter and as a fuel (Kramer, Wiegmann, & Kirlik, 2007; Wickens & McCarley, 2008). As a filter, it describes the limits and constraints on the sensory systems (particularly the eyes and ears) to accept and process varying events and elements, up to the level of perception, where the meaning of those events is understood. Thus we conventionally describe the filter metaphor as selective attention. As a fuel, it describes the limits and constraints on all information processing operations—perception, working memory, decision, and action—to operate concurrently, whether in the service of a single task or in multitasking. That is, attention characterizes a sort of limited mental energy, fuel, or “resource” that facilitates performance of the relevant process. For example, as a worker “tries harder” to understand a difficult instruction, he or she may lose focus on monitoring other changing variables in the work environment. Thus we can apply the fuel metaphor to divided attention between tasks and processes.

Importantly, each of these metaphors can be further characterized by the extent to which the corresponding attention operation succeeds or fails. We speak, for example, of the success of the filter in guiding attention (often our eyes) to relevant sources of information or events in the world; we speak of failures of selective attention both as failures to notice those events at all and as distraction, when attention to important information is diverted to less important things. We speak of the “success” of divided attention when we can multitask effectively, doing two things at once as well as either (p. 37) alone. In contrast, failure of divided attention is a matter of degree, ranging from a small dual-task decrement in one or the other of two tasks to a complete abandonment of one of them, with its initiation postponed until the other is completed (serial task switching).

What Is Attention in Design?

At a fundamental level, we conceptualize design from a human factors standpoint as an engineering process whereby the balance between two measurable constructs, performance and workload, is optimized. This balance is complicated in two respects. First, “performance” is itself multifaceted; in many systems we consider both routine performance and performance in unexpected or “off-nominal” conditions (Burian, 2007; Wickens, Hooey, Gore, Sebok, & Koenicke, 2009). The former is typically the goal of design, but effective human response to off-nominal, unexpected conditions depends upon design that supports accurate situation awareness of the task (and the environment in which the task is being performed) (Burns et al., 2008; Wickens, 2000a). Such design may not necessarily help routine performance and may sometimes even compromise it. The second complication is that workload should not necessarily be minimized for optimal design, but must instead be kept within an intermediate range. This chapter addresses the role of attention in characterizing variables of performance, situation awareness, and workload.

Attention Allocation

As we discuss below, attention may be allocated at two different levels. At the highest level, we can speak of attention—the fuel—as allocated to tasks, as tasks may be defined by distinct semi-independent goals (Kirwan & Ainsworth, 1992). Thus the vehicle driver has the task of lane keeping, a second task of navigating, and a third task of dealing with in-vehicle technology (e.g., radio listening, cell phone conversation). Tasks are distinct in this sense in that they usually compete for attentional resources. At the lowest level, we can speak of attention—the filter—as allocated to elements within the environment as well as to internal cognition. Thus, in the vehicle example, the single task of navigation (and higher-level attention directed to the goal of successful navigation) may need to be accomplished by dividing or allocating visual attention (the filter) between a map and the search for landmarks and road signs outside; or between reading a navigation display, recalling the correct option, and placing the fingers on the correct key for menu choice; or between searching for the road signs and rehearsing the route number to be searched for. In our discussion below, we consider both levels of attention.

A Brief History: Single-Channel Theory and Automaticity

There are two concepts, single-channel processing and automaticity, that are fundamental to most findings and theories in attention, and indeed define endpoints on a sort of continuum from attentional failure to attentional success. Both are deeply rooted in the history of the study of attention (James, 1890; Titchener, 1908).

Single-channel theory (Craik, 1947; Welford, 1967; Pashler, 1998; Broadbent, 1958), the more pessimistic view of human attention, underlies the notion that attention can be focused on only one task at a time, as if performing one task so totally occupies the “single channel” of human cognition and information processing that any other task (usually one arriving later or deemed of lesser importance) must wait, unstarted, until the higher-priority task is completed. Its proponents have cited data in which people must perform two tasks of very high demands at once (like reacting in emergency to an unexpected roadway hazard while dialing a cell phone) or perform two tasks that compete for incompatible resources (like reading a paper document and reading a computer screen).

In stark contrast, the more optimistic view, automaticity (James, 1890; Schneider & Shiffrin, 1977; Fitts & Posner, 1963), defines circumstances when a task requires essentially no attention at all; if it has no attention demands, then ample attention (reserve resources) can be allocated to performing other tasks concurrently without decrement. Classic examples here include walking and talking, or driving (lane keeping) and listening to the radio. In both pairs, the first-mentioned task is so “easy” or automated that it requires little attention.

Figure 2.1 Three examples of the performance-resource function.

Single-channel behavior and the perfect time sharing invoked by automaticity of course represent two endpoints on a continuum that can best be defined by the degree of attentional resources necessary to obtain a given level of performance. Such a relation between resources and performance is described by the performance-resource function (PRF; Norman & Bobrow, 1975), three examples of which are shown in Figure 2.1. The graph line at the bottom (A) suggests a task that would invoke single-channel behavior, since full resources must be allocated to obtain (p. 38) perfect performance (or indeed any performance at all). The curve at the top (C) represents an automated task: perfect performance can be obtained with little or no attention. The graph in the middle (B) highlights the continuum between single-channel behavior and automaticity; performance improves up to a point as more resources are allocated to it, but it eventually reaches a level where “trying harder” will not improve performance.

Importantly, the transition from A → B → C can describe either an intrinsic change in the objective difficulty (complexity or demand value) of the task, or a change in its subjective difficulty as rendered across three levels of skill development (e.g., novice, journeyman, expert). Important also is the observation that tasks A and C may be performed at equivalent levels in single-task conditions; however, when a concurrent task is added, task A will suffer, but C will not.
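
The qualitative shapes in Figure 2.1 are easy to make concrete. As a minimal sketch (the functional forms and exponents below are hypothetical choices for illustration, not fitted PRFs), the three curves can be caricatured as follows:

```python
def prf_single_channel(r: float) -> float:
    # Curve A: almost no performance payoff until nearly full resources
    # are committed to the task.
    return r ** 6

def prf_intermediate(r: float) -> float:
    # Curve B: performance grows with invested resources, then plateaus;
    # beyond ~60% allocation, "trying harder" buys nothing further.
    return min(1.0, r / 0.6)

def prf_automatic(r: float) -> float:
    # Curve C: near-perfect performance from a very small investment.
    return r ** 0.1

for r in (0.1, 0.5, 0.9):  # fraction of resources allocated to the task
    print(f"r={r}: A={prf_single_channel(r):.2f}, "
          f"B={prf_intermediate(r):.2f}, C={prf_automatic(r):.2f}")
```

Note how tasks A and C produce comparable performance when full resources are available (r = 1.0), yet diverge sharply once a concurrent task claims part of the supply.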

In the following pages, we describe several general design issues relevant to attention (or attention issues that can be addressed by design)—the role of the filter in noticing, information access, and search; the role of both the filter and the fuel in information integration; the role of the fuel in multitasking that is both parallel and serial; the role of the fuel in mental workload prediction and measurement; and the relationship between workload, situation awareness, and automation. Within each section, we address, where relevant, certain validated computational models that can serve the engineering design community.

Noticing and Alerting

Selective attention as the filter can be seen to “tune” toward certain physical events in the environment, while filtering out others. Designers can capitalize on this by assuring that such tuning is focused on important events. Thus a critical design implication of attention is rendered by the attention-capturing properties of alarms and alerts that direct operators’ attention to events (and locations) that a designer (and sometimes automation) has deemed to be important. The fundamental basis of this approach lies in the fact that people are not very good monitors for infrequent or unexpected events if these are not highlighted in some way, a phenomenon recently described as change blindness (Carpenter, 2002; Rensink, 2002; Simons & Levin, 1997; St. John & Smallman, 2008; Wickens, Hooey, et al., 2009a) or inattentional blindness (Mack & Rock, 1998). The latter is a form of change blindness that occurs when a change is not noticed even when one is looking directly at it.

Alert Salience

Research has identified a number of features of warning systems that will capture attention by making events salient (Boot, Kramer, & Becic, 2007; Itti & Koch, 2000). For example, appearances of new “objects” in the scene will capture attention, and onsets (increases in luminance) will be more effective in attention capture than offsets (decreases in luminance or contrast, or disappearing objects; Yantis, 1993). Whether appearing or disappearing, the noticing or attention-capturing properties of these transients are much better when the visual contrast of the change is larger, when the signal/noise ratio is higher (less clutter around the change event location), when visual or cognitive load is lower, and when the events occur within or close to foveal vision rather than in the periphery (McCarley et al., 2009; Wickens et al., 2009; Steelman-Allen, McCarley, & Wickens, 2011; McKee & Nakayama, 1984). This loss in sensitivity with increasing eccentricity is estimated to be approximately 0.8% per degree (McCarley et al., 2009; Wickens, Alexander, et al., 2003). An extreme example of eccentric presentation is when the to-be-noticed event is not in the visual field at all when it occurs (e.g., the eye is closed in a blink, or the head is turned more than about 60 degrees away from the changing element). In these instances, referred to as “completed changes” (Rensink, 2002), the change is very hard to notice when fixation is restored to its location.
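
Taken at face value, the 0.8%-per-degree estimate supports a simple back-of-the-envelope calculation; the linear form below is an assumption for illustration (the cited studies fit richer models):

```python
def relative_sensitivity(eccentricity_deg: float,
                         loss_per_deg: float = 0.008) -> float:
    """Detection sensitivity relative to foveal viewing, assuming the
    ~0.8%/degree linear loss estimate quoted above (floored at zero)."""
    return max(0.0, 1.0 - loss_per_deg * eccentricity_deg)

print(relative_sensitivity(25.0))  # ~0.80: a 20% loss at 25 degrees eccentricity
```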

To some extent, the attention-capturing properties of the physical event (measurable, for example, by luminance contrast differences) are also modified by knowledge-driven or cognitive processes. One such process is expectancy. We will better notice events if they are expected (Wickens, Hooey, et al., 2009); for example, if the operator knows that a system is operating near its limits, he or she will more likely expect the warning that those limits have been exceeded, and therefore notice the alert when it appears, even if it is not in foveal vision. A second process (p. 39) is tuning, whereby people are able to “tune” their monitoring to certain event features, to enhance noticing when events contain those features (Most & Astur, 2007; Folk, Remington, & Johnston, 1992; Wolfe & Horowitz, 2004). An obvious case is when the tuned feature is location; people can tune their attention by simply directing their gaze toward the location where an alert is likely to be. But they can also tune attention to be receptive to certain features at a given location: for example, in most cockpit situations, attention is tuned to a red event (e.g., a red light onset) because of the high priority given to red as a warning.

The difference between the attention-capturing processes defined by physical elements in the environment (e.g., signal-noise ratio) and the attention-tuning processes defined by worker expectations illustrates the more general contrast between what are termed “bottom-up” and “top-down” influences on perception. A final, strong effect on attention capture or noticeability is the ongoing non-visual (auditory and cognitive) workload at the time an event occurs (Fougnie & Marois, 2007).

A computational model called N-SEEV (noticing—salience, effort, expectancy, value; Wickens, Hooey, et al., 2009; Steelman-Allen et al., 2011; Wickens, 2012) can be used to predict the likelihood of detecting an event as a combined function of its salience (Itti & Koch, 2000), expectancy, peripheral eccentricity (from foveal vision), and overall workload. However, in the workplace, as opposed to the laboratory, it is often challenging to determine what the eccentricity of a particular event at a given location may be, as the eyes can be scanning many different locations around the workplace. The SEEV model, the second component of the N-SEEV model, predicts the course of this workplace scanning as a context in which the event to be noticed (N) occurs. The SEEV model will be described in a later section.
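
The published N-SEEV model has its own calibrated parameters and scanning component; purely to illustrate the kind of combination it performs, a toy score might weight bottom-up salience and top-down expectancy and then discount by eccentricity and workload. All function names, weights, and discount forms below are hypothetical:

```python
def noticing_score(salience: float, expectancy: float,
                   eccentricity_deg: float, workload: float,
                   w_salience: float = 0.5, w_expectancy: float = 0.5) -> float:
    """Toy N-SEEV-style detectability score in [0, 1]. Salience and
    expectancy (each in [0, 1]) raise detectability; eccentricity and
    concurrent workload discount it. Not the published parameterization."""
    drive = w_salience * salience + w_expectancy * expectancy
    discount = max(0.0, 1.0 - 0.008 * eccentricity_deg) * (1.0 - 0.5 * workload)
    return drive * discount

# A salient but unexpected alert, 30 degrees peripheral, under high workload:
print(noticing_score(salience=0.9, expectancy=0.2,
                     eccentricity_deg=30.0, workload=0.8))
```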

Beyond the visual modality, there are of course differences in attention-capturing properties between modalities. Most critically, vision is hampered in noticing events in that only about 2 × 2 = 4 square degrees of a momentary visual field of around 60 × 60 degrees is occupied by foveal vision at any time; that is, only around 0.1%, and noticing degrades rapidly outside of this region. In contrast, events in either the auditory or the tactile modality are not much constrained by sensor orientation; they are said to be omni-directional, and so auditory (and more recently tactile) warnings have been validated as superior alerts. However, within these non-visual modalities, issues of bottom-up capture (signal-to-noise ratio) and of tuning or expectancy play the same role that they do in vision. As an example, auditory warnings may not be effective in noisy or conversation-rich environments, nor tactile alerts in an environment with extensive physical activity (e.g., a soldier crashing through heavy timber).

Nevertheless, a meta-analysis of noticing events within a visual workplace indicates that the auditory and tactile modalities are 15% more effective (faster, more accurate) in capturing attention than are visual interrupting events, even when the latter events are adjacent (in the best case) to the location of the ongoing visual tasks (Wickens, Prinet, et al., 2011; Lu, Wickens, et al., 2011; Sarter, this handbook).

Alert Reliability

Most alert systems are imperfect in their reliability. They are designed with algorithms that integrate raw physical data to infer an important or “danger” state (e.g., a malfunction, a fire, or a predicted collision), and if this integrated product exceeds a threshold, the alert activates. However, the raw data are often noisy, and in the case of predictive alerts, circumstances in the environment may change after the alert is given to make the forecast event less likely; the longer this span of prediction, the lower the reliability. As the obvious consequence, as described by Meyer (2001, 2004) and Meyer and Lee (this handbook), alerts can make one of two types of decision errors: deciding there is not a problem when there is (a “miss”) and deciding that there is a problem when there is not (a “false alert”). When considering the consequences of these two types of errors, most designers quite reasonably assume that misses (or delayed alerts) are worse than false alerts, and so they adjust the threshold lower, making false alarms more prevalent. When the FA rate increases, the system often produces the well-known “cry wolf” problem (Breznitz, 1983; Dixon, Wickens, & McCarley, 2007; Wickens, Rice, et al., 2009; Xiao et al., 2004), whereby operators may turn their attention away from the alerts when they occur and hence are more likely to respond late, or not at all, to true alerts.
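
The threshold trade-off described above is the standard signal-detection picture: because the evidence distributions for “safe” and “dangerous” states overlap, any single threshold converts one error type into the other. A minimal sketch, with hypothetical Gaussian evidence distributions:

```python
from statistics import NormalDist

safe = NormalDist(mu=0.0, sigma=1.0)    # evidence when all is well
danger = NormalDist(mu=2.0, sigma=1.0)  # evidence when danger is present

def alert_error_rates(threshold: float) -> tuple[float, float]:
    """Return (miss rate, false alarm rate) for a given alert threshold."""
    miss = danger.cdf(threshold)             # dangerous evidence falls below
    false_alarm = 1.0 - safe.cdf(threshold)  # safe evidence rises above
    return miss, false_alarm

for t in (0.5, 1.0, 1.5):
    m, fa = alert_error_rates(t)
    print(f"threshold={t}: miss={m:.2f}, false alarm={fa:.2f}")
```

Lowering the threshold from 1.5 to 0.5 cuts the miss rate from about 0.31 to 0.07 while raising the false alarm rate from about 0.07 to 0.31, which is exactly the designer's dilemma behind the cry-wolf problem.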

Alert Dependence: An Attentional Analysis in a Dual-Task Environment

The effect of alarm reliability can be placed within the broader context of the multitask environment in which alarms are most critical, and the consideration of two cognitive states and two aspects of attention with which those states are associated (p. 40) (Meyer, 2001, 2004; Meyer & Lee, this handbook; Dixon & Wickens, 2006; Maltz & Shinar, 2003). Thus, in most applications, a busy operator in a multitask environment (driving, flying, health care operations) is depending upon the automation to (1) alert him or her if there is a problem, but (2) be “silent” if all is well, so that he or she can comfortably turn full attention to the concurrent tasks and away from the domain of the alerted event. As Meyer describes, an operator who responds rapidly to the alerts when they occur is demonstrating compliance with the alert system; one who retains full attention to the concurrent tasks when the alert is silent is demonstrating reliance on the alerts. Thus the psychological constructs of compliance and reliance represent two independent aspects of operator dependence upon the alert system (Meyer & Lee, this handbook).

With regard to attention, when the overall reliability of the alert system degrades, both types of automation errors (misses or late alarms, and false alarms) may increase. However, a designer-imposed shift in the alert threshold can mitigate the rise in one at the expense of the other. In these cases, data suggest that a rise in false alert rate, with miss rate held constant, will cause a progressive loss in compliance. This “cry wolf” effect can be objectively measured by the response rate, by the response time (to address the alarm), and by a selective attention measure of the time it takes to look at (or switch attention to) the alerting domain (Wickens, Dixon, Goh, & Hammer, 2005). Conversely, an increase in miss rate, with FA rate more or less constant, will lead to a progressive loss in performance on the concurrent task, with lower reliance, as more attentional resources are reallocated continuously to monitoring the automated domain even when “all is well” (Wickens & Colcombe, 2007). This allocation is directly manifest as increased scanning to any visual display of “raw data” within the alerted domain (Wickens, Dixon, Goh, & Hammer, 2005). These human adjustments in response to failure event frequency may be described as optimal or “eutactic” (Moray & Inagaki, 2000), much as human signal detectors optimally adjust beta in response to signal frequency, as discussed in McCarley & Benjamin (this handbook).

The influences of false alert rate on compliance and miss rate on reliance are not entirely independent in two respects. First, if the threshold of an alert system with constant reliability is varied by the designer, it is obvious that reliance and compliance measures will change in opposite directions. Second, there is some evidence that increasing FA rate not only degrades compliance but will also degrade reliance (Dixon, Wickens, & McCarley, 2007; Dixon & Wickens, 2006), as if false alarms, being more salient and noticeable than misses, lead to an overall reduction in trust in (and therefore dependence on) the system. So, from the perspective of the impact on human performance in the multitask environment, it appears that FA-prone systems are more problematic than miss-prone (or late-alert-prone) systems. But of course, a full analysis of the appropriate balance between misses and false alarms in alert system design must take into account the primary issue of the costs of overall system misses versus false alerts (i.e., should both the human and the alert system miss the dangerous event).

Amplifying and mitigating the alarm false alarm (AFA) problem. Several factors amplify the AFA problem. First, for any given threshold setting, the lower the base rate of events, the greater will be the false alert rate, at least as measured by the proportion of alerts that are false. In some circumstances this can be as high as 0.90. Indeed, in one case (border monitoring for nuclear fuel), it reached 100% (Sanquist, Doctor, & Parasuraman, 2008).

Second, in environments with multiple independent alerts and low thresholds (e.g., the intensive care unit; Seagull et al., 2001), if the probability of a false alert in any given system is even modestly high, then the probability that a single alert somewhere within the total workspace will be false can be extraordinarily high. A recent study at a medical center revealed that the typical health care worker was exposed to approximately 850 alerts in a typical workday, many of them undoubtedly false; nurses in that setting experienced roughly 841 nuisance alerts per day. Kestin, Miller, and Lockhart (1988) estimated that in the typical operating room an alarm was triggered every 4.5 minutes.
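
Both amplifying factors follow from elementary probability, as a quick sketch with hypothetical numbers shows: a low base rate makes most alerts false even for a sensitive system (Bayes' rule), and many independent systems make some false alert a near-certainty:

```python
# Low base rate: what fraction of alerts are false? (Bayes' rule)
base_rate = 0.001   # dangerous events are rare
hit_rate = 0.99     # P(alert | event)
fa_rate = 0.05      # P(alert | no event)
p_alert = hit_rate * base_rate + fa_rate * (1 - base_rate)
p_false_given_alert = fa_rate * (1 - base_rate) / p_alert
print(f"P(false | alert) = {p_false_given_alert:.2f}")  # ~0.98

# Many independent alert systems: P(at least one false alert)
n_systems, p_false_each = 20, 0.10
print(f"P(any false alert) = {1 - (1 - p_false_each) ** n_systems:.2f}")  # ~0.88
```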

In such circumstances with multiple alarm systems, some of them more prone to false alarms than others, people tend to generalize across the population of all systems, distrusting the good as well as the bad (Keller & Rice, 2010).

Third, the problems with false alarms can obviously be amplified to the extent that the alerts themselves are annoying and intrusive. A visual alert that is false can be fairly effectively “filtered,” since, as we noted above, it is most salient only when it falls in the fovea. In contrast, the downside of the omni-directionality of auditory or tactile alerts is that the attentional filter cannot restrict their access. The increased annoyance accompanying such intrusive false alerts will increase the tendency of workers to deactivate them, or at least try to ignore them (Sorkin, 1989).

(p. 41) Finally, there is emerging evidence that people respond differently when false alerts are clearly “bad” (e.g., the user can obviously perceive that there is no danger) versus when they are “plausible” (e.g., a danger threshold was approached but not quite passed; Lees & Lee, 2007; Wickens, Rice, et al., 2009; see also Madhavan, Wiegmann, & Lacson, 2006). “Cry wolf” behavior is more likely in the former case than in the latter. However, in order for humans to determine that a false alarm is plausible, they must be able to monitor the “raw data” independently from, and in parallel with, the automated sensors.

The mitigating solutions for the AFA problem range from the highly intuitive to the less obvious, as we describe below.

  •  Increasing alerting system sensitivity in discriminating safe from dangerous conditions. Algorithms can often be improved, an approach taken over time in developing the airborne traffic alert system (TCAS) as designers responded to pilots’ complaints about the high false alarm rate (Rantanen, Wickens, Xu, & Thomas, 2004). An important question in this regard is how low such sensitivity (or reliability) can fall before an alerting system is no longer effective. One review of alerting studies indicated that with reliabilities above about 0.80 (the mean of FA and miss rates), performance of a human supported by an imperfect alerting system would be better than that of the unaided human operating in a multitask environment (where attentional resources were at a premium) (Wickens & Dixon, 2007).
  •  Instructing users about the inevitable necessity of some false alarms in uncertain environments, particularly when the event base rate is low. Such instructions can render false alerts more “forgivable,” particularly if they are not “bad” false alerts, as described above.
  •  Implementing context-sensitive mechanisms that raise the threshold during circumstances when the base rate is known to be quite low, and lower it when the base rate is higher (e.g., fire alerts during fire season versus rainy season).
  •  Providing the user with rapid (and ideally continuously available) access to the raw data in parallel with the automation. Hence, to the extent that false alerts are in the “plausible,” not the “bad,” category described above, such access will diminish cry-wolf problems. Indeed, in such a system with raw data access, the activation of the alert may actually reinforce the human’s own raw data monitoring behavior (if the human detected the pending event before the alert sounded), as well as confirm to the human that the system is in fact well functioning (albeit a little too sensitive). These characteristics appear to have mitigated the “alarm false alarm” issue in some segments of air traffic control (Wickens, Rice, et al., 2009).
  •  Developing “likelihood alarms,” in which the alert system itself expresses its own degree of uncertainty when events occur that are close to the threshold (Sorkin, Kantowitz, & Kantowitz, 1988; St. John & Manes, 2002; Wickens & Colcombe, 2007). Such uncertain-class events can then be associated with a physical sign (e.g., an amber signal) that is less urgent than that for “sure events” (e.g., red flashing) but more urgent than the sign of “all clear” (e.g., green, or no sign at all). Some evidence suggests that likelihood alerts provide better overall sensitivity than simple two-state (on-off) alerts; a minimal sketch of this idea appears after this list.
  •  Informative alerts. Many complaints about alerts are associated with frustration that, while informing that something has gone wrong, they say little about what is wrong and what to do about it. Such concerns, addressed by making the alerts more informative (e.g., voice alerts), lead us beyond their attention-capturing properties to consideration of the further information properties associated with alerts and other displays, the issue we turn to in the next section.
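
As a sketch of the likelihood-alarm idea in the list above, a continuous danger estimate can be mapped to a graded sign using two thresholds rather than one; the threshold values and labels here are hypothetical:

```python
def likelihood_alarm(danger: float,
                     sure_threshold: float = 0.8,
                     possible_threshold: float = 0.5) -> str:
    """Map a continuous danger estimate in [0, 1] to a graded alert:
    'sure' events get the most urgent sign, near-threshold events a
    less urgent caution, and everything else the all-clear."""
    if danger >= sure_threshold:
        return "red flashing"  # sure event
    if danger >= possible_threshold:
        return "amber"         # uncertain, near-threshold event
    return "green"             # all clear

print(likelihood_alarm(0.9), likelihood_alarm(0.6), likelihood_alarm(0.2))
```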

Attention & Attention Travel in Information Processing

Display Layout

Attention, both its filter and fuel capabilities, is particularly challenged in a spatially distributed workspace such as that confronted by the pilot, driver, health care worker, or process controller, where multiple sources of information must be processed as a basis for action and not simply monitored. Such processing may consist of multitasking (as when the driver examines a map while endeavoring to maintain some attention to the roadway), or it may consist of information integration, as when the pilot compares the map with the visual view of landmarks outside the airplane to assure that he or she is on the right track. In such circumstances, we see that attention must travel from place to place, an analog to physical travel, and that such travel is not effortless, particularly in a widely distributed visual workspace.

In these circumstances, designers often have an opportunity to “lay out” some aspects of the (p. 42) workspace to minimize net travel time, according to seven specific principles (Wickens, Vincow, Schopper, & Lincoln, 1997), as we describe in the following. The first two of these principles depend upon defining a “normal line of sight” (NLOS); that is, in a seated workspace, a line about 20 degrees below the horizon extending from the eyes (Sanders & McCormick, 1993). With regard to the point where the line intersects the workspace surface:

  1. The most important displays should be closer to the NLOS. (This applies particularly to displays whose changes are critical to be noticed in a timely fashion.)
  2. The most frequently used displays should be closest to the NLOS.
  3. Pairs (or N-tuples) of displays used for a single task (i.e., that must be integrated or compared and are therefore typically used in sequence) should be close together. In some cases this may involve database overlay, as when terrain and weather are superimposed in a pilot’s navigational map so that a safe route through both hazards can be planned (Kroft & Wickens, 2003).
  4. Displays related to a single class of information should be close together, or grouped. This will aid in visual search, as we will see below.
  5. Displays should be positioned close to the controls that affect those displays (display-control compatibility; Proctor & Proctor, 2006).

We note that, in particular, principles 2 (frequency of use) and 3 (relatedness) are designed to minimize total attention travel time. If this optimization is not followed, performance may be slower (since attention travel takes time); in the worst case, when attention travel is very effortful, a relevant display may not be visited at all.

Given the role of attention travel in display layout optimization, it is important to realize that travel cost (or information access cost) is not a linear function of distance, but instead has at least three components (see Wickens, 1993; Wickens & McCarley, 2008): (1) When displays are close together, so that the eye can scan from one to the other without head movement (within about 20 degrees), the cost is minimal and does not change with separation distance. (2) When the displays are separated by more than 20–30 degrees, head movements are required to move the eyes from one to the other, imposing not only a substantially increased cost, but one that grows with the distance (angle) of head movement. (3) Sometimes displays cannot be accessed by head movements alone but instead require body rotation (checking the blind spot in a car) or, increasingly, key presses or mouse movements to access a particular “page” in a menu or multifunction display. In the latter case, the “distance” of attention travel can be calculated in part by the number of key presses and in part by the cognitive complexity of menu navigation (e.g., number of options; Seidler & Wickens, 1992; Wickens & Seidler, 1997). Greater information access cost can not only impose direct time costs but also inhibit information retrieval (Gray & Fu, 2004) and may alter the overall strategy and accuracy of task performance (Morgan, Patrick, et al., 2009).
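To make the shape of this cost function concrete, the following minimal sketch (an editorial illustration, not taken from the chapter; all thresholds and unit costs are hypothetical placeholders) expresses the three regimes as a piecewise function:

```python
def information_access_cost(separation_deg: float, key_presses: int = 0,
                            menu_complexity: float = 1.0) -> float:
    """Illustrative information access cost (arbitrary time units).

    Reflects the three regimes described in the text; all constants
    are hypothetical placeholders, not empirical values.
    """
    EYE_FIELD_DEG = 20.0       # eye movements alone suffice below ~20 degrees
    BASE_EYE_COST = 0.2        # small, roughly constant cost of an eye movement
    HEAD_COST_PER_DEG = 0.05   # cost grows with the angle of head movement
    KEY_PRESS_COST = 1.0       # each key press adds a discrete access cost

    if key_presses > 0:
        # Regime 3: access requires interaction (menus, pages); cost scales
        # with the number of presses and the complexity of menu navigation.
        return key_presses * KEY_PRESS_COST * menu_complexity
    if separation_deg <= EYE_FIELD_DEG:
        # Regime 1: within the eye field, cost is minimal and flat.
        return BASE_EYE_COST
    # Regime 2: head movement needed; cost grows with the excess angle.
    return BASE_EYE_COST + HEAD_COST_PER_DEG * (separation_deg - EYE_FIELD_DEG)
```

The discontinuity at the eye-field boundary, and the switch to interaction-based costs, capture why the function is decidedly non-linear in “distance.”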

An important question for designers to answer is what happens when principles “collide” or oppose each other. Suppose, for example, that frequency of use dictates that a particular display be close to the NLOS, but integration requires that the same display be close to another, which (for other reasons) has been positioned far from the NLOS. Which principle is more costly to violate? A study that addressed this question had pilots fly with eight different display layouts that either conformed to or violated each of three different principles: frequency of use, integration (sequence of use), and display-control compatibility (Andre & Wickens, 1992). The results revealed that the sequence-of-use principle (close positioning of displays to be integrated for the same task) dominated the frequency-of-use principle, as assessed by overall pilot performance. Both of these dominated display-control compatibility. The impact of these human performance weightings, coupled with others, has been represented in various display layout models summarized in Wickens, Vincow, et al. (1997), which integrate the elements that influence the efficiency of attention travel, as described above, to provide “figure of merit” estimates of display layout optimization (e.g., Fowler, Williams, Fowler, & Young, 1968).
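The weighted-cost logic behind such figure-of-merit models can be sketched in a few lines. This is a hypothetical illustration, not any published model: the weights below merely respect the dominance ordering found by Andre and Wickens (1992), and the violation scores are assumed inputs rather than measured quantities.

```python
# Hypothetical weights reflecting the empirical ordering:
# sequence of use > frequency of use > display-control compatibility.
PRINCIPLE_WEIGHTS = {
    "sequence_of_use": 3.0,
    "frequency_of_use": 2.0,
    "display_control_compatibility": 1.0,
}

def figure_of_merit(violations: dict) -> float:
    """Score a candidate layout; higher is better.

    `violations` maps each principle to a score in [0, 1]
    (0 = fully respected, 1 = maximally violated).
    """
    penalty = sum(PRINCIPLE_WEIGHTS[p] * violations.get(p, 0.0)
                  for p in PRINCIPLE_WEIGHTS)
    return 1.0 - penalty / sum(PRINCIPLE_WEIGHTS.values())

# A layout that honors sequence of use but scatters frequently used
# displays still scores reasonably well, mirroring the ordering above.
print(figure_of_merit({"sequence_of_use": 0.0,
                       "frequency_of_use": 0.8,
                       "display_control_compatibility": 0.3}))
```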

There are two additional attention-guided principles that can be applied to display layout: A principle of (6) consistency dictates that displays should remain in the same consistent location so that they can always be found (selective attention directed there) with minimal interference. Adhering to this principle will not only lead to standardization of layouts across different systems (e.g., aircraft instrument panels always adhere to the basic “T” formation for locating four critical instruments), but adherence will also provide a resistant force against flexible reconfigurable display layouts, where designers may choose to reposition displays as a function of work phase (e.g., phase (p. 43) of flight, or normal vs. abnormal operations), or workers may be given the option of moving displays according to their preference. While such flexibility provides some advantages, these may be offset by the lack of consistency (Andre & Wickens, 1992).

A principle of (7) clutter avoidance resists the forces to either put too many displays in a workspace or, in adhering to frequency of use, place all displays tightly clustered or even overlapping. Close proximity achieved by minimizing spatial separation will create clutter—difficulty of focusing attention on individual elements—whenever the spatial separation is less than around 1 degree of visual angle (Broadbent, 1982), and particularly when the elements overlap or are overlaid, as in a head-up display (HUD), a map with text labels overlaying ground features, or an overlaid ATC map (Wickens, 2000b).

Head-up displays and head-mounted displays superimpose instruments over an important forward view. The benefit (not having to move the eyes between the instruments and the forward view) is partially offset by the clutter costs of closely placed information (Wickens, Ververs, & Fadden, 2004). We note here that a special case of close spatial proximity for information to be integrated is represented by geographical database overlay; for example, a map of terrain and weather for an aircraft pilot. When the two databases must be integrated (e.g., to find a safe path avoiding both terrain and weather), the close proximity (zero distance) of an overlay provides better performance than a side-by-side presentation of each, despite the greater clutter of the overlay (Kroft & Wickens, 2003; Wickens, 2000b).

The Proximity Compatibility Principle

The theoretical basis for the particular advantage of close proximity displays for information that needs to be integrated (principle 3) lies in the multitasking required as the human must retain (often by rehearsal) information from a first-accessed source, while attention travels to the second source for it to be accessed and then compared or combined. At a minimum, the time for travel will degrade memory for the first source. However, if locating the second source requires some search through a cluttered field or (worse yet) accessing another screen via a key press or turning a page, then the mental effort of such access will compete with the retention. This principle, that information that must be integrated in the mind (close mental proximity) should also be close together on a display (close physical proximity), is referred to as the proximity compatibility principle (Wickens & Carswell, 1995; Wickens & McCarley, 2008) and will be addressed further below.
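The principle lends itself to a simple mechanical check. The sketch below is hypothetical (the function, display names, and threshold are illustrative only, not a published tool); it flags display pairs whose required mental integration is high but whose physical separation is large:

```python
def pcp_violations(mental_proximity: dict, physical_distance: dict,
                   threshold: float = 0.5) -> list:
    """Return display pairs that violate proximity compatibility.

    Both dicts map an (a, b) pair of display names to a value
    normalized to [0, 1]; the threshold is an arbitrary cutoff.
    """
    return [pair for pair, mental in mental_proximity.items()
            if mental > threshold and physical_distance[pair] > threshold]

# Terrain and weather must be mentally integrated but sit far apart,
# so that pair is flagged; the clock/weather pair is not.
print(pcp_violations(
    {("terrain", "weather"): 0.9, ("clock", "weather"): 0.1},
    {("terrain", "weather"): 0.8, ("clock", "weather"): 0.9}))
```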

The SEEV Model of Visual Attention Travel

Attention travel across displays and visual workspaces requires eye movements. While in reading text these movements are relatively linear and systematic, in monitoring multi-element displays to supervise dynamic systems, like those of the anesthesiologist, pilot, driver, or process control supervisor, scan paths will be much less predictable. Assisting these predictions is the SEEV model, which was introduced in the previous section in the context of the noticing-SEEV (N-SEEV) model of event detection. SEEV predicts steady-state scanning around the workspace before the event to be noticed occurs. The integration of its four components—S = salience, E = effort, E = expectancy, and V = value—is based on the prior modeling of Senders (1964, 1980), Sheridan (1970), and Moray (1986), and these are combined additively to predict the distribution of fixation locations. Then, when the to-be-noticed event (TBNE) is scheduled to occur at a specific location in this workspace, SEEV will predict the distribution of eccentricities of that location from the fovea, which in turn predicts the likelihood of detection (diminishing with increasing eccentricity).
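Because the components combine additively, the steady-state prediction can be sketched compactly. The following is an editorial illustration of the additive form only, not the published computational model; all weights, component values, and area-of-interest names are hypothetical.

```python
def seev_dwell_proportions(aois: dict, weights: dict = None) -> dict:
    """Predict steady-state proportions of fixation time on each area
    of interest (AOI) from additive SEEV components (hypothetical values).
    """
    w = weights or {"S": 1.0, "EF": 1.0, "EX": 1.0, "V": 1.0}
    raw = {}
    for name, c in aois.items():
        # Salience, expectancy, and value attract attention to an AOI;
        # the effort of traveling to it inhibits attention.
        score = (w["S"] * c["salience"] - w["EF"] * c["effort"]
                 + w["EX"] * c["expectancy"] + w["V"] * c["value"])
        raw[name] = max(score, 0.0)   # a share of attention cannot be negative
    total = sum(raw.values()) or 1.0
    return {name: s / total for name, s in raw.items()}

# Hypothetical cockpit example: an outside view and two head-down displays.
print(seev_dwell_proportions({
    "outside_world": {"salience": 0.6, "effort": 0.1, "expectancy": 0.7, "value": 0.9},
    "nav_display":   {"salience": 0.4, "effort": 0.3, "expectancy": 0.5, "value": 0.6},
    "engine_gauges": {"salience": 0.2, "effort": 0.4, "expectancy": 0.2, "value": 0.5},
}))
```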

The SEEV model has been validated to predict the percentage of time looking at different areas of interest or displays with 80%–90% validity, in workspaces ranging from the live surgical operating table (Koh, Park, Wickens, Teng, & Chia, 2011) to simulations of vehicle driving (Horrey, Wickens, & Consalus, 2006) to both the conventional cockpit (Wickens, Goh, et al., 2003) and the more automated cockpit (Wickens, McCarley, et al., 2008; Steelman-Allen et al., 2011). As noted above, when N is added to SEEV, SEEV then provides the context for predicting the eccentricity of the TBNE. N-SEEV has been able to predict pilot detection of a variety of unexpected events both within and outside the cockpit with reasonably high accuracy (r = 0.75; Steelman-Allen et al., 2011; Wickens, 2012; Wickens, Hooey, et al., 2009).

The SEEV model predicts how attention is actually allocated across displays. Without the unwanted influence of salience and effort, how attention SHOULD be allocated across displays is defined purely by expectancy (frequency of use and frequency of sequential use) and value. These parameters have been combined in several (p. 44) computational models of display layout, as discussed above (see Wickens, Vincow, et al., 1997, for review of these).

Display Integration

Design Principles


Figure 2.2 Creating proximity in an air traffic display via linking and color.

As noted in the previous section, simply moving displays close together to reduce information access cost can create clutter. There are other means of creating closeness or “proximity” between two or more display elements and hence aid the movement of attention between them, techniques that can loosely be referred to as “display integration.” Many of these are incorporated within the proximity compatibility principle introduced above (see also Wickens & McCarley, 2008). Thus, when spatial proximity cannot be achieved for two elements that are to be integrated (as, for example, when comparing two elements on a map whose coordinates are fixed), the following two techniques can be employed:

  •  Linking, by constructing a physical line between the two, such as a line connecting two points on a line graph. Attention can be said to “follow the line,” just as following a road between two geographical locations facilitates the travel from one to another (Jolicoeur & Ingleton, 1991).
  •  Common color, by rendering the related elements in the same distinct hue, possibly combined with linking. Consider the air traffic control display shown schematically in Figure 2.2, in which planes A and D are at the same altitude and on a collision course. Clearly the controller must mentally integrate the trajectories of the two to determine where and when this collision might take place. Having automation construct a graphic link between them and illuminate them in a distinct color (e.g., red) will facilitate this mental integration, computing the anticipated point, time, and separation of closest passage.

Besides spatial proximity, linkage, and color, a fourth technique of display integration involves moving two elements so close together that they essentially “fuse” into a single object, a technique known as object integration. For example, a single data point on a correlation plot represents two elements, an X and a Y value (Goettl, Wickens, & Kramer, 1991). The “artificial horizon” on a pilot’s attitude display represents pitch and roll by a single line that can rotate and translate. A single icon object on a weather map may contain several attributes of information. One advantage of object integration, supported by a great deal of research on attention (e.g., Treisman, 1986; Carswell & Wickens, 1996; Duncan, 1984; Scholl, 2001), is that all attributes of a single object are processed more or less in parallel, whereas two separate objects are more likely to be processed in series; hence divided attention between two attributes of a single object display is more efficient than divided attention between two objects.

A fifth technique for display integration, and one that sometimes accompanies object integration, is the creation of emergent features (Pomerantz & Pristach, 1989; Bennett & Flach, 2011). This results when multiple elements of a given display “configure” to create a new feature that is not inherent in any of the objects themselves. For example, four bar graphs (e.g., representing engine temperature on four systems) that are all aligned to the same baseline will present an emergent feature of “equality,” which is the co-linearity of their tops, when all are at the same level. Such emergent features can greatly benefit performance to the extent that the feature itself “maps” directly to a critical integration quantity necessary for monitoring and control (Bennett & Flach, 1992; Bennett & Flach, this handbook; Peebles, 2008). If the features are perceptually salient (like the co-linearity above or the symmetrical appearance of certain geometric objects), then direct perception can allow the integration to be achieved without imposing extensive cognitive effort (Vicente, 2002).

Note that object displays are associated with emergent features because the formation of an object from dimensions, like the length, height, and width of the sides and top of a rectangle, will almost always create emergent features (like the size and shape of the rectangle) that would not exist were the dimensions presented in isolation from each other (e.g., as separate bar graphs; Barnett & Wickens, 1988). However, we also note that if the emergent features of the object are not mapped to critical integration task parameters, such object (p. 45) integration may be of no benefit, and other means of configuring the individual variables may provide better emergent features.

Display Proximity and Clutter

As we have noted above, close proximity achieved by minimizing spatial separation will create clutter. This is one distinct advantage of object integration: Two (or more) attributes of a single object are processed in parallel and hence are unlikely to interfere with each other’s processing, in contrast to two separate objects occupying the same space (e.g., overlay). Various computational models of clutter have been proposed (e.g., Rosenholtz, Li, & Nakano, 2007; Beck, Lohrenz, & Trafton, 2010).

Extensions of Proximity Compatibility and Object Integration

Two important design concepts related to proximity compatibility are those of visual momentum (Woods, 1984; Aretz, 1991; Wickens & McCarley, 2008; Bennett & Flach, 2012) and ecological interface displays (Vicente, 2002; Burns & Hajdukiewicz, 2004; Burns et al., 2008). Both have, at their core, the goal of fluently moving attention across complex multi-element workspaces in order to facilitate integration and comparison. Visual momentum is a technique designed to facilitate mental integration of two or more different “views” of a single spatial area or network. For example, one technique of visual momentum would involve presenting a global view of the full workspace, alongside a more localized zoom-in view, with the region of the local view highlighted in the global view (Aretz, 1991; Olmos, Liang, & Wickens, 1997; Tang, 2001). Such highlighting allows rapid movement of attention between the two views. A second technique is continuous “panning” rather than abrupt switching between two views of the same region, but from different orientations (Hollands et al., 2008). Visual momentum concepts are particularly valuable when visualizing complex information (Robertson, Czerwinski, et al., 2009; Wickens, Hollands, Banbury, & Parasuraman, 2012).

The concept of an ecological interface is more complex, and space here does not allow much coverage except to note that for very complex systems like power plants, industrial process control, or human physiology, there are ways of presenting the multiple variables such that they directly signal certain critical constraints of the environment or “ecology” that they represent (Burns & Hajdukiewicz, 2004; Burns et al., 2008); not surprisingly, many of these “ways” capitalize on emergent features and configural displays to graphically represent constraints and boundary conditions in the system (e.g., the balance between mass and energy, or between inflow and outflow, which characterizes stability). Such ecological displays are often found to be most beneficial in fault management, a particular situation when variables must be integrated in new and different ways to diagnose the source of a fault and project its implications for system safety and productivity (Burns, this handbook; Vicente, 2002; Burns & Hajdukiewicz, 2004).

Visual Search

Visual search is a selective attention function, similar to both noticing and supervisory sampling. However, unlike noticing, search is more goal directed toward finding a predetermined target. In doing so, attention (often coupled with the eyes) usually moves sequentially until the target is found or a decision is made that it is not present (Drury, 2006; Wickens & McCarley, 2008). Search is a key component in many industrial inspection tasks (Drury, 1990, 2006). Thus the primary cognitive demands associated with search precede locating the object, whereas the primary task in noticing typically follows the triggering event. That said, many variables affect both tasks in the same way: Both usually involve eye movements (when noticing involves a visual event), both are inhibited by a cluttered background and cognitive workload, and both are improved when the target (in search) or the TBNE (in noticing) is salient (flashing, high-contrast, moving, etc.). Importantly, for a given level of salience, a target is more likely to be found in a search task than noticed in a noticing task. This difference reflects the added top-down influence of the goal direction of the search task; the search is “tuned” to certain target properties. Both tasks are also influenced by top-down expectancy in other ways. In search, there are two sources of expectancy: Expectancy for target location influences where we look first, and expectancy of whether there is a target at all influences how long we continue a search when the target has not been found (Wolfe, Horowitz, & Kenner, 2005; Drury & Chi, 1995).

From a design perspective, long, tedious searches can have two detrimental influences. First, they can sacrifice worker efficiency, as, for example, when a computer service worker must spend several seconds searching for a target on a screen, repeating the operation hundreds of times over a workday. (p. 46) In these circumstances, even milliseconds of added search delay can accumulate large costs (Gray & Boehm-Davis, 2000). Second, they can inhibit safety, particularly in vehicle control, when long head-down searches (e.g., for a destination on an electronic map) leave the driver exposed to roadway hazards (Wickens & Horrey, 2009). In another example, analysts computed that long search time on a railway traffic map spelled the difference between safety and a fatal railway crash: Dispatchers spent 18 precious seconds attempting to locate the train that had triggered a flashing collision alert (Stanton & Baber, 2008), the difference between commanding a braking action in time and commanding it too late.

Improving Search

In response to concerns such as those described above, a number of attention principles speak to ways that search can be improved. Some of these solutions include:

  •  Target enhancement. In some circumstances, simple solutions like improving workplace lighting can increase the discriminability between targets and non-targets, a definite advantage when the targets themselves are subtle (like cracks in the hull of an aircraft; Drury, Spencer, & Schurman, 1997).
  •  Signal-noise enhancement. Creative solutions can identify ways to differentially amplify the target over the non-targets. For example, if targets are identified by different depths in a three-dimensional display, then providing the user with the ability to change the viewpoint on that display will produce differential motion of targets vs. non-targets (Drury et al., 2001; Drury, 2006).
  •  Selective highlighting. To the extent that the searcher (or another agent) can define features possessed by the target, display technology can then artificially enhance all elements possessing those features—for example, by painting them a different color or increasing their intensity. Thus, for example, in air traffic control, all aircraft flying at a common altitude may be highlighted as particularly relevant because they are more likely to be on a collision course than those at different altitudes (Remington, Johnson, Ruthruff, Gold, & Romera, 2001). Of course, such attention-guidance automation imposes the danger that it could be less than fully reliable (Yeh & Wickens, 2001a; Yeh, Merlo, Wickens, & Brandenburg, 2003; Fisher & Tan, 1989; Metzger & Parasuraman, 2005). For example, highlighting could be imposed on an element that is not a target, or, more seriously, it could fail to highlight one that is. (These two classes of highlighting errors parallel the two classes of alerting errors discussed previously.) Studies of highlighting validity indicate that people naturally tend to search the highlighted items first (Fisher, Coury, Tengs, & Duffy, 1989), and if there is uncertainty as to whether a target is present or not, people may truncate the search if they fail to find it in the highlighted subset. This behavior will lead to a miss whenever the target is not among the highlighted items.
  •  Search field organization. In many search fields (e.g., a computer screen), it is possible to impose an organization on the elements to be searched: a linear list or grid. Such organization aids search in two respects. It can help people keep track of examined and not-yet-examined items without excessive burden on memory. It also gives designers the opportunity to place the items most likely to be the target of search near the top (for example, the most frequently used items in a computer menu), given the tendency for people to search from top to bottom.
  •  Search instructions and target expectancy. As noted, the expectancy of whether a target is present or not can influence the amount of effort spent on continuing the search when a target is not yet found. Search shows a clear speed-accuracy trade-off, such that longer searches are more likely to turn up a target (Drury, 1994). On the one hand, instructions that emphasize the value of finding the target will produce greater success (but longer search times; Barclay, Vicari, Doughty, Johanson, & Greenlaw, 2006). On the other hand, a low target expectancy will more likely produce premature termination, leading to a miss (Wolfe et al., 2005). Furthermore, when there may be multiple targets (such as malignant nodules in an x-ray), instructions can counter the tendency to stop the search after a first target is found and instead impose an exhaustive search (Barclay et al., 2006).

Modeling Search: The Serial Self-Terminating Model

The serial self-terminating search (SSTS) model proposed by Sternberg (1966), based on data from Neisser (1963), describes how attention searches a field of non-targets sometimes containing a target. The (p. 47) model predicts the time to locate the target or, if it is not present, to decide that it is not. The model assumes that each non-target element is inspected in series, requiring a constant time (T) to decide that each is not the target, until the target is reached and a response is made. Thus the search is self-terminated. When the target is not present, all items must be inspected. When the target is present, on average half the items will be inspected. Hence, search time as a function of the size of the search field (N) is NT when the target is absent and NT/2 when it is present (slopes of T and T/2, respectively). Various versions of search models have borrowed from the basic elements of the SSTS model (Drury, 1994; Drury et al., 2001; Teichner & Mocharnuk, 1979; Yeh & Wickens, 2001b; Fisher et al., 1989; Fisher & Tan, 1989; Beck et al., 2010; Nunes, Wickens, & Yin, 2006).
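The model’s core predictions are easily computed. In this minimal sketch, T and the field size N are free parameters; the target-present expectation of NT/2 follows because, on average, half the items are inspected before the target is reached.

```python
def ssts_expected_time(n_items: int, t_per_item: float,
                       target_present: bool) -> float:
    """Expected search time under the serial self-terminating search model.

    Each item is inspected in series at a constant cost T per item.
    Target absent: all N items are checked (N * T).
    Target present: on average half are checked first (N * T / 2).
    """
    if target_present:
        return n_items * t_per_item / 2.0
    return n_items * t_per_item

# Search time grows linearly with field size, with the target-absent
# slope twice the target-present slope (T vs. T/2).
for n in (10, 20, 40):
    print(n, ssts_expected_time(n, 0.05, True), ssts_expected_time(n, 0.05, False))
```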

Several modifications and elaborations of this model can be made. For example, if the target is more confusable with the non-targets, T will increase (hence increasing the slope; Geisler & Chou, 1995). If the target is defined by a single salient feature (e.g., red in a sea of green), the slope is essentially 0, describing a parallel search process (all items inspected at once). Wolfe (1994, 2007; Wolfe & Horowitz, 2004) has proposed a “guided search” model by which initially several non-targets in the search field can be immediately filtered out (i.e., in parallel), but search through the remainder is serial. This approach has been applied to modeling the benefits of highlighting certain key elements of the search field that are assumed to be most relevant, as discussed above (Fisher, Coury, et al., 1989; Beck et al., 2010; Nunes et al., 2006; Yeh & Wickens, 2001b; Wickens, Alexander, et al., 2004).
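The guided-search idea, and the highlighting-miss behavior described earlier, can be illustrated with the same machinery: a parallel stage filters the field down to a candidate subset, serial search proceeds through that subset first, and a searcher who truncates after exhausting the subset misses any unhighlighted target. This sketch is hypothetical and assumes the target, when present, is somewhere in the field.

```python
def guided_search_time(n_highlighted: int, n_rest: int, t_per_item: float,
                       target_in_highlighted: bool,
                       truncate_after_subset: bool = False):
    """Illustrative guided-search timing. Returns (expected_time, found)."""
    if target_in_highlighted:
        # On average, half the highlighted subset precedes the target.
        return n_highlighted * t_per_item / 2.0, True
    subset_time = n_highlighted * t_per_item   # subset searched exhaustively
    if truncate_after_subset:
        return subset_time, False              # premature termination: a miss
    # Otherwise continue serially through the unhighlighted remainder,
    # where on average half the items precede the target.
    return subset_time + n_rest * t_per_item / 2.0, True
```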

Attention to Tasks: Multiple Resources

When two tasks must be performed within a narrow window of time, there are two qualitatively different ways in which this can be managed: They can be time-shared, wherein the performance of each task is ongoing concurrently, as when listening to a cell phone while driving (Regan, Lee, & Young, 2011; Wickens, Hollands et al., 2012). This is divided attention between tasks. Alternatively, they can be performed in sequence, as when a driver stops the car before answering the cell phone call. Each situation has very different implications and different sorts of processing operations underlying the success and failure of multitasking, so we consider each in turn.

Concurrent Task Performance: Multiple Resources

According to one prominent theory of multitasking, the multiple resource theory (Navon & Gopher, 1979; Wickens, 1980, 1984, 2002, 2005, 2008a), there are three fundamental elements dictating how well a given task will be performed concurrently with another. First, most intuitively, the difficulty or attentional resource demand of both tasks will influence time sharing. Easier tasks (those of lower mental workload, or greater automaticity) will be time shared more effectively (Kahneman, 1973).

Second, a greater degree of shared versus separate resources within the human’s information processing structure will increase interference. Wickens (2002) has developed a conception of what those separate resources might be in a way that is consistent with neurophysiological data (Just et al., 2001). For design purposes, these can be broken down in terms of four dichotomies, with “different resources” defined by the two levels of each dichotomy, as follows:

  •  processing stages—perceptual-cognitive (working memory) versus response selection and execution of action
  •  processing codes—spatial versus verbal/linguistic
  •  processing modalities (within perception)—visual versus auditory (and there is now emerging evidence that the tactile channel defines a third perceptual resource category; Lu, Sarter, & Wickens, 2011)
  •  visual channels (within visual modality)—focal (object recognition) versus ambient (motion processing) vision (Previc, 1998, 2000)

Accordingly, as a design and analysis tool (Wickens, 2002, 2005; Wickens, Bagnall, Gosakan, & Walters, 2011), a given task may be defined by levels within one or more of the four dimensions. The interference between two tasks can then be partially predicted by the number of dimensions on which their demands share common levels. This prediction of dual-task interference is then augmented by summing the total resource demands of the two tasks (independent of their resource competition). A computational version of this model is described in more detail in Wickens (2005), Sarno and Wickens (1995), and Wickens, Bagnall, et al. (2011).
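The two-part prediction (summed demand plus dimension-overlap conflict) can be sketched as follows, in the spirit of those computational models but with purely hypothetical task profiles and weights:

```python
# The four dichotomous dimensions of the multiple resource model.
DIMENSIONS = ("stage", "code", "modality", "visual_channel")

def interference(task_a: dict, task_b: dict,
                 conflict_weight: float = 1.0) -> float:
    """Predicted dual-task interference (arbitrary, hypothetical units):
    total resource demand plus an increment for each dimension on which
    the two tasks occupy the same level.
    """
    total_demand = task_a["demand"] + task_b["demand"]
    shared = sum(1 for d in DIMENSIONS
                 if task_a.get(d) is not None and task_a.get(d) == task_b.get(d))
    return total_demand + conflict_weight * shared

driving = {"stage": "response", "code": "spatial", "modality": "visual",
           "visual_channel": "ambient", "demand": 2.0}
phoning = {"stage": "perceptual", "code": "verbal", "modality": "auditory",
           "visual_channel": None, "demand": 1.5}
# Little dimensional overlap, so interference is driven mostly by demand.
print(interference(driving, phoning))
```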

The third element in predicting success or failure in divided attention between tasks is the allocation policy between them (Norman & Bobrow, 1975; Navon & Gopher, 1979). Intuitively, the more favored task of a pair (the primary task) will preserve its performance close to the single-task level, whereas the less favored (the secondary task) will show a greater decrement. This simple feature, allocation (p. 48) policy, explains why the rate of automobile accidents involving cell phone use, while substantial, is not higher than it is: Most drivers still treat lane keeping and hazard monitoring as tasks of higher priority than the phone conversation.

There is one final factor not accommodated by multiple resource theory that can account for differences in the effectiveness of concurrent task performance, and that is confusion, caused by the similarity of elements within the two tasks (Wickens & Hollands, 2000). The more similar those elements are, the more likely there will be cross talk between the two, such that, for example, elements of one task show up in the response to the second task. A classic example is the challenge of patting your head while rubbing your stomach. Another might be trying to tally or copy student test scores while listening to basketball scores. Note, however, that similarity-based confusion is most likely to occur when the tasks already share some demand for common resources (e.g., in the above two examples, both spatial manual tasks or both auditory/verbal tasks using digits).

Sequential Performance & Task Management

Even when an operator may try to perform two tasks in parallel (albeit with degraded performance on one or both), this may become impossible either because one or both are of high resource demand or because they compete for common incompatible resources, like speaking two different messages at once (the voice can speak only one at a time) or looking at two sources of widely spaced visual inputs. In these circumstances, once the limits of multiple resources have dictated that concurrence is impossible, the first two elements of multiple resource theory (demand and resource structure) no longer play a role in predicting interference. However, the third element—allocation policy—now occupies center stage as the most important factor in sequential task management: which task is performed and which is completely abandoned or neglected, and for how long.

Two general scenarios underlie the manifestation of sequential task management strategies, both involving a decision process of which task to perform, and both partially embedded within the framework of queuing theory (Moray, Dessouky, Kijowski, & Adapathya, 1991). One of these is the study of task switching (e.g., Rogers & Monsell, 1995; Goodrich, this handbook), and the other is the study of interruption management (e.g., Trafton & Monk, 2007). In the former case, the operator is confronted with two tasks and must choose one to initiate first. In the latter case, the operator is already performing one (the “ongoing task”—OT) when a second task (the “interrupting task”—IT) arrives, and must decide whether (or for how long) to continue the OT before switching to the IT, then when to return to the OT. Here researchers often focus on the quality of OT performance upon return (e.g., how fast it is resumed, whether it is resumed where it was “left off,” etc.; Trafton & Monk, 2007; Wickens, Hollands et al., 2000).

In both cases, queuing theory can sometimes be applied to determine optimal strategies of task (and interruption) management (Moray et al., 1991; Liao & Moray, 1993). Some of these strategies are quite intuitive: When two tasks differ in their importance (or in the penalty for delayed completion), the more important should be undertaken first. However, when a large number of task features vary between the two, such as their length, their expected duration, their difficulty, the decay of information within a task while it is neglected, or the uncertainty in priority, then assessing optimal solutions becomes very complex. Indeed, in these circumstances it can easily be argued that the mental workload (and time) cost of a human computing the optimal strategy will consume sufficient resources to offset the very goal of trying to make the optimal choice (Raby & Wickens, 1994; Laudeman & Palmer, 1995).
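One classic scheduling result illustrates the intuitive end of this space. Under strong simplifying assumptions (a single operator, known durations, no switch costs, no information decay), total importance-weighted delay is minimized by serving tasks in decreasing order of importance per unit duration, the “weighted shortest processing time” rule. The sketch below is exactly that simple case; the task names and numbers are hypothetical, and real task sets violate its assumptions quickly.

```python
def wspt_order(tasks: list) -> list:
    """Order tasks by the weighted-shortest-processing-time rule, which
    minimizes total importance-weighted completion time under the
    simplifying assumptions stated in the text.
    """
    return sorted(tasks, key=lambda t: t["importance"] / t["duration"],
                  reverse=True)

tasks = [
    {"name": "reply to ATC call",   "importance": 5.0, "duration": 0.5},
    {"name": "reprogram FMS route", "importance": 4.0, "duration": 3.0},
    {"name": "log fuel reading",    "importance": 1.0, "duration": 0.5},
]
for t in wspt_order(tasks):
    print(t["name"])
```

Note that the rule will defer a long, important task behind a quick, trivial one; whether a human operator should do the same is exactly the kind of question this literature addresses.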

While there are many design-relevant research conclusions in this area, many of these are also based upon only limited data, or data collected in fairly simple laboratory environments. The following paragraphs describe some of the more important of these.

More optimal task switching can be achieved with a preview of upcoming tasks (e.g., their durations; Tulga & Sheridan, 1980).

Very slow task switching in multitask environments is suboptimal (Raby & Wickens, 1994), and optimal switching frequency can at least partially be dictated by optimal models (Moray, 1986; Wickens, McCarley, et al., 2008). Particularly in widely distributed visual workspaces, task switching can be partially captured by eye movements, using the (p. 49) SEEV model to prescribe optimal switching (Koh et al., 2011).

Very slow task switching characterizes what is sometimes referred to as “attentional tunneling” or “attentional narrowing,” where critical areas of interest (and tasks served by those areas) are neglected for long periods of time, inviting failures to notice key events in those areas (Wickens & Alexander, 2009; Wickens & Horrey, 2009), particularly when those events are unexpected (Wickens, Hooey et al., 2009). In these instances, the “task” that is neglected is often considered the task of maintaining situation awareness (see below).

Three qualitatively different task features tend to induce attentional tunneling: extreme levels of interest (such as an engaging cell phone conversation; Horrey, Lesch, & Garabet, 2009), compelling realistic displays (e.g., a 3-D navigational display; Wickens & Alexander, 2009), and fault management (Moray & Rotenberg, 1989).

Attentional tunneling can be mitigated by salient alarms for neglected tasks (see above), but to be most effective such alarms should be adaptive (see Kaber, this handbook), more likely to be activated if automation infers that neglect is taking place (e.g., following an assessment of prolonged head-down orientation in vehicle control).

In interruption management, several variables influence the fluency of task resumption (Dismukes, 2010; Trafton & Monk, 2007; Monk, Trafton, & Boehm-Davis, 2008; Grundgeiger et al., 2010; Smallman & St. John, 2008; Wickens & McCarley, 2008; Morgan, Patrick et al., 2009; Wickens, Hollands et al., 2012), particularly the choice of when to leave an ongoing task (after a subgoal has been completed) and whether a “placeholder” is imposed when the ongoing task is left (e.g., a mark on the page where reading stopped), in order to increase the fluency of return to the OT.

Voice communication tasks tend to be particularly intrusive in interruptions, leading to premature abandonment of ongoing tasks of higher priority (McFarlane & Latorella, 2002; Damos, 1997).

Many aspects of interruption management flow from the study of prospective memory (Dismukes, 2010; Loukopoulos, Dismukes, & Barshi, 2009), which is the memory to do a future task. In this particular case, the “future task” is re-engaging the ongoing task following the interruption.

Design-oriented solutions are beginning to be developed that can (a) use automation to monitor the progress of certain types of manual work to assess more appropriate times to interrupt (Bailey & Konstan, 2006; Dorneich et al., 2012); (b) provide advance notification of the importance of the interruption so that the operator can decide whether to fully abandon the ongoing task or postpone a switch to the interrupting task (Ho, Nikolic, Waters, & Sarter, 2004); (c) provide visual placeholders, like a flashing cursor, that will support rapid reacquisition of an ongoing task after the switch (Trafton, Altmann, & Brock, 2005); and (d) provide support tools such as that described by Smallman and St. John (2008).

Hybrid Models

There is a set of models describing multitasking that are neither strictly parallel (like multiple resources; see above) nor strictly serial (like queuing theory models of sequential performance), but involve scheduling multiple cognitive processes in the service of two tasks that may sometimes be used in series and sometimes in parallel (Meyer & Kieras, 1997; Liu, 1996). One particularly important approach along this line is that of threaded cognition (Salvucci & Taatgen, 2008, 2011; Salvucci, this handbook), whose authors have proposed a series of guidelines for the design of multitasking environments.

Conclusion

In conclusion, a great deal of research is required to better understand how people handle sequential tasks under time pressure. One of the more intriguing aspects of this issue involves defining the boundary condition of increasing demands when the multitasker abandons hope of concurrent processing and “regresses” to a sequential mode, ceasing the performance of one task altogether. This “point” is sometimes referred to as a “red line” along a scale of increasing mental workload, imposed by tasks (or sets of tasks), and brings us to the next section on mental workload.

Mental Workload

Mental Workload Assessment


Figure 2.3 The supply-demand curve of resource allocation, illustrating the concept of the “red line”. Wickens, Christopher; Hollands, Justin G.; Engineering Psychology and Human Performance, 3rd Edition, (c) 2000. Reprinted by permission of Pearson Education, Inc., Upper Saddle River, NJ.

Mental workload may be roughly described as the relation between the attentional resource demands (fuel requirements) imposed by tasks and the resources supplied by the operator in performing those tasks (fuel available “in the tank”). In the former case, resource requirements can be specified by critical task characteristics that impose greater demands, such as the working memory demands of a task, the number of mental operations, the signal-noise ratio of its displayed elements, the compatibility of mapping from display to control, the precision of required control, the time pressure, or simply the number of tasks imposed at one time. Because a given task (p. 50) environment may be characterized by several of these dimensions at once, each expressed in very different units, the issue of how to combine these into a single metric of “mental workload imposed” is quite challenging, to say the least. It is complicated further because demands of a task configuration will decrease with the skill and practice of the performer.

In the case of resources supplied, there is some evidence that measures of “effort investment” may be more quantifiable, in terms of either physiological measures (Tsang & Vidulich, 2006; Kramer & Parasuraman, 2007) such as heart rate variability or pupil diameter, or in terms of subjective measures (Hart & Staveland, 1988; Hill et al., 1992; Tsang & Vidulich, 2006).

Both measures of resources required and resources supplied (invested) are joined in the “supply-demand” function shown in Figure 2.3, in which increasing demands on the task (x-axis) are met with increasing resources supplied (solid line), up to the point at which resources available are “maxed out.” Performance on the task(s) in question (the dashed line) is perfect up to this point, but further increases in demand cannot be met, and performance then declines. In the parlance of the previous section, this point of inflection on both curves is often referred to as the “red line” of workload, in that designers should strive to maintain task demands always slightly to the left of this point. The desire to stay to the left of the inflection is driven by the design goal of maintaining a margin of “reserve capacity” in order to deal with unexpected emergencies should something go wrong.
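The relation depicted in Figure 2.3 reduces to a simple piecewise form, sketched below with hypothetical units: supplied resources track demand up to capacity (the red line), and performance is perfect up to that point and declines beyond it.

```python
def resources_supplied(demand: float, capacity: float = 1.0) -> float:
    """Resources rise with demand until the supply is 'maxed out'."""
    return min(demand, capacity)

def performance(demand: float, capacity: float = 1.0,
                decay: float = 1.0) -> float:
    """Perfect performance left of the red line; a hypothetical linear
    decline once demand exceeds the resources available.
    """
    if demand <= capacity:
        return 1.0
    return max(0.0, 1.0 - decay * (demand - capacity))

# A design margin keeps nominal demand below capacity (reserve capacity),
# so unexpected additional demand does not immediately degrade performance.
for d in (0.5, 1.0, 1.2, 1.5):
    print(d, resources_supplied(d), performance(d))
```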

In addressing issues of workload, designers are confronted with two top-level questions. First, how can we predict or measure the point along the x-axis of Figure 2.3 imposed by a particular task requirement in relation to the “red line”? Given the challenges of assessing either resources required or supplied, this can be a difficult enterprise, although progress is being made via the elaborate development of workload assessment measures and computational models of task demand (Laughery, LeBiere, & Archer, 2006). Second, if workload is either predicted or assessed to be above the red line, what can be done to reduce it? Solutions often can be categorized into those that:

  •  redesign the task (e.g., by changing an interface to use separate resources; by reducing incompatible mappings, by reducing working memory requirements, by facilitating information integration, etc.)
  •  “redesign” the operator by training
  •  impose automation

The third solution, using automation to eliminate or reduce human task demands, leads us to a final section relating automation to attention but also invoking a critical third variable, situation awareness.

Attention, Situation Awareness, Workload, and Automation

At a fundamental level, as suggested above, automation and attention demands (workload) are negatively related: The higher the levels of automation that are invoked, the lower the operator workload. The pilot of a modern aircraft with an automated flight management system can fly a complex route with far less hands-on flying than the pilot of a general aviation airplane, where stick, rudder, and throttle may need to be continuously adjusted. But such a simple relationship is complicated in many ways, particularly given the all-important influence of situation awareness (SA; Endsley, 1995; Endsley, this handbook; Durso, Rawson, & Girotto, 2007; Banbury & Tremblay, 2004; Parasuraman, Sheridan, & Wickens, 2008; Wickens, 2008b). Thus it is now well established that higher levels of automation will degrade SA in two attention-related respects: monitoring/complacency and working memory.

With regard to monitoring, as automation assumes more tasks that would otherwise require human perception and supervision, the need to monitor what automation is doing decreases. In terms of the alerting systems discussed earlier, this was described as increasing reliance upon automation (Meyer & Lee, 2004, this handbook), reflected in decreased scanning. Such decreases can be justified as, in some (p. 51) sense, optimal (Moray, 2003; Moray & Inagaki, 2000), given the low likelihood of automation failure. But if the human supervisor is not looking at automation (or the raw data it is processing), he or she will be slower in noticing those very rare failures in the automated task domain. This is what Endsley has described as a reduction in level 1 situation awareness.

With regard to understanding, the relevant phenomenon in cognitive psychology is referred to as the generation effect (Slamecka & Graf, 1978). People are more likely to remember, even briefly, the status of a dynamic system if they have actively responded to change the system than if they have passively witnessed another agent (here automation) making those changes. You remember well the actions you have just taken; the resources invested in making those actions serve you well for future retention. In contrast, decreased memory for (or awareness of) the changed state of a highly automated system will leave the monitor of such a system less aware of its precise condition if a manual takeover is required in the case of a failure. This describes a degradation of Endsley’s level 2 SA (understanding); since in many dynamic systems the current state is predictive of future states, it also translates to a degradation of level 3 SA (prediction).

We note then that, as mediated by changes in automation level, there is a direct relationship between SA and workload, a finding that is partially (although imperfectly) documented by empirical research (e.g., Kaber & Endsley, 2004; see Wickens, 2008; Wickens, Li, Santamaria, Sebok, & Sarter, 2010, for a summary). System designers should therefore seek a compromise in adopting a level of automation, between keeping workload manageable and maintaining SA at a sufficiently high level so that the operator can effectively notice and enter the loop should things go wrong.

It is important to realize, however, that the automation-mediated trade-off (between workload and loss of situation awareness) is not inevitable (Tsang & Vidulich, 2006; Wickens, Li, et al., 2010). For example, on the one hand, it may be possible to increase the level of automation to some degree such that workload will decrease but SA will not. This will happen if the curves relating the decline of SA and of workload to increasing automation level are nonlinear (Wickens, 2008). On the other hand, there are certainly design interventions that will simultaneously reduce workload while improving SA. Certainly training is one: The skilled operator will have less workload and greater SA than the novice. But importantly, for this chapter, many aspects of display integration can also accomplish the combined goals: A well-designed, integrated, and intuitive display can provide a rapid, easy-to-process picture of a dynamic system (supporting situation awareness), and in so doing reduce the cognitive demands of information access, integration, and working memory, simultaneously lowering workload.

Conclusion

In conclusion, we have seen how both the fuel and the filter metaphors provide a useful way of representing many aspects of attention. Derived from basic theory, these two also provide important implications for system design and cognitive engineering. Yet despite the fact that theoretical concepts of attention have been prominent for over a century (James, 1890; Titchener, 1908) and have been applied to system design for over half that time (e.g., Craik, 1947), much remains to be done. For example, the two metaphors need to be better linked to understand the relationship between scanning, selection, and multitasking. In particular, computational models of how attention operates in the complex world beyond the laboratory must be formulated and subjected to rigorous empirical validation, with complex and heterogeneous tasks, to assess the strategies adopted by workers: when to perform tasks concurrently and when, once the red line is exceeded, to abandon concurrence and initiate serial multitasking. This is the invitation to the next generation of researchers.

References

Andre, A. D., & Wickens, C. D. (1992). Layout analysis for cockpit display systems. SID International Symposium Digest of Technical Papers. Paper presented at the Annual Symposium of the Society for Information Display, Seattle, Washington.

Aretz, A. J. (1991). The design of electronic map displays. Human Factors, 33, 85–101.

Bailey, B. P., & Konstan, J. A. (2006). On the need for attention-aware systems: Measuring effects of interruption on task performance, error rate, and affective state. Computers in Human Behavior, 23, 685–708.

Banbury, S., & Tremblay, S. (Eds.). (2004). A cognitive approach to situation awareness: Theory and application. Aldershot, England: Ashgate.

Barclay, R. L., Vicari, J. J., Doughty, A. S., Johanson, J. F., & Greenlaw, R. L. (2006). Colonoscopic withdrawal times and adenoma detection during screening colonoscopy. New England Journal of Medicine, 355, 2533–2541.

Barnett, B. J., & Wickens, C. D. (1988). Display proximity in multicue information integration: The benefit of boxes. Human Factors, 30, 15–24.

(p. 52) Beck, R., Lohrenz, M., & Trafton, G. (2010). Measuring search efficiency in complex search tasks. Journal of Experimental Psychology: Applied, 16, 238–250.

Bennett, K. B., & Flach, J. M. (2011). Display and interface design: Subtle science, exact art. Boca Raton, FL: CRC Press.

Bennett, K. B., & Flach, J. (2012). Visual momentum redux. International Journal of Human-Computer Studies, 70, 399–414.

Bennett, K. B., & Flach, J. M. (1992). Graphical displays: Implications for divided attention, focused attention, and problem solving. Human Factors, 34, 513–533.

Boot, W., Kramer, A., & Becic, E. (2007). Capturing attention in the laboratory and the real world. In A. Kramer, D. Wiegmann, & A. Kirlik (Eds.), Attention: From theory to practice (pp. 27–45). Oxford, England: Oxford University Press.

Breznitz, S. (1983). Cry-wolf: The psychology of false alarms. Hillsdale, NJ: Erlbaum.

Broadbent, D. (1958). Perception and communication. New York, NY: Pergamon.

Burian, B. (2007). Perturbing the system: Emergency and off-nominal situations under NextGen. International Journal of Applied Aviation Studies, 8, 114–127.

Burns, C. M., & Hajdukiewicz, J. R. (2004). Ecological interface design. Boca Raton, FL: CRC Press.

Burns, C. M., Skraaning, G., Jamieson, G. A., Lau, N., Kwok, J., Welch, R., & Andresen, G. (2008). Evaluation of ecological interface design for nuclear process control: Situation awareness effects. Human Factors, 50, 663–679.

Carpenter, S. (2002). Sights unseen. APA Monitor, 32, 54–57.

Carswell, C. M., & Wickens, C. D. (1996). Mixing and matching lower-level codes for object displays: Evidence for two sources of proximity compatibility. Human Factors, 38, 1–22.

Craik, K. W. J. (1947). Theory of the human operator in control systems I: The operator as an engineering system. British Journal of Psychology, 38, 56–61.

Damos, D. L. (1997). Using interruptions to identify task prioritization in Part 121 air carrier operations. In R. Jensen (Ed.), Proceedings of the 9th International Symposium on Aviation Psychology. Columbus, OH: Ohio State University.

Dismukes, R. K. (2010). Remembrance of things future. In D. Harris (Ed.), Reviews of human factors & ergonomics (Vol. 6). Santa Monica, CA: Human Factors & Ergonomics Society.

Dixon, S. R., & Wickens, C. D. (2006). Automation reliability in unmanned aerial vehicle control: A reliance-compliance model of automation dependence. Human Factors, 48, 474–486.

Dixon, S. R., Wickens, C. D., & McCarley, J. (2007). On the independence of reliance and compliance: Are automation false alarms worse than misses? Human Factors, 49, 564–573.

Dorneich, M. C., Ververs, P. M., Mathan, S., Whitlow, S., & Hayes, C. C. (2012). Considering etiquette in the design of an adaptive system. Journal of Cognitive Engineering and Decision Making, 6(2), 243–265.

Drury, C. G. (1990). Visual search in industrial inspection. In D. Brogan (Ed.), Visual search (pp. 263–276). London, England: Taylor & Francis.

Drury, C. G. (1994). The speed accuracy tradeoff in industry. Ergonomics, 37, 747–763.

Drury, C. G. (2006). Inspection. In W. Karwowski (Ed.), International encyclopedia of ergonomics and human factors (Vol. 2). Boca Raton, FL: Taylor & Francis.

Drury, C. G., & Chi, C. F. (1995). A test of economic models of stopping policy in visual search. IIE Transactions, 27, 382–393.

Drury, C., Spencer, F., & Schurman, D. (1997). Measuring human detection performance in aircraft inspection. In Proceedings of the 41st Annual Meeting of the Human Factors Society. Santa Monica, CA: Human Factors & Ergonomics Society.

Drury, C. G., Maheswar, G., Das, A., & Helander, M. G. (2001). Improving visual inspection using binocular rivalry. International Journal of Production Research, 39, 2143–2153.

Duncan, J. (1984). Selective attention and the organization of visual information. Journal of Experimental Psychology: General, 113, 501–517.

Durso, F., Rawson, K., & Girotto, S. (2007). Comprehension and situation awareness. In F. Durso (Ed.), Handbook of applied cognition (pp. 163–194). Chichester, England: John Wiley.

Endsley, M. R. (1995). Toward a theory of situation awareness in dynamic systems. Human Factors, 37, 32–64.

Fisher, D. L., Coury, B. G., Tengs, T. O., & Duffy, S. A. (1989). Minimizing the time to search visual displays: The role of highlighting. Human Factors, 31(2), 167–182.

Fisher, J. D., & Tan, K. C. (1989). Visual displays: The highlighting paradox. Human Factors, 31, 17–31.

Fitts, P., & Posner, M. I. (1967). Human performance. Belmont, CA: Brooks/Cole.

Folk, C. L., Remington, R. W., & Johnston, J. C. (1992). Involuntary covert orienting is contingent on attentional control settings. Journal of Experimental Psychology: Human Perception and Performance, 18, 1030–1044.

Fougnie, D., & Marois, R. (2007). Executive working memory load induces inattentional blindness. Psychonomic Bulletin & Review, 14, 142–147.

Fowler, R., Williams, W., Fowler, M., & Young, D. (1968). An investigation of the relationship between operator performance and operator panel layout for continuous tasks (Technical Report No. 68–170). Wright-Patterson AFB, OH: US Air Force Flight Dynamics Lab.

Geisler, W. S., & Chou, K. (1995). Separation of low-level and high-level factors in complex tasks: Visual search. Psychological Review, 102, 356–378.

Goettl, B. P., Wickens, C. D., & Kramer, A. F. (1991). Integrated displays and the perception of graphical data. Ergonomics, 34, 1047–1063.

Gray, W., & Boehm-Davis, D. (2000). Milliseconds matter. Journal of Experimental Psychology: Applied, 6, 322–335.

Gray, W. D., & Fu, W. T. (2004). Soft constraints in interactive behavior: The case of ignoring perfect knowledge in-the-world for imperfect knowledge in-the-head. Cognitive Science, 28, 359–382.

Grundgeiger, T., Sanderson, P., Macdougall, H., & Balaubramanian, V. (2010). Interruption management in the intensive care unit. Journal of Experimental Psychology: Applied, 16, 317–334.

Hart, S. G., & Staveland, L. E. (1988). Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In P. A. Hancock & N. Meshkati (Eds.), Human mental workload (pp. 139–183). Amsterdam, The Netherlands: North Holland.

(p. 53) Hart, S. G., & Wickens, C. D. (2010). Cognitive workload. In NASA human systems integration handbook (Chapter 6). Washington, DC: National Aeronautics and Space Administration.

Hill, S. G., Iavecchia, H., Byers, J., Bittner, A., Zaklad, A., & Christ, R. (1992). Comparison of four subjective workload rating scales. Human Factors, 34, 429–440.

Ho, C. Y., Nikolic, M. I., Waters, M., & Sarter, N. B. (2004). Not now! Supporting interruption management by indicating the modality and urgency of pending tasks. Human Factors, 46, 399–410.

Hollands, J. G., Pavlovic, N. J., Enomoto, Y., & Jiang, H. (2008). Smooth rotation of 2-D and 3-D representations of terrain: An investigation into the utility of visual momentum. Human Factors, 50, 62–76.

Horrey, W. J., Lesch, M. F., & Garabet, A. (2009). Dissociation between driving performance and driver’s subjective estimates of performance and workload in dual task conditions. Journal of Safety Research, 40, 7–12.

Horrey, W. J., Wickens, C. D., & Consalus, K. P. (2006). Modeling drivers’ visual attention allocation while interacting with in-vehicle technologies. Journal of Experimental Psychology: Applied, 12(2), 67–86.

Itti, L., & Koch, C. (2000). A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40, 1489–1506.

James, W. (1890). Principles of psychology. New York, NY: Holt. (Reprinted in 1950 by Dover Press.)

Jolicoeur, P., & Ingleton, M. (1991). Size invariance in curve tracing. Memory & Cognition, 19(1), 21–36.

Just, M. A., Carpenter, P. A., Keller, T. A., Emery, L., Zajac, H., & Thulborn, K. R. (2001). Interdependence of nonoverlapping cortical systems in dual cognitive tasks. Neuroimage, 14, 417–426.

Kaber, D. B., & Endsley, M. (2004). The effects of level of automation and adaptive automation on human performance, situation awareness and workload in a dynamic control task. Theoretical Issues in Ergonomics Science, 5, 113–153.

Kahneman, D. (1973). Attention and effort. Englewood Cliffs, NJ: Prentice Hall.

Kestin, I., Miller, B., & Lockhart, C. (1988). Auditory alarms during anesthesia monitoring. Anesthesiology, 69, 106–109.

Keller, M. D., & Rice, S. (2010). System-wide versus component-specific trust using multiple aids. The Journal of General Psychology, 137, 114–128.

Kirwan, B., & Ainsworth, L. (1992). A guide to task analysis. London, England: Taylor & Francis.

Koh, R., Park, T., Wickens, C., Teng, O., & Chia, N. (2011). Differences in attentional strategies by novice and experienced operating theatre scrub nurses. Journal of Experimental Psychology: Applied, 17, 233–246.

Kramer, A. F., & Parasuraman, R. (2007). Neuroergonomics—application of neuroscience to human factors. In J. Caccioppo, L. Tassinary, & G. Berntson (Eds.), Handbook of psychophysiology (2nd ed.). New York: Cambridge University Press.

Kramer, A., Wiegmann, D., & Kirlik, A. (Eds.). (2007). Attention: From theory to practice. Oxford, England: Oxford University Press.

Kroft, P. D., & Wickens, C. D. (2003). Displaying multi-domain graphical database information: An evaluation of scanning, clutter, display size, and user interactivity. Information Design Journal, 11(1), 44–52.

Laudeman, I. V., & Palmer, E. A. (1995). Quantitative measurement of observed workload in the analysis of aircrew performance. International Journal of Aviation Psychology, 5(2), 187–198.

Laughery, K. R., LeBiere, C., & Archer, S. (2006). Modeling human performance in complex systems. In G. Salvendy (Ed.), Handbook of human factors & ergonomics (pp. 967–996). Hoboken, NJ: John Wiley & Sons.

Lees, M. N., & Lee, J. D. (2007). The influence of distraction and driving context on driver response to imperfect collision warning systems. Ergonomics, 50, 1264–1286.

Liao, J., & Moray, N. (1993). A simulation study of human performance deterioration and mental workload. Le Travail Humain, 56(4), 321–344.

Liu, Y. (1996). Queueing network modeling of elementary mental processes. Psychological Review, 103, 116–136.

Loukopoulos, L., Dismukes, R. K., & Barshi, I. (2009). The multitasking myth: Handling complexity in real-world operations. Burlington, VT: Ashgate.

Lu, S., Wickens, C. D., Sarter, N., & Sebok, A. (2011). Informing the design of multimodal displays: A meta-analysis of empirical studies comparing auditory and tactile interruptions. In Proceedings of the 2011 meeting of the Human Factors & Ergonomics Society. Santa Monica, CA: Human Factors & Ergonomics Society.

Mack, A., & Rock, I. (1998). Inattentional blindness. Cambridge, MA: MIT Press.Find this resource:

Madhavan, P., Weigmann, D., & Lacson, F. (2006). Automation failures on tasks easily performed by operators undermine trust in automated aids. Human Factors, 48, 241–256.Find this resource:

Maltz, M., & Shinar, D. (2003). New alternative methods in analyzing human behavior in cued target acquisition. Human Factors, 45, 281–295.Find this resource:

McCarley, J., Wickens, C., Sebok, A., Steelman-Allen, K., Bzostek, J., & Koenecke, C. (2009). Control of attention: Modeling the effects of stimulus characteristics, task demands, and individual differences. University of Illinois Human Factors Division: Urbana, IL: NASA NRA: NNX07AV97A.Find this resource:

McFarlane, D. C., & Latorella, K. A. (2002). The source and importance of human interruption in human-computer interface design. Human-Computer Interaction, 17, 1–61.Find this resource:

McKee, S. P., & Nakayama, K. (1984). The detection of motion in the peripheral visual field. Vision Research, 24, 25–32.Find this resource:

Metzger, U., & Parasuraman, R. (2005). Automation in future air traffic management: Effects of decision aid reliability on controller performance and mental workload. Human Factors, 47, 33–49.Find this resource:

Meyer, D. E., & Kieras, D. E. (1997). A computational theory of executive cognitive processes and multiple-task performance: Part 1: Basic mechanisms. Psychological Review, 104, 3–65.Find this resource:

Meyer, J. (2001). Effects of warning validity and proximity on responses to warnings. Human Factors, 43(4), 563–572.Find this resource:

Meyer, J. (2004). Conceptual issues in the study of dynamic hazard warnings. Human Factors, 46(2), 196–204.Find this resource:

Monk, C., Trafton, G., & Boehm-Davis, D. (2008). The effect of interruption duration and demand on resuming suspended goals. Journal of Experimental Psychology: Applied, 13, 299–315.Find this resource:

Moray, N. (1986). Monitoring behavior and supervisory control. In L. K. K. R. Boff & J. P. Thomas (Eds.), Handbook of perception and performance (Vol. 2, pp. 40–1–40–51). New York, NY: Wiley & Sons.Find this resource:

Moray, N. (2003). Monitoring, complacency, scepticism and eutectic behaviour. International Journal of Industrial Ergonomics, 31(3), 175–178.Find this resource:

(p. 54) Moray, N., Dessouky, M. I., Kijowski, B. A., & Adapathya, R. (1991). Strategic behavior, workload and performance in task scheduling. Human Factors, 33, 607–632.Find this resource:

Moray, N., & Inagaki, T. (2000). Attention and complacency. Theoretical Issues in Ergonomics Science, 1, 354–365.Find this resource:

Moray, N., & Rotenberg, I. (1989). Fault management in process control: Eye movements and action. Ergonomics, 32(11), 1319–1342.Find this resource:

Morgan, P., Patrick, J., Waldron, S., King, S., & Patrick, T. (2009). Improving memory after interruption: exploiting soft constraints and manipulating information access cost. Journal of Experimental Psychology: Applied15, 291–306.Find this resource:

Most, S. B., & Astur, R. S. (2007). Feature based attentional set as a cause of traffic accidents. Visual Cognition, 15(2), 125–132.Find this resource:

Navon, D., & Gopher, D. (1979). On the economy of the human processing systems. Psychological Review, 86, 254–255.Find this resource:

Neisser, U. (1963). Decision time without reaction time: Experiments on visual search. American Journal of Psychology, 76, 376–395.Find this resource:

Norman, D. A., & Bobrow, D. G. (1975). On data-limited and resource-limited processes. Cognitive Psychology, 7, 44–64.Find this resource:

Nunes, A., Wickens, C. D., & Yin, S. (2006). Examining the viability of the Neisser search model in the flight domain and the benefits of highlighting in visual search. In Proceedings of the 50th Annual Meeting of the Human Factors & Ergonomics Society (pp. 35–39). Santa Monica, CA: Human Factors and Ergonomics Society.Find this resource:

Olmos, O., Liang, C. -C., & Wickens, C. D. (1997). Electronic map evaluation in simulated visual meteorological conditions. International Journal of Aviation Psychology, 7, 37–66.Find this resource:

Parasuraman, R., Sheridan, T., & Wickens, C. D. (2008). Situation awareness, mental workload, and trust in automation: Viable, empirically supported cognitive engineering constructs. Cognitive Engineering and Decision Making, 2, 141–161.Find this resource:

Pashler, H. E. (1998). The psychology of attention. Cambridge, MA: MIT Press.Find this resource:

Peebles, D. (2008). The effect of emergent features on judgments of quantity in configural and separable displays. Journal of Experimental Psychology: Applied, 14, 85–100.Find this resource:

Pomerantz, J. R., & Pristach, E. A. (1989). Emergent features, attention, and perceptual glue in visual form perception. Journal of Experimental Psychology, 15, 635–649.Find this resource:

Previc, F. H. (1998). The neuropsychology of 3-D space. Psychological Bulletin, 124, 123–164.Find this resource:

Previc, F. H. (2000). Neuropsychological guidelines for aircraft control stations. IEEE Engineering in Medicine and Biology, March/April, 81–88.Find this resource:

Proctor, R., & Proctor, J. (2006). Selection and control of action. In G. Salvendy (Ed.), Handbook of human factors and ergonomics (3rd ed., pp. 89–110). New York, NY: John Wiley.Find this resource:

Raby, M., & Wickens, C. D. (1994). Strategic workload management and decision biases in aviation. International Journal of Aviation Psychology, 4(3), 211–240.Find this resource:

Rantanen, E. M., Wickens, C. D., Xu X., & Thomas, L. C. (2004). Developing and validating human factors certification criteria for cockpit displays of traffic information avionics. Technical Report AHFD-04–1/FAA-04–1. Savoy, IL: University of Illinois, Aviation Human Factors Division.Find this resource:

Regan, M., Lee, J., & Young, K. (2009). Driver distraction. Boca Raton, FL: CRC PressFind this resource:

Remington, R. W., Johnston, J. C., Ruthruff, E., Gold, M., & Romera, M. (2001). Visual search in complex displays: Factors affecting conflict detection by air traffic controllers. Human Factors, 42, 349–366.Find this resource:

Rensink, R. A. (2002). Change detection. Annual Review of Psychology, 53, 245–277.Find this resource:

Robertson, G., Czerwinski, M., Fisher, D., & Lee, B. (2009). Human factors of information visualization. In F. Durso (Ed.), Reviews of Human Factors and Ergonomics (Vol. 5). Santa Monica, CA: Human Factors and Ergonomics Society.Find this resource:

Rogers, R. D., & Monsell, S. (1995). Costs of a predictable switch between simple cognitive tasks. Journal of Experimental Psychology: General, 124, 207–231.Find this resource:

Rosenhotz, R., Li Y., & Nakano, L. (2007). Measuring visual clutter. Journal of Vision, 7, 1–22.Find this resource:

Salvucci, D., & Taatgen, N. A. (2008). Threaded cognition. Psychological Review, 115, 101–130.Find this resource:

Salvucci, D., & Taatgen, N. A. (2011). The multi tasking mind. Oxford, England: Oxford University Press.Find this resource:

Sanders, M., & McCormick, E. (1993). Human factors in engineering and design. New York: Wiley.Find this resource:

Sanquist, T. F., Doctor, P., & Parasuraman, R. (2008). A threat display concept for radiation detection in homeland security cargo screening. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications, 38, 856–860.Find this resource:

Sarno, K. J., & Wickens, C. D. (1995). Role of multiple resources in predicting time-sharing efficiency: Evaluation of three workload models in a multiple-task setting. International Journal of Aviation Psychology, 5(1), 107–130.Find this resource:

Schneider, W., & Shiffrin, R. (1977). Controlled and automatic human information processing I: Detection, search and attention. Psychological Review, 84, 1–66.Find this resource:

Scholl, B. J. (2001). Objects and attention: The state of the art. Cognition, 80, 1–46.Find this resource:

Seagull, F. J., & Sandserson, P. M. (2001). Anesthesiology alarms in context: An observational study. Human Factors43, 66–78.Find this resource:

Seidler, K. S., & Wickens, C. D. (1992). Distance and organization in multifunction displays. Human Factors, 34, 555–569.Find this resource:

Senders, J. (1964). The human operator as a monitor and controller of multidegree of freedom systems. IEEE Transactions on Human Factors in Electronics, HFE-5, 2–6.Find this resource:

Senders, J. (1980). Visual scanning processes (Unpublished doctoral dissertation). University of Tilburg, The Netherlands.Find this resource:

Sheridan, T. (1970). On how often the supervisor should sample. IEEE Transactions on Systems Science and Cybernetics, SSC-6(2), 140–145.Find this resource:

Simons, D. J., & Levin, D. T. (1997). Change blindness. Trends in Cognitive Science, 1(7), 261–267.Find this resource:

Slamecka, N. J., & Graf, P. (1978). The generation effect: Delineation of a phenomena. Journal of Experimental Psychology: Human Learning and Memory, 4, 592–604.Find this resource:

Sorkin, R. D. (1989). Why are people turning off our alarms? Human Factors Bulletin, 32(4), 3–4.Find this resource:

Sorkin, R. D., Kantowitz, B. H., & Kantowitz, S. C. (1988). Likelihood alarm displays. Human Factors, 30(4), 445–459.Find this resource:

Stanton, N. A., & Baber, C. (2008). Modelling of human alarm handling response times: A case study of the Ladbroke Grove rail accident in the UK. Ergonomics, 51, 423–440.Find this resource:

Steelman-Allen, K. S., McCarley, J. S., & Wickens, C. D. (2011). Modeling the control of attention in complex visual displays. Human Factors, 53, 143–153.Find this resource:

(p. 55) St. John, M., & Manes, D. I. (2002). Making unreliable automation useful. 46th Annual Meeting of the Human Factors & Ergonomic Society. Santa Monica Cal.: Human Factors.Find this resource:

St. John, M., & Smallman, H. (2008) Four design principles for supporting situation awareness. Journal of Cognitive Engineering and Decision Making, 2, 118–139.Find this resource:

Sternberg, S. (1966). High speed scanning in human memory. Science, 153, 652–654.Find this resource:

Tang, 2001Find this resource:

Teichner, W. H., & Mocharnuk, J. B. (1979). Visual search for complex targets. Human Factors, 21, 259–275.Find this resource:

Titchner, E. B. (1908). Lectures on the elementary psychology of feeling and attention. New York, NY: MacMillan.Find this resource:

Trafton, J. G., Altman, E. M., & Brock, D. P. (2005). Huh? What was I doing? How people use environmental cues after an interruption. In Proceedings of the Human Factors and Ergonomics Society 49th Annual Meeting (pp. 468–472). Santa Monica, CA: Human Factors and Ergonomics Society.Find this resource:

Trafton, J. G., & Monk, C. (2007). Dealing with interruptions. Reviews of Human Factors & Ergonomics 3 (Chapter 4). Santa Monica, CA: Human Factors & Ergonomics Society.Find this resource:

Treisman, A. (1986). Properties, parts, and objects. In K. R. Boff, L. Kaufman, & J. P. Thomas (Eds.), Handbook of perception and human performance (Vol. 2, pp. 31–31/35–70). New York, NY: Wiley and Sons.Find this resource:

Tsang, P., & Vidulich, M. (2006). Mental workload and situation awareness. In G. Salvendy (Ed.), Handbook of human factors & ergonomics (pp. 243–268). New York, NY: John Wiley.Find this resource:

Tulga, M. K., & Sheridan, T. B. (1980). Dynamic decisions and workload in multitask supervisory control. IEEE Transactions on Systems, Man and Cybernetics, SMC-10, 217–232.Find this resource:

Vicente, K. J. (2002). Ecological interface design: Progress and challenges. Human Factors, 44, 62–78.Find this resource:

Welford, A. T. (1967). Single channel operation in the brain. Acta Psychologica, 27, 5–21.Find this resource:

Wickens, C. D. (1980). The structure of attentional resources. In R. Nickerson (Ed.), Attention and performance (Vol. 7, pp. 239–257). Hillsdale, NJ: Erlbaum.Find this resource:

Wickens, C. D. (1984). Processing resources in attention. In R. Parasuraman & R. Davies (Eds.), Varieties of attention (pp. 63–101). New York, NY: Academic Press.Find this resource:

Wickens, C. D. (1993). Cognitive factors in display design. Journal of the Washington Academy of Sciences, 83(4), 179–201.Find this resource:

Wickens, C. D. (2000a). The tradeoff of design for routine and unexpected performance: Implications of situation awareness. In D.J. Garland & M.R. Endsley (Eds.), Situation awareness analysis and measurement. Mahwah, NJ: Lawrence Erlbaum.Find this resource:

Wickens, C. D. (2000b). Human factors in vector map design: The importance of task-display dependence. Journal of Navigation, 53(1), 54–67.Find this resource:

Wickens, C. D. (2002). Multiple resources and performance prediction. Theoretical Issues in Ergonomics Science, 3(2), 159–177.Find this resource:

Wickens, C. D. (2005). Multiple resource time sharing models. In N. Stanton et al. (Eds.), Handbook of human factors and ergonomics methods (pp. 40–1–40–7). Boca Raton, FL: CRC Press.Find this resource:

Wickens, C. D. (2008a). Multiple resources and mental workload. Human Factors Golden Anniversary Special Issue, 3, 449–455.Find this resource:

Wickens, C. D. (2008b). Situation awareness. Review of Mica Endsley’s articles on situation awareness. Human Factors, Golden Anniversary Special Issue, 50, 397–403.Find this resource:

Wickens, C. D. (2012). Noticing events in the visual workplace: The SEEV and NSEEV models. In R. Hoffman & R. Parasuraman (Eds.), Handbook of applied cognitive engineering. Cambridge, UK: Cambridge University Press.Find this resource:

Wickens, C. D., & Alexander, A. L. (2009). Attentional tunneling and task management in synthetic vision displays. International Journal of Aviation Psychology, 19, 1–17.Find this resource:

Wickens, C. D., Alexander, A. L., Ambinder, M. S., & Martens, M. (2004). The role of highlighting in visual search through maps. Spatial Vision, 37, 373–388.Find this resource:

Wickens, C. D., Bagnall, T., Gosakan, M., & Walters, B. (2011). Modeling single pilot control of multiple UAVs. In M. Vidulich & P. Tsang (Eds.), Proceedings of the 16th International Symposium on Aviation Psychology. Dayton, OH: Wright State University.Find this resource:

Wickens, C. D., & Carswell, C. M. (1995). The proximity compatibility principle: Its psychological foundation and relevance to display design. Human Factors, 37(3), 473–494.Find this resource:

Wickens, C. D., & Colcombe, A. (2007). Performance consequences of imperfect alerting automation associated with a cockpit display of traffic information. Human Factors49, 564–572.Find this resource:

Wickens, C. D., & Dixon, S. R. (2007). The benefits of imperfect automation: A synthesis of the literature. Theoretical Issues in Ergonomics Sciences, 8(3), 201–212.Find this resource:

Wickens, C. D., Dixon, S., Goh, J., & Hammer, B. (2005). Pilot dependence on imperfect diagnostic automation in simulated UAV flights: An attentional visual scanning analysis. In M. Vidulich & P. Tsang (Eds.), 13th International Symposium on Aviation Psychology, Wright-Patterson AFB, Dayton OH.Find this resource:

Wickens, C. D., Goh, J., Helleberg, J., Horrey, W., & Talleur, D. A. (2003). Attentional models of multi-task pilot performance using advanced display technology. Human Factors, 45(3), 360–380.Find this resource:

Wickens, C. D., & Hollands, J. (2000). Engineering psychology and human performance (3rd ed.). Upper Saddle River NJ: Prentice Hall.Find this resource:

Wickens, C. D, Hollands, J., Banbury, S., & Parasuraman, R. (2012). Engineering psychology and human performance (4th Ed). Upper Saddle River, NJ: Pearson.Find this resource:

Wickens, C. D., Hooey, B. L., Gore, B. F., Sebok, A., & Koenicke, C. S. (2009). Identifying black swans in NextGen: Predicting human performance in off-nominal conditions. Human Factors, 51, 638–651.Find this resource:

Wickens, C. D., & Horrey, W. (2009). Models of attention, distraction and highway hazard avoidance. In M. Regan, J. D. Lee, & K. L. Young, (Eds.), Driver distraction: Theory, effects and mitigation. (pp. 57–72). Boco Ratan, Florida: CRC Press.Find this resource:

Wickens, C. D., & McCarley, J. M. (2008). Applied attention theory. Boca Raton, FL: CRC Press.Find this resource:

Wickens, C. D., McCarley, J. S., Alexander, A. L., Thomas, L. C., Ambinder, M., & Zheng, S. (2008). Attention-situation awareness (A-SA) model of pilot error. In D. Foyle & B. Hooey (Eds.), Human performance models in aviation. Boca Raton, FL: Taylor & Francis.Find this resource:

Wickens, C. Prinet, J. Hutchins, S., Sarter, N. & Sebok A (2011). Auditory-visual redundancy in vehicle control interruptions. Two meta analyses. In Proceedings 2011 annual (p. 56) meeting of the Human Factors & Ergonomcs Society. Santa Monica, Calif.: Human Factors.Find this resource:

Wickens, C. Rice, S. Keller, M. D.|Hutchins, S., Hughes, J., & Klayton, K. (2009). False alerts in air traffic control conflict alerting system: Is there a cry wolf effect? Human Factors, 51, 446–462.Find this resource:

Wickens, C. D., Li H., Santamaria, A., Sebok, A., & Sarter, N. (2010). Stages & levels of automation: An integrated meta-analysis. In Proceedings of the 2010 Conference of the Human Factors & Ergonomics Society. Santa Monica: Human Factors and Ergonomics Society.Find this resource:

Wickens, C. D., & Seidler, K. (1997). Information access in a dual task context. Journal of Experimental Psychology: Applied, 3, 196–215.Find this resource:

Wickens, C. D., Ververs, P., & Fadden, S. (2004). Head-up display design. In D. Harris (Ed.), Human factors for civil flight deck design (pp. 103–140). Aldershot, England: Ashgate.Find this resource:

Wickens, C. D., Vincow, M. Schopper, A., & Lincoln, J. (1997). Human performance models for display design. Wright Patterson AFB: Crew Stations Ergonomics Information Analysis Center SOAR.Find this resource:

Wolfe, J. M. (1994). Guided search 2.0: A revised model of visual search. Psychonomic Bulletin and Review, 1, 202–238.Find this resource:

Wolfe, J. M. (2007). Guided search 4.0: Current progress with a model of visual search. In W. D. Gray (Ed.), Integrated models of cognitive systems (pp. 99–119). New York: Oxford University Press.Find this resource:

Wolfe, J. M., & Horowitz, T. S. (2004). What attributes guide the deployment of visual attention and how do they do it? Nature Neuroscience, 5, 1–7.Find this resource:

Wolfe, J. M., Horowitz, T. S., & Kenner, N. M. (2005). Rare items often missed in visual searches. Nature, 435, 439–440.Find this resource:

Woods, D. D. (1984). Visual momentum: A concept to improve the coupling of person and computer. International Journal of Man-Machine Studies21, 229–244.Find this resource:

Xiao, Y., Seagull, F. J., Nieves-Khouw, F., Barczak, N., & Perkins, S. (2004). Organizational-historical analysis of the “failure to respond to alarm” problems. IEEE Transactions on Systems, Man, and Cybernetics. Part A. Systems and Humans, 34, 772–778.Find this resource:

Yantis, S. (1993). Stimulus-driven attentional capture. Current Directions in Psychological Sciences, 2, 156–161.Find this resource:

Yeh, M., & Wickens, C. D. (2001a). Display signaling in augmented reality: The effects of cue reliability and image realism on attention allocation and trust calibration. Human Factors, 43(3), 355–365.Find this resource:

Yeh, M., & Wickens, C. D. (2001b). Attentional filtering in the design of electronic map displays: A comparison of color-coding, intensity coding, and decluttering techniques. Human Factors, 43(4), 543–562.Find this resource:

Yeh, M., Merlo, J. L., Wickens, C. D., & Brandenburg, D. L. (2003). Head up versus head down: The costs of imprecision, unreliability, and visual clutter on cue effectiveness for display signaling. Human Factors, 45(3), 390–407.Find this resource:Christopher D. Wickens

Christopher Wickens, Human Factors, University of Illinois, Urbana-Champaign, Urbana-Champaign, IL

Posted in Uncategorized | Leave a comment

Translation: The post-pandemic slump – Michael Roberts

 

The post-pandemic slump

The coronavirus pandemic marks the end of the longest US economic expansion on record (128 months), and it will feature the sharpest economic contraction since WWII.


The global economy was facing the worst collapse since the second world war as coronavirus began to strike in March, well before the height of the crisis, according to the latest Brookings-FT tracking index.


(Chart: blue line, advanced capitalist economies; pink line, emerging economies.)

2020 will be the first year of falling global GDP since WWII; the only previous fall in world output came in the final years of the war and its immediate aftermath.


JPMorgan economists reckon that the pandemic could cost the world at least $5.5 trillion in lost output over the next two years, greater than the annual output of Japan. And that would be lost forever.  That’s almost 8% of GDP through the end of next year. The cost to developed economies alone will be similar to that in the recessions of 2008-2009 and 1974-1975.  Even with unprecedented levels of monetary and fiscal stimulus, GDP is unlikely to return to its pre-crisis trend until at least 2022.
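As a quick sanity check on those magnitudes (the denominators below are my own illustrative assumptions, not JPMorgan figures):

lost_output = 5.5e12   # JPMorgan's estimated two-year loss in world output
world_gdp = 70e12      # assumed annual world GDP (illustrative)
japan_gdp = 5.1e12     # assumed annual Japanese GDP (illustrative)
print(f"share of one year's world GDP: {lost_output / world_gdp:.1%}")  # ~7.9%
print(f"larger than Japan's annual output: {lost_output > japan_gdp}")  # True

On those assumptions, the loss does come out close to the "almost 8% of GDP" quoted.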

The Bank for International Settlements has warned that disjointed national efforts could lead to a second wave of cases, a worst-case scenario that would leave US GDP close to 12% below its pre-virus level by the end of 2020.  That’s way worse than in the Great Recession of 2008-9.


The US economy will lose 20m jobs, according to estimates from @OxfordEconomics, sending the unemployment rate soaring by the greatest degree since the Great Depression and severely affecting 40% of jobs.


And then there is the situation for the so-called ‘emerging economies’ of the ‘Global South’.  Many of these are exporters of basic commodities (like energy, industrial metals and agro foods) which, since the end of the Great Recession, have seen prices plummet.

And now the pandemic is going to intensify that contraction.  Economic output in emerging markets is forecast to fall 1.5% this year, the first decline since reliable records began in 1951.


The World Bank reckons the pandemic will push sub-Saharan Africa into recession in 2020 for the first time in 25 years. In its Africa Pulse report the bank said the region’s economy will contract 2.1%-5.1% from growth of 2.4% last year, and that the new coronavirus will cost sub-Saharan Africa $37 billion to $79 billion in output losses this year due to trade and value chain disruption, among other factors. “We’re looking at a commodity-price collapse and a collapse in global trade unlike anything we’ve seen since the 1930s,” said Ken Rogoff, the former chief economist of the IMF.

 


More than 90 ‘emerging’ countries have inquired about bailouts from the IMF—nearly half the world’s nations—while at least 60 have sought to avail themselves of World Bank programs. The two institutions together have resources of up to $1.2 trillion that they have said they would make available to battle the economic fallout from the pandemic, but that figure is tiny compared with the losses in income, GDP and capital outflows.

Since January, about $96 billion has flowed out of emerging markets, according to data from the Institute of International Finance, a banking group.  That’s more than triple the $26 billion outflow during the global financial crisis of a decade ago.  “An avalanche of government-debt crises is sure to follow,” said Rogoff, “and the system just can’t handle this many defaults and restructurings at the same time.”

 


Nevertheless, optimism reigns in many quarters that once the lockdowns are over, the world economy will bounce back on a surge of released ‘pent-up’ demand.  People will be back at work, households will spend like never before and companies will take on their old staff and start investing for a brighter post-pandemic future.


As the governor of the Bank of (tiny) Iceland put it:  “The money that is now being saved because people are staying at home won’t disappear – it will drip back into the economy as soon as the pandemic is over.  Prosperity will be back.”  This view was echoed by the helmsman of the largest economy in the world.  US Treasury Secretary Mnuchin bravely declared: “This is a short-term issue. It may be a couple of months, but we’re going to get through this, and the economy will be stronger than ever.”

Former Treasury Secretary and Keynesian guru, Larry Summers, was in tentative concurrence: “the recovery can be faster than many people expect because it has the character of the recovery from the total depression that hits a Cape Cod economy every winter or the recovery in American GDP that takes place every Monday morning.”  In effect, he was saying that the US and world economy was like Cape Cod out of season; just ready to open in the summer without any significant damage to businesses during the winter.


That’s some optimism.  For when these optimists talk about a quick V-shaped recovery, they are not recognising that the COVID-19 pandemic is not generating a ‘normal’ recession and that it is hitting not just a single region but the entire global economy.  Many companies, particularly smaller ones, will not return after the pandemic.  Before the lockdowns, anywhere between 10% and 20% of firms in the US and Europe were barely making enough profit to cover running costs and debt servicing. These so-called ‘zombie’ firms may have found the Cape Cod winter the last nail in their coffins.  Already several middling retail and leisure chains have filed for bankruptcy, and airlines and travel agencies may follow.  Large numbers of shale oil companies are also under water (not oil).


As leading financial analyst Mohamed El-Erian concluded: “Debt is already proving to be a dividing line for firms racing to adjust to the crisis, and a crucial factor in a competition of survival of the fittest. Companies that came into the crisis highly indebted will have a harder time continuing. If you emerge from this, you will emerge to a landscape where a lot of your competitors have disappeared.”

So it’s going to take a lot longer to return to previous output levels after the lockdowns.  Nomura economists reckon that Eurozone GDP is unlikely to exceed its Q4 2019 level until 2023!


And remember, as I explained in detail in my book The Long Depression, after the Great Recession there was no return to previous trend growth whatsoever. When growth resumed, it was at a slower rate than before.


(Chart: real US per capita GDP against its 1947–2007 trend.)

Since 2009, US per capita GDP annual growth has averaged 1.6%.  At the end of 2019, per capita GDP was 13% below trend growth prior to 2008. At the end of the 2008–2009 recession it was 9% below trend. So, despite a decade-long expansion, the US economy fell further below trend since the Great Recession ended. The gap is now equal to $10,200 per person—a permanent loss of income.  And now Goldman Sachs is forecasting a drop in per capita GDP that would wipe out all the gains of the last ten years!
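Those trend-gap figures can be roughly reconstructed from the numbers in the paragraph above; the only input I am adding is an assumed 2019 US per capita GDP of about $65,000:

below_2009 = 0.09           # share below trend at the end of the 2008-09 recession (from the text)
below_2019 = 0.13           # share below trend at the end of 2019 (from the text)
years = 10
# Implied average annual growth shortfall versus trend over the decade:
annual_lag = ((1 - below_2009) / (1 - below_2019)) ** (1 / years) - 1
per_capita_2019 = 65_000    # assumed 2019 US per capita GDP (illustrative)
dollar_gap = per_capita_2019 * (1 / (1 - below_2019) - 1)
print(f"implied annual shortfall vs trend: {annual_lag:.2%}")   # ~0.45% per year
print(f"implied income gap per person: ${dollar_gap:,.0f}")     # ~$9,700

On that assumption the implied gap comes out near $9,700, in the same ballpark as the $10,200 quoted.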


Then there is world trade.  Growth in world trade has been barely equal to growth in global GDP since 2009 (blue line), way below its rate prior to 2009 (dotted line).  Now trade is set to fall below even that lower trajectory (yellow dotted line).  The World Trade Organisation sees no return to even this lower trajectory for at least two years.

 


But what about the humungous injections of credit and loans being made by the central banks around the world, and the huge fiscal stimulus packages from governments globally?  Won’t that turn things round quicker?  Well, there is no doubt that central banks and even the international agencies like the IMF and the World Bank have jumped in to inject credit through the purchases of government bonds, corporate bonds, student loans, and even ETFs on a scale never seen before, even during the global financial crisis of 2008-9.  The Federal Reserve’s treasury purchases are already racing ahead of previous quantitative easing programmes.


And the fiscal spending approved by the US Congress last month dwarfs the spending programme during the Great Recession.


I have made an estimate of the size of credit injections and fiscal packages globally announced to preserve economies and businesses.  I reckon it has reached over 4% of GDP in fiscal stimulus and another 5% in credit injections and government guarantees. That’s twice the amount in the Great Recession, with some key countries ploughing in even more to compensate workers put out of work and small businesses closed down.
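For a rough dollar sizing of those shares (assuming, as my own illustrative figure, annual world GDP of about $85 trillion):

world_gdp = 85e12            # assumed annual world GDP (illustrative)
fiscal = 0.04 * world_gdp    # "over 4% of GDP" in fiscal stimulus
credit = 0.05 * world_gdp    # "another 5%" in credit injections and guarantees
print(f"fiscal ~ ${fiscal / 1e12:.1f}T; credit ~ ${credit / 1e12:.2f}T; "
      f"total ~ ${(fiscal + credit) / 1e12:.2f}T")   # ~$3.4T + ~$4.25T = ~$7.65T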


These packages go even further in another way. Straight cash handouts by the government to households and firms are in effect what the infamous free market monetarist economist Milton Friedman called ‘helicopter money’, dollars to be dropped from the sky to save people.  Forget the banks; get the money directly into the hands of those who need it and will spend.


Post-Keynesian economists who have pushed for helicopter money, or people’s money, are thus vindicated.


In addition, an idea that until now was rejected and dismissed by mainstream economic policy has suddenly become highly acceptable: fiscal spending financed not by issuing more debt (government bonds) but by simply ‘printing money’, i.e. the Fed or the Bank of England deposits money in the government’s account for it to spend.
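To make the mechanism concrete, here is a toy sketch in Python of the two financing routes (a minimal illustration under my own stylised assumptions, not a description of actual central bank operations):

spend = 100

# Route 1: bond financing. The Treasury sells bonds to the private sector,
# then spends the proceeds back into it.
private = {"deposits": 500, "bonds": 0}
bonds_outstanding = 0
bonds_outstanding += spend
private["bonds"] += spend
private["deposits"] -= spend      # savers pay for the new bonds...
private["deposits"] += spend      # ...and the spending returns to them as income
# Net: deposits unchanged at 500; the private sector now also holds 100 of bonds.

# Route 2: monetary financing. The central bank credits the government's
# account with newly created money; no bond is sold to the public.
govt_account_at_cb = spend        # new base money is simply credited
private["deposits"] += spend      # the spending lands as brand-new deposits
govt_account_at_cb -= spend       # the account is drawn down as it is spent
# Net: deposits rise to 600 -- the money supply itself has grown.
print(private, bonds_outstanding, govt_account_at_cb)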


Keynesian commentator Martin Wolf, having sniffed at MMT before, now says: “abandon outworn shibboleths. Already governments have given up old fiscal rules, and rightly so. Central banks must also do whatever it takes. This means monetary financing of governments. Central banks pretend that what they are doing is reversible and so is not monetary financing. If that helps them act, that is fine, even if it is probably untrue. …There is no alternative. Nobody should care. There are ways to manage the consequences. Even “helicopter money” might well be fully justifiable in such a deep crisis.”


The policies of Modern Monetary Theory (MMT) have arrived! Sure, this pure monetary financing is supposed to be temporary and limited, but the MMT boys and girls are cock-a-hoop that it could become permanent, as they advocate.  Namely, governments should spend and thus create money, taking the economy towards full employment and keeping it there.  Capitalism will be saved by the state and by modern monetary theory.


I have discussed in detail in several posts the theoretical flaws in MMT from a Marxist view.  The problem with this theory and policy is that it ignores the crucial factor: the social structure of capitalism.  Under capitalism, production and investment are for profit, not for meeting the needs of people.  And profit depends on the ability to exploit the working class sufficiently compared to the costs of investment in technology and productive assets.  It does not depend on whether the government has provided enough ‘effective demand’.

The assumption of the radical post-Keynesian/MMT boys and girls is that if governments spend and spend, it will lead to households spending more and capitalists investing more.  Thus, full employment can be restored without any change in the social structure of an economy (ie capitalism).  Under MMT, the banks would remain in place; the big companies, the FAANGs, would remain untouched; the stock market would roll on.  Capitalism would be fixed with the help of the state, financed by the magic money tree (MMT).

 


(In finance, “FAANG” is an acronym that refers to the stocks of five prominent American technology companies: Facebook (FB), Amazon (AMZN), Apple (AAPL), Netflix (NFLX), and Alphabet (GOOG), formerly known as Google.)

(cock-a-hoop: pleased and proud about something that you have done)

*Note: monetary financing: funding government spending not through taxes or borrowing but by printing new money; a permanent expansion of the central bank’s money supply, with no repayment burden, that covers the deficit through growth of base money rather than interest-bearing debt – Milton Friedman’s ‘helicopter money’.

 

Michael Pettis is a well-known ’balance sheet’ macroeconomist based in Beijing.  In a compelling article, entitled MMT heaven and MMT hell, he takes to task the optimistic assumption that printing money for increased government spending can do the trick.  He says: “the bottom line is this: if the government can spend these additional funds in ways that make GDP grow faster than debt, politicians don’t have to worry about runaway inflation or the piling up of debt. But if this money isn’t used productively, the opposite is true.”

He adds: “creating or borrowing money does not increase a country’s wealth unless doing so results directly or indirectly in an increase in productive investment…  If U.S. companies are reluctant to invest not because the cost of capital is high but rather because expected profitability is low, they are unlikely to respond to the trade-off between cheaper capital and lower demand by investing more.” You can lead a horse to water, but you cannot make it drink.
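One standard way to put Pettis’s ‘GDP grows faster than debt’ condition into symbols (a textbook debt-dynamics identity, not a formula from his article) is the evolution of the debt-to-GDP ratio:

d_{t+1} = \frac{1+i}{1+g}\, d_t + p_t

where d is the debt-to-GDP ratio, i the nominal interest rate on government debt, g nominal GDP growth, and p the primary deficit as a share of GDP.  If the new spending lifts g enough relative to i and to the deficits it creates, d stabilises or falls (MMT heaven); if the money is spent unproductively and g stays low, d ratchets upward (MMT hell).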


I suspect that much of the monetary and fiscal largesse will end up either not being spent but hoarded, or invested not in employees and production, but in unproductive financial assets – no wonder the stock markets of the world have bounced back as the Fed and the other central banks pump in the cash and free loans.


Indeed, even leftist economist Dean Baker doubts the MMT heaven and the efficacy of such huge fiscal spending.  “It is actually possible that we could be seeing too much demand, as a burst of post-shutdown spending outstrips the immediate capacity of the restaurants, airlines, hotels, and other businesses. In that case, we may actually see a burst of inflation, as these businesses jack up prices in response to excessive demand.”  – ie MMT hell.  So he concludes that “generic spending is not advisable at this point.”

Well, the proof of the pudding is in its eating and we shall see.  But the historical evidence that I and others have compiled over the last decade or more, shows that the so-called Keynesian multiplier has limited effect in restoring growth, mainly because it is not the consumer who matters in reviving the economy, but capitalist companies.

And there’s new evidence on the power of the Keynesian multiplier. It has not been one-to-one or more, as often claimed; i.e. an increase in government spending worth 1% of GDP does not lead to a 1% of GDP increase in national output.  Some economists looked at the multiplier in Europe over the last ten years. They concluded that “in contrast to previous claims that the fiscal multiplier rose well above one at the height of the crisis, however, we argue that the ‘true’ ex-post multiplier remained below one.”
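In multiplier notation, that finding is simply:

\Delta Y = k \, \Delta G, \qquad k < 1 \ \text{(ex post)}

so a fiscal injection worth 1% of GDP raised output by less than 1% of GDP, rather than the one-for-one (or larger) response often claimed.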

And there is little reason that it will be higher this time round.  In another paper, some other mainstream economists suggest that a V-shaped recovery is unlikely because “demand is endogenous and affected by the supply shock and other features of the economy. This suggests that traditional fiscal stimulus is less effective in a recession caused by our supply shock. … demand may indeed overreact to the supply shock and lead to a demand-deficient recession” because of “low substitutability across sectors and incomplete markets, with liquidity constrained consumers”, so that “various forms of fiscal policy, per dollar spent, may be less effective”.

 


But what else can we do?  So “despite this, the optimal policy to face a pandemic in our model combines a loosening of monetary policy as well as abundant social insurance.”  And that’s the issue.  If the social structure of capitalist economies is to remain untouched, then all you are left with is printing money and government spending.

 


Perhaps the very depth and reach of this pandemic slump will create conditions where capital values are so devalued by bankruptcies, closures and layoffs that the weak capitalist companies will be liquidated and more successful, technologically advanced companies will take over in an environment of higher profitability.  This would be the classic cycle of boom, slump and boom that Marxist theory suggests.


Former IMF chief and French presidential aspirant, the infamous Dominique Strauss-Kahn, hints at this: “the economic crisis, by destroying capital, can provide a way out. The investment opportunities created by the collapse of part of the production apparatus, like the effect on prices of support measures, can revive the process of creative destruction described by Schumpeter.”


Despite the size of this pandemic slump, I am not sure that sufficient destruction of capital will take place, especially given that much of the bailout funding is going to keep companies, not households, going.  For that reason, I expect that the ending of the lockdowns will not see a V-shaped recovery or even a return to the ‘normal’ (of the last ten years).

In my book, The Long Depression, I drew a schematic diagram to show the difference between recessions and depressions. A V-shaped or a W-shaped recovery is the norm, but there are periods in capitalist history when depression rules. In the depression of 1873-97 (that’s over two decades), there were several slumps in different countries followed by weak recoveries that took the form of a square-root sign, where the previous trend in growth is not restored.
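A minimal numerical sketch of that schematic (the numbers are illustrative, mine rather than from the book):

# A V-shaped recovery rejoins the old trend; a "square-root" recovery
# resumes growth but stays permanently below it.
quarters = range(9)
trend = [100 + 2 * q for q in quarters]                 # pre-crisis trend path
v_shape = [100, 92, 86, 95, 104, 110, 112, 114, 116]    # falls, then rejoins trend
sqrt_shape = [100, 92, 86, 88, 90, 92, 94, 96, 98]      # falls, then grows in parallel
for q in quarters:
    print(q, trend[q] - v_shape[q], trend[q] - sqrt_shape[q])
# The V-shape gap closes to zero; the square-root gap settles at a
# permanent 18-point shortfall -- the "lost" output of a depression.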

The last ten years have been similar to the late 19th century.  And now it seems that any recovery from the pandemic slump will be drawn out and also deliver an expansion that is below the previous trend for years to come.  It will be another leg in the long depression we have experienced for the last ten years.


코로나 19 대유행 이후 경제 슬럼프 – 마이클 로버츠

.

The post-pandemic slump

The coronavirus pandemic marks the end of longest US economic expansion on record, and it will feature sharpest economic contraction since WWII.

코로나 바이러스 팬더믹 때문에 미국 역사상 가장 길었던 경제 팽창 (128개월)이 종지부를 찍었다. 2차 세계대전 이후 가장 가파른 경제 수축이 눈에 띌 것이다.

(*feature: distinctive mark of sth: 어떤 것의 특질을 구성하다. 특성이 그것이다. 중요한 부분이다. )

The global economy was facing the worst collapse since the second world war as coronavirus began to strike in March, well before the height of the crisis, according to the latest Brookings-FT tracking index.

2차 세계대전 이후, 2020년은 최초로 지구 전체  GDP가 하락하는 해가 될 것이다.

역사적으로 전세계  GDP 가 하락하는 년도는 2차 세계대전이 끝나갈 무렵 몇 년간 뿐이었다.

 

 

파란색 그래프는 선진자본주의 국가

분홍색 그래프는 신흥개발국가-이머징 자본주의 국가

2020 will be the first year of falling global GDP since WWII. And it was only the final years of WWII/aftermath when output fell.

JPMorgan economists reckon that the pandemic could cost world at least $5.5 trillion in lost output over the next two years, greater than the annual output of Japan. And that would be lost forever.  That’s almost 8% of GDP through the end of next year. The cost to developed economies alone will be similar to that in the recessions of 2008-2009 and 1974-1975.  Even with unprecedented levels of monetary and fiscal stimulus, GDP is unlikely to return to its pre-crisis trend until at least 2022.

제이피 모건 계산에 따르면, 코로나 팬데믹은 다음 2년간 5.5조 달러 손실을 유발시킬 수 있고, 이는 일본 연간  GDP 보다 더 큰 액수이다.

2021년 말까지   GDP의 8%를 영구히 잃어버린 셈이다. 특히 선진자본주의 국가들에서 손실량은 2008-2009년 금융공황 시기, 1974-75년 경제공황 시기 손실액수와 비슷하다.

GPD 는 최소한 2022년까지는 코로나 위기 전 수준에 이르지 못할 것이다.

 

 BIS ( 국제 조정 은행) 경고에 따르면, 미국이 전국적인 공조체제를 갖추지 못하면 제 2차 코로나 위기가 닥칠 것이고,  GDP가 2020년 말까지 위기 전보다 12% 감소할 것이다. 이는 2008-2009년 금융공황 시기보다 더 악화된 것이다. 미국내 실직자는 2천만명이 될 것이다.

이는 1930년대 대공황 이후 최대수치이고  전체 경제활동인구의 40%에 영향을 미칠 것이다.

이러한 지구 북반부 선진자본주의 국가들 이외에도, 지구 남반부, 신흥개발국가 상황도 문제다.

이들은 에너지, 철강석, 농업 식량 등과 같은 기초 상품 수출국가들이기 때문에, 2008년 금융공황 시기에 이 상품들의 가격 하락을 경험한 바 있다. 지금 현재 코로나 19 팬데믹이 이러한 경기 수축을 강화시키고 있다.  1951년 최초로 측정한 이래, 2020년 신흥개발국가 (이머징 마켓)의 경제생산량은 1.5% 하락할 것이다.

세계은행도 사하라 이남 아프리카 국가들도 지난 25년 동안 최초로 경기침체 국면을 맞을 것으로 내다보고 있다. 세계은행의 ‘아프리카 펄스 리포트’에 따르면, 이 지역 경기침체는 작년 2.4% 성장율과 비교해서, 2.1%~5.1% 정도 수축할 것이다.

코로나 바이러스가 몰고온 무역과 가치 연결망의 붕괴로, 사하라 남부 아프리카 국가들은 370억 달러~ 790억 달러의 손실을 입을 것이다.  전  IMF 수석 경제전문가  Ken Rogoff 1930년 이후, 이러한 상품 가격 하락과 국제 무역 붕괴는 경험해 보지 못했다고 말한다.

90 개 이상 신흥개발국가들은    IMF에 긴급구제금융을 요청한 상태고, 그 중 60%는 세계은행에 돈을 빌리고 있다. 세계은행과 IMF은 위기 타개책으로 1.2조 달러를 준비해 놓고 있다고 했지만, 이 액수는 소득, GDP,  자본 유출량에 비해 지극히 적은 양에 불과하다.

2020년 1월 이후, 개발국가들로부터 960억 달러 자본이 빠져나왔다. 2008년 금융공황 시기에는 260억 달러 자본이 도피했는데, 그 때와 비교해도 현재 3배 이상이다. 켄 로고프에 따르면, 향후 정부 부채는 눈덩이처럼 불어나고, 경제체제가 이러한 수많은 채무불이행과 재구조조정을 감당할 수 없을 것이다.

 

이러한 비관과 달리, 낙관론자들의 견해도 있다. 현재 집단 격리와 폐쇄가 종료되면, 그 동안 억눌린 수요가 분출되어 나오면, 세계 경제는 원래 상태로 되돌아 갈 것이라고 본다.

사람들이 다시 직장에 복귀하게 되고, 가계는 다시 소비를 하게 되고, 회사는 옛날 노동자들을 다시 고용하고, 코로나 19 팬데믹 이후 밝은 미래를 기대하면서 더 많은 투자를 할 것이다.

아이슬랜드 은행장은, 지금 쓰지 않고 있는 돈들이 다 증발하지 않고 있으니까, 팬데믹이 종료되면 그 돈들은 다시 경제를 활성화시킬 것이라고 본다.

세계 경제 강대국의 수장들이 이러한 낙관론을 펼친다.

미 재무장관 음누친도 용감하게 이렇게 말했다. “코로나 위기는 단기적인 현안이다. 몇 개월 정도 지나면 경제는 이전보다 더 활성화될 것이다. ”

래리 서머스 역시, “경기 회복은 예상보다 훨씬 더 빠를 수 있다. 왜냐하면 여름 휴양지로 유명한 케이프 코드 경제가 겨울 시기에는 얼어붙어 버린 총체적 경기침체로부터 회복,  혹은 매주 월요일 아침마다 발생하는 미국   GDP의 회복과 같은 특성을 지니고 있고 있기 때문이다.”

케이프 코드   Cape Cod 지역의 비수기 경기침체에 비유했다.

이러한 낙관론자들이 신속한  V자 회복을 언급하고 있다. 하지만 이러한 낙관론자들이 놓치고 있는 것은, 코로나 19 위기가 한 지역에 타격을 입히는 것이 아니라 전체 지구 경제에 타격을 입히고 있다는 사실이다.

팬데믹 이후에는 경제 규모가 적은 기업들은 회생하지 못할 것이다. 팬데믹 이전에도 미국과 유럽 기업들의 10~20% 는 가까스로 운영비와 대출변제 자금을 마련할 정도로만 이윤을 창출하고 있었다.

속칭 이러한 좀비 기업들은, 겨울 한철 비수기  케이프 코드가 그들의 관에 마지막 못질을 의미할 것이다. 이미 중간 규모 소매 레져 기업들이 부도 신청을 했고, 항공사 여행사들이 연이어 부도 신청을 할 것이다.

셰일 오일 회사 상당수가 이미 지불 불능 상태이다.   금융 분석가 모하메드 엘 에리안 Mohamed El-Erian 에 따르면, “이미 채무가 이번 코로나 19 위기 상황에서 살아 남느냐 마느냐를 결정하는 결정적 요소가 되었다”  “이러한 적자생존 경쟁에서 살아남아 돌아오면, 수많은 당신 경쟁자들은 이미 사라지고 없다는 것을 발견하게 될 것이다”

따라서 자택 감금, 격리 lockdown 이후에 위기 이전 생산 수준에 이르기까지는 상당한 시간이 걸릴 것이다.

일본 노무라 경제 연구소 계산에 따르면, 유로존  GDP는 2023년까지는 Q42019 수준까지 도달하지 못할 것이다.

내 책 “The Long Depression 장기 침체”에서, 2008년 금융공황 이후에 어찌되었든 간에 그 이전 성장 추세로 돌아가지 못했다. 성장이 재개되더라도, 그 이전보다 그 속도는 느릴 것이다.

2009년 이후, 미국 1인당 연간   GDP  성장율은 평균 1.6%였다.  2009년 말에,  1인당  GDP 는 2008년 이전과 비교했을 때, 성장율이 13%나 뒤처져 있었다. 2008-2009 경기침체 말에는 1인당 GDP 성장율이 9% 정도 더 적었다. 2008년 경기침체 이후, 10년 넘게 경기팽창 국면이었지만,

 

 

under water: 좋지 않은 일이 벌어지다:

( used for saying that you should stop thinking about something bad that happened in the past and you should forgive people who did bad things Don’t worry – it’s all just water under the bridge.  )

 

 

 

(*pent up:pent up emotions are strong feelings, for example anger, that you do not express so that they gradually become more difficult to control

)

 

 

 

 

 

The Bank for International Settlements has warned that disjointed national efforts could lead to a second wave of cases, a worst-case scenario that would leave US GDP close to 12% below its pre-virus level by the end of 2020.  That’s way worse than in the Great Recession of 2008-9.

 

The US economy will lose 20m jobs according to estimates from @OxfordEconomicssending unemployment rate soaring by greatest degree since Great Depression and severely affecting 40% of jobs.

And then there is the situation for the so-called ‘emerging economies’ of the ‘Global South’.  Many of these are exporters of basic commodities (like energy, industrial metals and agro foods) which, since the end of the Great Recession have seen prices plummet.

And now the pandemic is going to intensify that contraction.  Economic output in emerging markets is forecast to fall 1.5% this year, the first decline since reliable records began in 1951.

The World Bank reckons the pandemic will push sub-Saharan Africa into recession in 2020 for the first time in 25 years. In its Africa Pulse report the bank said the region’s economy will contract 2.1%-5.1% from growth of 2.4% last year, and that the new coronavirus will cost sub-Saharan Africa $37 billion to $79 billion in output losses this year due to trade and value chain disruption, among other factors. “We’re looking at a commodity-price collapse and a collapse in global trade unlike anything we’ve seen since the 1930s,” said Ken Rogoff, the former chief economist of the IMF.

More than 90 ‘emerging’ countries have inquired about bailouts from the IMF—nearly half the world’s nations—while at least 60 have sought to avail themselves of World Bank programs. The two institutions together have resources of up to $1.2 trillion that they have said they would make available to battle the economic fallout from the pandemic, but that figure is tiny compared with the losses in income, GDP and capital outflows.

Since January, about $96 billion has flowed out of emerging markets, according to data from the Institute of International Finance, a banking group.  That’s more than triple the $26 billion outflow during the global financial crisis of a decade ago.  “An avalanche of government-debt crises is sure to follow”, he said, and “the system just can’t handle this many defaults and restructurings at the same time” said Rogoff.

Nevertheless, optimism reigns in many quarters that once the lockdowns are over, the world economy will bounce back on a surge of released ‘pent-up ‘ demand.  People will be back at work, households will spend like never before and companies will take on their old staff and start investing for a brighter post-pandemic future.

As the governor of the Bank of (tiny) Iceland put it:  “The money that now being saved because people are staying at home won’t disappear – it will drip back into the economy as soon as the pandemic is over.  Prosperity will be back.”  This view was echoed by the helmsman of the largest economy in the world.  US Treasury Secretary Mnuchin spoke bravely that : “This is a short-term issue. It may be a couple of months, but we’re going to get through this, and the economy will be stronger than ever,”

Former Treasury Secretary and Keynesian guru, Larry Summers, was in tentative concurrence: “the recovery can be faster than many people expect because it has the character of the recovery from the total depression that hits a Cape Cod economy every winter or the recovery in American GDP that takes place every Monday morning.”  In effect, he was saying that the US and world economy was like Cape Cod out of season; just ready to open in the summer without any significant damage to businesses during the winter.

That’s some optimism.  For when these optimists talk about a quick V-shaped recovery, they do not recognise that the COVID-19 pandemic is not generating a ‘normal’ recession, and that it is hitting not just a single region but the entire global economy.  Many companies, particularly smaller ones, will not return after the pandemic.  Before the lockdowns, anywhere between 10% and 20% of firms in the US and Europe were barely making enough profit to cover running costs and debt servicing. These so-called ‘zombie’ firms may find the Cape Cod winter the last nail in their coffins.  Already several middling retail and leisure chains have filed for bankruptcy, and airlines and travel agencies may follow.  Large numbers of shale oil companies are also under water (not oil).

As leading financial analyst Mohamed El-Erian concluded: “Debt is already proving to be a dividing line for firms racing to adjust to the crisis, and a crucial factor in a competition of survival of the fittest. Companies that came into the crisis highly indebted will have a harder time continuing. If you emerge from this, you will emerge to a landscape where a lot of your competitors have disappeared.”

So it’s going to take a lot longer to return to previous output levels after the lockdowns.  Nomura economists reckon that Eurozone GDP is unlikely to exceed its Q4 2019 level until 2023!

And remember, as I explained in detail in my book The Long Depression, after the Great Recession there was no return to the previous trend of growth at all. When growth resumed, it was at a slower rate than before.

 

[Chart: US real GDP per capita against the 1947–2007 trend]

Since 2009, US per capita GDP annual growth has averaged 1.6%.  At the end of 2019, per capita GDP was 13% below trend growth prior to 2008. At the end of the 2008–2009 recession it was 9% below trend. So, despite a decade-long expansion, the US economy has fallen further below trend since the Great Recession ended. The gap is now equal to $10,200 per person—a permanent loss of income.  And now Goldman Sachs is forecasting a drop in per capita GDP that would wipe out all the gains of the last ten years!
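
To see how such a gap compounds, here is a back-of-the-envelope sketch in Python. Only the 1.6% actual growth rate and the 9% starting gap come from the figures above; the 2.0% trend rate is my own illustrative assumption:

```python
# Back-of-the-envelope check on the trend-gap arithmetic above.
actual_growth = 0.016   # average US per capita GDP growth since 2009 (from the text)
trend_growth = 0.020    # assumed pre-2008 trend rate (illustrative)
gap = 0.09              # gap below trend at the end of the recession (from the text)

for year in range(2010, 2020):
    # each year the actual path compounds more slowly than the trend path,
    # so the shortfall widens a little further
    gap = 1 - (1 - gap) * (1 + actual_growth) / (1 + trend_growth)

print(f"gap below trend at end-2019: {gap:.1%}")  # ~12.5%, near the 13% cited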

 

Then there is world trade.  Growth in world trade has been barely equal to growth in global GDP since 2009 (blue line), way below its rate prior to 2009 (dotted line).  Now world trade will drop below even that lower trajectory (the dotted yellow line, 2011-2018).  The World Trade Organisation sees no return to even this lower trajectory for at least two years.

[Chart: world trade growth since 2009 (blue line) against the pre-2009 trend (dotted line) and the 2011–2018 trajectory (dotted yellow line)]

But what about the humongous injections of credit and loans being made by central banks around the world, and the huge fiscal stimulus packages from governments globally?  Won’t that turn things round quicker?  Well, there is no doubt that central banks and even the international agencies like the IMF and the World Bank have jumped in to inject credit through the purchases of government bonds, corporate bonds, student loans, and even ETFs on a scale never seen before, even during the global financial crisis of 2008-9.  The Federal Reserve’s treasury purchases are already racing ahead of previous quantitative easing programmes.

 


 

And the fiscal spending approved by the US Congress last month dwarfs the spending programme during the Great Recession.

I have made an estimate of the size of credit injections and fiscal packages globally announced to preserve economies and businesses.  I reckon it has reached over 4% of GDP in fiscal stimulus and another 5% in credit injections and government guarantees. That’s twice the amount in the Great Recession, with some key countries ploughing in even more to compensate workers put out of work and small businesses closed down.
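
As a rough check on what those shares mean in dollars, a minimal sketch, assuming world GDP of about $87 trillion (my figure, not from the text):

```python
# Rough dollar sizing of the global packages; the GDP shares come from the text.
world_gdp = 87e12            # assumed 2019 world GDP in US dollars (illustrative)
fiscal = 0.04 * world_gdp    # fiscal stimulus, over 4% of GDP
credit = 0.05 * world_gdp    # credit injections and government guarantees, ~5%

print(f"fiscal stimulus:       ${fiscal / 1e12:.1f} trillion")  # ~$3.5tn
print(f"credit and guarantees: ${credit / 1e12:.1f} trillion")  # ~$4.3tn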

 


 

 

These packages go even further in another way. Straight cash handouts by the government to households and firms are in effect what the infamous free market monetarist economist Milton Friedman called ‘helicopter money’, dollars to be dropped from the sky to save people.  Forget the banks; get the money directly into the hands of those who need it and will spend.

Post-Keynesian economists who have pushed for helicopter money, or people’s money, are thus vindicated.

In addition, an idea until now rejected and dismissed by mainstream economic policy has suddenly become highly acceptable: fiscal spending financed not by issuing more debt (government bonds), but simply by ‘printing money’, ie the Fed or the Bank of England deposits money in the government’s account for it to spend.

Keynesian commentator Martin Wolf, having sniffed at MMT before, now says: “abandon outworn shibboleths. Already governments have given up old fiscal rules, and rightly so. Central banks must also do whatever it takes. This means monetary financing of governments. Central banks pretend that what they are doing is reversible and so is not monetary financing. If that helps them act, that is fine, even if it is probably untrue. …There is no alternative. Nobody should care. There are ways to manage the consequences. Even “helicopter money” might well be fully justifiable in such a deep crisis.”

The policies of Modern Monetary Theory (MMT) have arrived! Sure, this pure monetary financing is supposed to be temporary and limited, but the MMT boys and girls are cock-a-hoop that it could become permanent, as they advocate: namely, governments should spend and thus create money, take the economy towards full employment and keep it there.  Capitalism will be saved by the state and by modern monetary theory.

I have discussed in detail in several posts the theoretical flaws in MMT from a Marxist view.  The problem with this theory and policy is that it ignores the crucial factor: the social structure of capitalism.  Under capitalism, production and investment are for profit, not for meeting the needs of people.  And profit depends on the ability to exploit the working class sufficiently compared to the costs of investment in technology and productive assets.  It does not depend on whether the government has provided enough ‘effective demand’.

The assumption of the radical post-Keynesian/MMT boys and girls is that if governments spend and spend, it will lead to households spending more and capitalists investing more.  Thus full employment can be restored without any change in the social structure of the economy (ie capitalism).  Under MMT, the banks would remain in place; the big companies, the FAANGs, would remain untouched; the stock market would roll on.  Capitalism would be fixed with the help of the state, financed by the magic money tree (MMT).

Michael Pettis is a well-known ‘balance sheet’ macroeconomist based in Beijing.  In a compelling article entitled MMT heaven and MMT hell, he takes to task the optimistic assumption that printing money for increased government spending can do the trick.  He says: “the bottom line is this: if the government can spend these additional funds in ways that make GDP grow faster than debt, politicians don’t have to worry about runaway inflation or the piling up of debt. But if this money isn’t used productively, the opposite is true.”

He adds: “creating or borrowing money does not increase a country’s wealth unless doing so results directly or indirectly in an increase in productive investment…  If U.S. companies are reluctant to invest not because the cost of capital is high but rather because expected profitability is low, they are unlikely to respond to the trade-off between cheaper capital and lower demand by investing more.” You can lead a horse to water, but you cannot make it drink.
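
Pettis’s ‘heaven or hell’ condition can be restated in standard debt arithmetic: the debt-to-GDP ratio d evolves as d' = d(1+r)/(1+g) + p, where r is the interest rate, g nominal growth and p the primary deficit. A minimal sketch with illustrative numbers (mine, not Pettis’s):

```python
# Debt-to-GDP dynamics: d' = d * (1 + r) / (1 + g) + p
def debt_path(d0, r, g, p, years=20):
    d = d0
    for _ in range(years):
        d = d * (1 + r) / (1 + g) + p
    return d

# "MMT heaven": the spending makes GDP grow faster than debt (g > r),
# so the ratio stabilises even with a persistent primary deficit
print(f"g > r: debt/GDP after 20y = {debt_path(1.0, r=0.02, g=0.04, p=0.03):.2f}")  # ~1.18
# "MMT hell": the money is not used productively (r > g); the ratio keeps climbing
print(f"r > g: debt/GDP after 20y = {debt_path(1.0, r=0.04, g=0.02, p=0.03):.2f}")  # ~2.20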

I suspect that much of the monetary and fiscal largesse will end up either not being spent but hoarded, or invested not in employees and production, but in unproductive financial assets – no wonder the stock markets of the world have bounced back as the Fed and the other central banks pump in the cash and free loans.

Indeed, even leftist economist Dean Baker doubts the MMT heaven and the efficacy of such huge fiscal spending.  “It is actually possible that we could be seeing too much demand, as a burst of post-shutdown spending outstrips the immediate capacity of the restaurants, airlines, hotels, and other businesses. In that case, we may actually see a burst of inflation, as these businesses jack up prices in response to excessive demand.”  – ie MMT hell.  So he concludes that “generic spending is not advisable at this point.”

Well, the proof of the pudding is in the eating, and we shall see.  But the historical evidence that I and others have compiled over the last decade or more shows that the so-called Keynesian multiplier has limited effect in restoring growth, mainly because it is not the consumer who matters in reviving the economy, but capitalist companies.

And there’s new evidence on the power of the Keynesian multiplier. It has not been one-to-one or more, as often claimed; ie a rise in government spending worth 1% of GDP does not deliver a 1% of GDP increase in national output.  Some economists looked at the multiplier in Europe over the last ten years. They concluded that “in contrast to previous claims that the fiscal multiplier rose well above one at the height of the crisis, however, we argue that the ‘true’ ex-post multiplier remained below one.”
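
For illustration, the arithmetic behind a sub-one multiplier: output rises by dY = m·dG, where the textbook closed-economy value of m is 1/(1 − MPC) and leakages shrink the ex-post value. The numbers below are illustrative, not estimates for any actual economy:

```python
# dY = m * dG: what the size of the multiplier means for output.
def output_gain(multiplier, stimulus_pct_gdp):
    return multiplier * stimulus_pct_gdp

textbook_m = 1 / (1 - 0.6)  # MPC of 0.6 gives the classic m = 2.5
expost_m = 0.8              # a "true" ex-post multiplier below one, as the study argues

for m in (textbook_m, expost_m):
    print(f"m = {m:.1f}: spending of 1% of GDP -> output +{output_gain(m, 1.0):.1f}% of GDP")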

And there is little reason to think it will be higher this time round.  In another paper, some other mainstream economists suggest that a V-shaped recovery is unlikely because “demand is endogenous and affected by the supply shock and other features of the economy. This suggests that traditional fiscal stimulus is less effective in a recession caused by our supply shock.”  Demand may indeed overreact to the supply shock and lead to a demand-deficient recession because of “low substitutability across sectors and incomplete markets, with liquidity constrained consumers”, so that “various forms of fiscal policy, per dollar spent, may be less effective”.

But what else can we do?  So “despite this, the optimal policy to face a pandemic in our model combines a loosening of monetary policy as well as abundant social insurance.”  And that’s the issue.  If the social structure of capitalist economies is to remain untouched, then all you are left with is printing money and government spending.

Perhaps the very depth and reach of this pandemic slump will create conditions where capital values are so devalued by bankruptcies, closures and layoffs that the weak capitalist companies will be liquidated and more successful, technologically advanced companies will take over in an environment of higher profitability.  This would be the classic cycle of boom, slump and boom that Marxist theory suggests.

Former IMF chief and French presidential aspirant, the infamous Dominique Strauss-Kahn, hints at this: “the economic crisis, by destroying capital, can provide a way out. The investment opportunities created by the collapse of part of the production apparatus, like the effect on prices of support measures, can revive the process of creative destruction described by Schumpeter.”

Despite the size of this pandemic slump, I am not sure that sufficient destruction of capital will take place, especially given that much of the bailout funding is going to keep companies, not households, going.  For that reason, I expect that the ending of the lockdowns will not see a V-shaped recovery or even a return to the ‘normal’ (of the last ten years).

In my book, The Long Depression, I drew a schematic diagram to show the difference between recessions and depressions. A V-shaped or a W-shaped recovery is the norm, but there are periods in capitalist history when depression rules. In the depression of 1873-97 (that’s over two decades), there were several slumps in different countries followed by weak recoveries that took the form of a square-root sign, where the previous trend in growth is not restored.

The last ten years have been similar to the late 19th century.  And now it seems that any recovery from the pandemic slump will be drawn out and also deliver an expansion that is below the previous trend for years to come.  It will be another leg in the long depression we have experienced for the last ten years.


Park Chung-hee hints at entering the presidential race, June 5, 1962

Park Chung-hee signaled that he would run in the presidential election, almost a year after the May 16 military coup.

The press manipulation of Lee Hu-rak (chief of the Supreme Council’s public information office) was masterly:

it was reported that citizens were writing hundreds of letters a day urging Park Chung-hee to run for president.

[Newspaper clipping, June 5, 1962: Park Chung-hee hints at a presidential run after the transfer to civilian rule; Lee Hu-rak]


Revenue and its Sources: Die Vulgärökonomie (Theories of Surplus Value)

My copy of “Theorien über den Mehrwert”, taken over from one of my colleagues in S.K., has fallen apart over time.

[893] Karl Marx, Theorien über den Mehrwert, Vierter Band des “Kapitals”, MEW 26-2, Beilagen, p. 448.

|| 893] Die vollständige Versachlichung, Vermehrung (Verkehrung) und Verrücktheit des Kapitals als zinstragendes Kapital – worin jedoch nur die innre Natur der kapitalistischen Produktion, [ihre] Verrücktheit, in handgreiflichster Form erscheint – ist das Kapital als „Compound interest“ bearing, wo es als ein Moloch erscheint, der die ganze Welt als das ihm gebührende Opfer verlangt, durch ein mysteriöses Fatum jedoch seine gerechten, aus seiner Natur selbst hervorgehenden Forderungen nie befriedigt, stets durchkreuzt sieht.

Interest-bearing capital is the form of capital in which the inner nature and madness of capitalist production are felt most tangibly. And the complete reification, inversion and madness of this interest-bearing capital finds its fullest expression in capital bearing ‘compound interest’. It behaves like the ancient Ammonite god Moloch, demanding the whole world as the sacrifice due to it; yet through a mysterious fate, the just demands springing from its very nature are never satisfied, but constantly frustrated.

Versachlichung: synonymous with Verdinglichung (reification) and Vergegenständlichung (objectification). Relations between people are replaced by relations between things; the relation among persons is effaced and a ‘social’ relation between things is formed in its place. This is Marx’s diagnosis and critique.

Verkehrung: inversion, the reversal of subject and object; essence and appearance are turned upside down, master and guest change places.

Verrücktheit: unvernünftig, geistesgestört, wahnsinnig (crazy, deranged, irrational, unhinged); ‘madness’ comes closest in the broadest sense. Marx regarded the nature of capital as not rational, a judgment that is at once ethical and political.
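
The arithmetic core of Marx’s Moloch image can be made concrete: a claim compounding at interest rate r must eventually outgrow any economy growing at g < r, so its ‘just demands’ can never be met out of output. A minimal sketch with illustrative numbers (mine, not Marx’s):

```python
# A claim compounding at r outruns an economy growing at g whenever r > g.
claim, output = 1.0, 10.0   # capital claim vs. total output, same units
r, g = 0.05, 0.02           # compound interest vs. real growth

for year in range(1, 201):
    claim *= 1 + r
    output *= 1 + g
    if claim > output:
        print(f"after {year} years the compounding claim exceeds total output")
        break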


Why does the <평화노동당> (Peace Labor Party) develop the party? How do we grow the membership? In search of the urban fluid left: the birth of the ‘Hong Banjang’

 

[Image: logo small.jpg — party logo]

 

Why would adopting the <평화노동당> be good for our party? What does the <평화노동당> mean by the worker and the politically free citizen as agents of political practice in the spaces of everyday life, and what is distinctive about its way of organizing? It is precisely the organizing of the ‘urban fluid left’ (the Hong Banjangs).

 

Why does the <평화노동당> emphasize politics in urban space (the spaces of everyday life)? And whom should it organize?

 

 

Looking at the <노동당> (Labor Party) logo, “We are the workers,” one senses that the concept of the worker is being grasped only in terms of homogeneity and unity. In terms of political practice, rather than stressing only that “we are all workers,” it is better to consider difference and identity at the same time. In particular, we must think about politicizing practical activities outside the ‘labor process’, outside the workplace. This is the renewal of labor politics that the <평화노동당> proposes, and it is also why we describe the <평화노동당> as a synthesis of the old left and the new left. In urban space, the sphere of ‘public services’, which exceeds the ‘class’ or ‘labor’ paradigm, is a crucial arena of political struggle (over divergent and conflicting interests: the growth of private assets versus public happiness). The <평화노동당> seeks to discover and cultivate political subjects in this sphere of public services; they may be workers, or they may not.

 

– Why we should focus on the differentiation of the working class rather than its homogeneity

 

 

How should the left interpret the diversification and complexity of capitalist industry, the resulting differentiation of workers, and the segmentation and differentiation not only of classes and strata themselves but of class consciousness? On the ground, who can practice left politics, and through what mediations? What should we do as a left party? As an answer, rather than locating the political subject in the unity or centrality of the working class, I propose that we provisionally define it as the ‘urban fluid radicals’, and set as our party’s political goal making them emerge as political subjects in the sphere of public services in the spaces of everyday life, and organizing them into ‘fluid’ networks.

 

 

In part we are already practicing this. But it needs to be made conceptually sharper and more visible, which is why I use the term ‘urban fluid left’. Put simply, the urban fluid left is the ‘Hong Banjang’ who comes running whenever something happens in the neighborhood: the Hong Banjangs (plural) who put public happiness before the growth of their private wealth.

 

 

This is connected to the new declaration of labor and the renewal of labor politics in the <평화노동당> party-name proposal, https://http://www.newjinbo.org/xe/5985394.

 

 

Our day consists of eight hours of sleep, eight hours of work, and eight hours of activity outside labor. The new declaration of labor addresses the worker’s real freedom within the labor process and the politics of the workplace; outside the labor process, the worker is “a politically free citizen in the spaces of everyday life.”

 

 

The Hong Banjang (the urban fluid left) refers above all to those who practice ‘public happiness’ in social, cultural, political and artistic activities outside labor. Why does the birth of the urban fluid left matter? Why should our party attend to it and build practical plans around it? Because the urban living space is precisely the political space in which the political and class consciousness of workers and of our neighbors is formed, collides, is transformed and changes.

 

 

Since 1997, labor politics has been reduced to trade union politics, and the ‘labor’ imprinted on the public mind amounts to ‘ordinary folk’ fighting over a few economic welfare issues such as wage rises and opposition to layoffs; the conservative media in particular portray workers not as politically free citizens but as ‘dissatisfied economic animals’.

 

 

 

The urban fluid left means collective individuals: free political citizens who, even outside the wage-labor (employer-employee) relation, that is, even without belonging to the same class, form fluid teams around the issues of their living space and act on them.

 

 

If one must compare the urban fluid left with trade unions or labor politics, the point is this: even where political and economic interests rooted in class position are not identical, people can still stand in ‘solidarity’ around public happiness in their living space.

 

 

Let us take ‘public services’ as one example among these spheres.

 

 

Consider the sphere of public services, public happiness, and the role of the urban fluid left (the Hong Banjang). Take the bicycle as a case: a key ecological theme and, in urban planning, inseparable from our daily life.

 

 

The key issue with bicycle lanes is not only the service of expanding them but the question of ‘democracy’ on the road in urban space. Roads, too, embody power relations, and the question arises of how that power is to be distributed. The rights of bus and heavy-truck drivers, car owners, cyclists and pedestrians collide on the road (a sphere of public service). Who resolves these collisions and conflicts, and how? This is where the Hong Banjang, the urban fluid left, is called out: to go and resolve the conflict.

 

 

In this family of problems, it is not a particular class defined by the labor or class paradigm, but neighbors who care about ‘road democracy, the democratic distribution of power over the road’, who come together and do ‘politics’.

 

 

The value of the <평화노동당> lies precisely in aiming to produce politically free citizens and people of culture in the spaces of everyday life: to produce and organize an urban fluid left that will run anywhere, not to protect or accumulate private property, but for the sake of public happiness.

 

 

The kinds of public happiness in everyday life are countless and diverse. Organization therefore should not be fixed by the class paradigm or by ‘labor centrality’, but built ‘fluidly’, issue by issue. What is the party to do? Its task is to bind these fluid urban leftists into a network. That is the <평화노동당>’s secret for growing its membership.

 

 

The party must be alert and quick on each issue, and agile in diagnosing and solving the problems of public happiness.

 

 

1. Discovering politics in ‘public services’ and the sphere of public happiness in everyday life

 

Are the bicycle lanes in your own neighborhood adequate? How are the rights of cars, buses, bicycles and pedestrians being realized? There must be people who propose, monitor and act on these questions.

 

On the road: 1) cars, 2) buses, 3) bicycle lanes, 4) footpaths… the rights can collide. This is the terrain of politics.

 

[Image: 자전거 도로 권력 분배 1.jpg — distributing power over the road among cars, buses, bikes and pedestrians]

2. The urban fluid left must intervene in politics from the stage of urban road design onward.

It must practice road democracy and fight for the democratic distribution of power over the road.

 

 

[Image: 자전거 도로 권력 분배 3.jpg — road design and the distribution of road power]

 

3. Power resides in awakened consciousness... The mural is striking. (Residents taking part in their neighborhood’s urban space.)

 

 

[Image: 자전거 도로 깨어있는 시민들 속에 권력이 있다.jpg — bike-lane mural: ‘power lies with awakened citizens’]

 

 

 


Supporters of the <노동당> as seen from the <평화노동당>: it is a pity that the party’s own role, its political tasks, are not clearly brought out. Let us clarify the division of roles between the KCTU (the unions) and the party.

 

[Image: logo small.jpg — party logo]

 

As a party, we need a new declaration on labor politics — not the tautology “We are the workers”!

 

 

I cannot treat in depth here the traditionally debated relation between party and trade union. But the <노동당> supporters posting on the party bulletin board share one common trait: they either do not discuss the division of roles between party and union, or do not deal in any depth with the role the party itself should play.

 

 

In the writings of <노동당> supporters, the part about the party’s own role is missing or thin; most of what they discuss is the search for an alternative union movement, the theme of the recent volume Neoliberalism and the Labor Regime in Korea: The Labor Movement’s Dilemmas and Search for a Way (임영일 et al., 노동의 지평, 2013). It is the same theme treated by 정일부 (deputy director of the Korea Labor Movement Institute), who wrote chapter 9, ‘In Search of an Alternative Union Movement’. What 정일부 does and what our party does cannot be mechanically separated. But failing to state our party’s own position on ‘labor politics’ is failing in our political duty as a party.

 

 

Those who support the <노동당> need not share a logically consistent common position, but it is hard to find any common element in their answers to the question: why must it be the <노동당>, of all names?

 

 

To take one example, the main thrust of 남종석’s piece, rather than explaining why it must be the <노동당>, merely sets a direction for the political movement at the abstract level of saying that the New Progressive Party (진보신당) must drive the people’s movement. On what our party should do as a left party, he writes: “The party name (노동당) can sharpen the party’s class identity by making clear where our identity and base lie.” And he argues that the New Progressive Party must actively ‘intervene’ in the KCTU and in workers’ politics.

 

 

 

The problem with these pro-<노동당> pieces is that they merely declare a will: since the New Progressive Party currently lacks political strength, let us set out to find the workers, let us take a new journey. What is missing is any account of what role our party should actually play as a political party, and any view on how the division of roles between party and union should be arranged. Of course we must continually set out to find new worker subjects; that goes without saying, and we must open our doors.

 

 

Everyone knows that the New Progressive Party is not at present the center of the Korean left and progressive movement, nor its acknowledged leading core. To realize 남종석’s stated intention of “thinking the take-off of a new progressive left,” we must not merely repeat the tautology “we are all workers,” but argue what the content of the New Progressive Party’s ‘new labor politics’ as a political party should be, and how it must differ from the past.

 

 

I believe our party should actively open its doors to the Workers’ Political Party Promotion Committee, the Transformation Group, and progressive civic organizations and individuals, so that as far as possible we work within one party. But to welcome outside guests, or to propose working together, without first making our own party’s political tasks clear, is neither courteous to them nor, in the long run, a political project of genuine unification.

 

 

 

The problem is not that the name <노동당> is old-left; it is that the members proposing it fail to clarify our party’s actual political tasks, above all how to handle relations with the unions and with a confederation like the KCTU (민주노총). The party should of course cooperate with the KCTU and also criticize its limits and problems, but a party need not ignore or usurp the union movement’s own capacity for self-correction. The KCTU’s reform must be carried out by the confederation itself, and the tasks of a left party must not be shifted onto the unions, for the KCTU is not itself composed of socialists or left organizations. Precisely in order to maintain a cooperative relationship with the KCTU, it is time to debate again the difference between party and union and their division of roles.

 

 

 

The table below, somewhat artificially, lays out the division of roles between party and union in order to make the argument above clearer. As discussed in the <평화노동당> proposal’s new declaration of labor politics, I list once more, from the standpoint of this division of roles, the political tasks the New Progressive Party has relatively neglected or fallen short on over the past five years.

 

This is not to say that 남종석 does not know what is in the table. The heart of my concern is this: as we refound the party, we should be asking what our party is to do as a left party, and what it must accomplish to widen its reach and deepen internal unity; yet instead of engagement with the ‘labor’ paradigm, that is, with the role the party itself must play, we get references to ‘labor forces’ and a tautological emphasis on the ‘worker subject’.

 

 

The division of roles between the left party and the trade union

1. Political activity: main tasks

Left party: Across all occupations, diagnose the elements of class and stratum discrimination in redistribution (taxes), distribution (labor income), asset income (buildings, land rent, financial assets) and the means of production, and discover the elements of left politics in them. -> The 16 metropolitan and provincial party chapters draw up plans for political practice based on surveys of the actual conditions of the residents and workers in their regions.

Trade union: Confined to the workplace, the union’s political activity is activity connected with the three basic labor rights (because union members are not all leftists or socialists, and union membership does not require a left political position).

2. Approach to problem-solving: an all-out political struggle in the sphere of law and institutions

Left party: Analyze and criticize how the Korean civil-law system, grounded in capitalism, is used as an instrument of class rule. Resist and respond to the suits for infringement of company property rights, the provisional seizure of workers’ and unions’ assets, and the damage claims that since 1997 have been used openly as tools to repress and crush unions; wage a society-wide battle of public opinion; build a legal support team and sustain the defense of the unions.

Trade union: The workers concerned move to direct action at the company or on the shop floor, such as strikes or sabotage.

3. Opinion formation in the public sphere and political activity in civil society

Left party: To radicalize labor politics and pull it in a left direction, the party must generate everyday ‘labor politics’ opinion within civil society. Defensively, it must respond day by day to the ‘aristocratic union’ ideology of the Chosun-JoongAng-Dong-A press and form a counter-discourse; offensively, it must carry into civil society what the ‘public happiness’ of workers means.

Trade union: To resist conservative anti-union ideology and defend the union’s political activity, unions must practice solidarity with local communities, carrying out programs of solidarity with residents that outdo the corporations’ social-contribution projects (the chaebols’ charity drives).

4. The formation of class consciousness

Left party: Among the features of Korean capitalism, the education system deserves special mention. What blocks the formation of Korean workers’ class consciousness (their self-identity) is the ruling class and vested interests mobilizing private as well as public education to control that formation thoroughly from childhood to the age of twenty. The left party’s task, alongside the practical reform of the education system, is to wage day by day the ideological, intellectual and cultural struggle that can break through this offensive of the ruling class.

Trade union: Joining and being active in a union is not in itself left or socialist political activity, but at union-density levels as low as Korea’s it is a crucial precondition of political activity. Union membership does not automatically give workers a left class consciousness; the three labor rights are formal, procedural (bourgeois) rights, and their realization does not overturn the capitalist system or the power relation between labor and capital. Yet political activity inside the union is a basic schooling in democratic politics and a seed from which left politics can grow.

5. Political competitors

Left party: Within the political arena the left party’s competitors are the Saenuri Party, the Democratic Party and the like. Labor issues are filtered, transformed and distorted as they pass through Saenuri-style and Democratic-style labor politics, which also produce new ‘labor politics’ of their own. The left party’s task is to prepare for the transformations, distortions and new problems that arise when labor issues move into that professional political arena, and to craft political strategy and tactics to match. Beyond legislation, the Saenuri and Democratic ‘labor politics’ projects are carried out through many other channels as well.

Trade union: The union’s adversary is the firm, its owner and the employer. Whether at the level of the individual union or the confederation, the primary adversaries are employers, capitalists and managers. Allies are built, naturally, through solidarity beyond the union: labor lawyers, the sympathy of local residents, public opinion, alliances with political parties. But the adversary to be studied first is the employer and the capitalist.

6. Space, global capital and its geographic movement, the international movement of labor power: building international political solidarity

Left party: The task of the left party’s research institute is to analyze the dynamics of world capitalism and the ruling class’s strategies of rule, and to build labor-politics strategy on that basis. Raising capital’s profit rate is always linked to strategies for destroying unions and workers’ politics. In the thirty years since the mid-1970s, finance capital has long since become independent of industrial capital and come to dominate it in turn; and with China, India and the East Asian countries adopting capitalist market institutions, the global capitalist order and regime of accumulation are changing rapidly. The Korean left must actively build solidarity and standing exchange programs with political movements in other Asian countries.

Trade union: Korean unions and the KCTU must likewise actively seek international solidarity with the unions of other Asian countries. As Korean capital moves abroad while Asian workers move into Korea, the rights of Asian workers and of Korean workers are interlocked.

 

 

 

 

 

 

추공

2013.07.19 17:46:11

The logo is a hit. Haha.

You’ve taken 원시’s problematic, the distinction between union and party, all the way to an actual table; I learn a lot. I do support your <평화노동당>. My one regret is that I think you use the restricted concept of the ‘labor process’ very broadly. The weakest part of Marx’s political economy is that it does not treat reproduction, or the informal sphere, within political economy itself, and I think this labor must be taken seriously. Your concept of labor is broader than the existing labor discourse, but the problematic of the ‘social worker’, to borrow Negri’s term for social services and care work, is hard to see in it. That may be a point where <무지개 사회당> differs, and it is slightly disappointing. As you know, it should also carry the problematic of the excluded, those outside the labor process who cannot become workers even if they want to. Thank you for your interest.

I suffer from congenital praise-deficiency syndrome, so I don’t know how to write kind words. Please understand.


The strength of the <평화노동당>: break up the false opposition of <무지개 사회당> versus <노동당>, assess the ‘past’ fairly and honestly, and design the future!

 

Seen from the <평화노동당> position, the opposition between <무지개 사회당> and <노동당> is partly fictitious, though it also partly reveals real differences over the party’s growth strategy, political line and organizational line. Yet even the latter plays out within narrow limits: because the party’s long- and short-term growth strategy, organizational line, and constitution and rules were not worked through in depth before the name debate, everything is now being poured into the debate over the name, and members are left watching a final with no preliminary rounds.

 

 

Having noted these limits, let me begin the discussion of the party name, and I hope we think about the problem practically. ‘Labor’ is declining under its own inertia and is in some respects isolated; and for the ‘values of the new social movements’ to develop toward the ‘left’ and a ‘socialist orientation’, rather than toward the Democratic Party or Ahn Cheol-soo, and to bear political fruit, will take at least six to seven years, or more than ten. I would like us to view the question under these present conditions.

 

 

The fictitious opposition is this. The <노동당> side argues that labor is not opposed to the diverse political values that <무지개 사회당> speaks of. The <무지개 사회당> side replies that the <노동당> stresses only labor centrality, and that although it declares the scope of ‘worker’ to cover a broad range of occupations, there is nothing new there; in the end, does it not refer to the old-fashioned ‘labor’ of the unions and the KCTU, or to the militant subject of irregular workers’ struggles?

 

 

In fact the <노동당> claim is internally consistent: a party called <노동당>, once in power and forming a government, would not run only the labor ministry, but also the ministries for women, ecology and culture, urban construction, sports, education, the economy, foreign affairs, unification, resources and so on.

 

 

On the other hand, this <노동당> answer is yet another promise about the future, not an assessment of the past political practice of the New Progressive Party, the Democratic Labor Party or the Socialist Party, nor an inward reflection on the crisis or serious self-examination. From the standpoint of the <평화노동당> proposer, the <노동당> answer serves well enough as a corrective lens for some <무지개 사회당> positions, that is, for claims that stress only the diverse ‘rainbow’ while leaving unclear whether the ‘사회당’ part refers to leftists and socialists, to anarchists or liberals, or to Democratic Party supporters. But it is neither a proper answer nor a productive process of debate.

 

 

In the writings published so far, <무지개 사회당> makes the meaning of ‘rainbow’ clear, while ‘social’ remains vague and open to many readings. In the context of Korean political history, the ‘rainbow’ claim is that the contents of the new politics — which never rose to the status of core themes, or were marginalized, within the ‘labor (movement)’ framework of the 1980s-90s, and within the framework of democratization (overthrowing the dictatorship) that <무지개 사회당> supporters tend to leave out — should become the dishes on the table of a 21st-century left party.

 

 

The claim is that the domains of the new politics (women’s liberation and gender equality, anti-war peace, anti-nuclear and ecological movements, education movements, religious movements, disability, sexual minorities, the abolition of racial discrimination, and so on), rather than being marginalized, or serving as window-dressing in the Saenuri Party, the Democratic Party, the Democratic Labor Party and the New Progressive Party alike, or receiving at most a quota slot on the proportional-representation list, should fill the party’s main political work and the political content of a future alternative social model.

 

 

Let us then look at the difference between the contents of the new politics and the old (the two are not neatly separable in content; the distinction is drawn from the standpoint of social movements) and the possibility of their fusion.

 

 

The party name <평화노동당> carries within it the question of how, given Korean realities, we are to fill the party with both contents, new politics and old: the political contents of the new left and the old left.

 

 

The need to renew the labor paradigm:

 

 

There are aspects of the new social movements (women’s liberation, sexual-minority movements, anti-war and anti-nuclear peace movements, the abolition of racial discrimination, etc.) that cannot be explained by the ‘labor’ framework stressed in the traditional left and socialist movements. If so, then either the existing left’s understanding and concept of ‘labor’ needs revising, or we must actively embrace the new progressive values and left contents that the existing ‘labor’ paradigm cannot explain or contain.

 

 

Take a Korean example. The candlelight protests of 2008 had the character of a civic resistance movement, of civil disobedience (resistance to laws and unjust state power within limits that do not reject the constitution itself). Their flashpoints were beef (mad cow disease), food safety, government incompetence, the international politics of the Korea-US FTA, the power and lobbying of US agro-livestock capital behind it, and a Korean sense of sovereignty resisting it. These themes are not social resistance or demonstrations arising from the ‘exploitation’ of manufacturing wage-workers by capitalists in the traditional sense.

 

 

So when we analyze the character and causes of the candlelight protests, or think about our party’s future subjects, merely proclaiming that since everyone who eats beef is a worker, the <노동당> can actively solve all these problems too, does not explain the political tasks above.

 

 

The same goes for the contents of the new politics and the new left abroad. The Black Panther Party, founded in the United States in 1966, was a ‘Black socialist party’ created by Black Americans after the civil rights movement, and its membership approached ten thousand. The problem of racial discrimination is not unrelated to the ‘worker versus capitalist’ opposition (in America, class and race are of course intertwined), but there are domains that cannot be explained by the traditional ‘labor versus capital’ frame alone.

 

 

The same goes for women’s liberation and gender equality. The most frightening words I have heard from a lover were a German woman’s: “I have lived with a Marxist man and been married to a Nazi man, and they were just the same (hardly any difference in patriarchal attitude; only their political views differed).” The <노동당> must not too easily ‘suture’ together ‘labor’ and ‘women’s liberation, ecological values, anti-war and anti-nuclear peace, the abolition of racial discrimination’. If one insists it is not a suture, then one fulfills one’s political duty only by explaining exactly where the joint, the common ground, lies, and why it was not practiced that way in the past.

 

 

Then we must change the question. In the domains of the new left and the new politics, why do workers fail to step forward as subjects of those movements? Why, in our party and in the former Socialist Party, Democratic Labor Party and New Progressive Party, did workers and party members fail to plunge into the new political domains, and why did the non-members practicing the new left contents fail to become members of our party? That is the question we must now ask.

 

From the <평화노동당> standpoint, the fictitious opposition now running inside the party between <무지개 사회당> and <노동당> should be replaced with the question above.

 

 

What, then, is the alternative?

 

First, the <평화노동당>’s renewal of the labor framework.

https://http://www.newjinbo.org/xe/5984159 Rather than ‘worker centrality’, our party should argue that workers must step forward as central political subjects in the spaces of everyday life, and resist the whole total ‘war of capital’s expansion’ in which the logic of capital invades and attacks those spaces (the workplace, the resting place, the playground, the neighborhood we live in). It is a movement that goes beyond wage rises and welfare benefits in company and factory.

 

 

Second, beyond this renewal of the labor paradigm, we must pursue an ‘open door’ policy that actively draws into the party those who practice the diverse progressive values of the social and political domains outside the ‘labor’ framework. We must refuse to treat them as labor’s window-dressing or accessory, or as a way of filling proportional-representation quotas for appearances, and make the party one where the ‘leftists’ who will become the ‘alternative force’ of a future government gather.

 

 

This is how we can build on the political meaning of the candlelight protests (the positive side of a civil-disobedience movement) while overcoming their negative limits (an unclear political direction; stopping at cursing the current government or at abstract slogans of overthrowing the regime; ending, at presidential and general elections, in support for the Democratic Party or forces close to it).
