What Is Attention? by Christopher Wickens


Abstract and Keywords

This chapter describes attention in cognitive engineering and in design through two metaphors: the filter, which selects incoming sensory information for perception, and the fuel, which supports all stages of information processing with a limited supply of resources and therefore limits multi-tasking. We describe applications of the filter to noticing events, alarm design, supervisory control and display layout, display integration, and visual search. We then consider two aspects of multi-task performance: when fuel is available to support concurrent processing, predicted by a multiple resource model, and when task demands are sufficiently high as to force sequential processing, requiring consideration of task and interruption management strategies. Finally, we consider the role of mental workload in automation and situation awareness. Where relevant, the chapter highlights the role of computational models.

Keywords: attention, multi-tasking, interruption management, multiple resources, time-sharing, display integration, visual scanning, alarms, visual search

Fundamentals

What Is Attention?

Attention may be described as one of a fundamental set of limits to human performance (along with, for example, memory or control precision) on the amount of information that can be processed per unit of time. Of use for the current chapter is the consideration of two metaphors of attention, as a filter and as a fuel (Kramer, Wiegmann, & Kirlik, 2007; Wickens & McCarley, 2008). As a filter, it describes the limits and constraints on the sensory systems (particularly the eyes and ears) to accept and process varying events and elements, up to the level of perception, where the meaning of those events is understood. Thus we conventionally describe the filter metaphor as selective attention. As a fuel it describes the limits and constraints on all information processing operations—perception, working memory, decision, and action—to operate concurrently, whether in the service of a single task or in multitasking. That is, attention characterizes a sort of limited mental energy, fuel, or “resource” that facilitates performance of the relevant process. For example, as the worker “tries harder” to understand a difficult instruction, he or she may lose focus on monitoring other changing variables in the work environment. Thus we can apply the fuel metaphor to divided attention between tasks and processes.

Importantly, this dichotomy of metaphors can be broken down by the extent to which the two attention operations succeed or fail. We speak, for example, of the success of the filter as guiding attention (often our eyes) to relevant sources of information or events in the world; we speak of failures of selective attention both as failures to notice those events at all, and as distraction: failures to keep attention focused on important information as it is diverted to less important things. We speak of “success” of divided attention when we can multitask effectively, doing two things at once as well as either alone. In contrast, failure of divided attention, a matter of degree, ranges from a small dual task decrement in one or the other of two tasks to a complete abandonment of one of them and postponement of its initiation until the other is completed (serial task switching).

What Is Attention in Design?

At a fundamental level, we conceptualize design from a human factors standpoint as an engineering process, whereby the balance between two measurable constructs, performance and workload, is optimized. This balance is complicated in two respects. First, “performance” is itself multifaceted, and in particular in many systems we consider both routine performance and performance in unexpected or “off-nominal” conditions (Burian, 2007; Wickens, Hooey, Gore, Sebok, & Koenicke, 2009). The former is typically the goal of design, but effective human response to off-nominal unexpected conditions depends upon design that supports accurate situation awareness of the task (and the environment in which the task is being performed) (Burns et al., 2008; Wickens, 2000a). Such design may not necessarily help routine performance and may sometimes even compromise it. The second complication is that workload should not necessarily be minimized for optimal design, but must be preserved within a range in the middle. This chapter addresses the role of attention in characterizing variables of performance, situation awareness, and workload.

Attention Allocation

As we discuss below, attention may be allocated at two different levels. At the highest level, we can speak of attention—the fuel—as allocated to tasks, as tasks may be defined by distinct semi-independent goals (Kirwan & Ainsworth, 1992). Thus the vehicle driver has the task of lane keeping, a second task of navigating, and a third task of dealing with in-vehicle technology (e.g., radio listening, cell phone conversation). Tasks are distinct in this sense in that they usually compete for attentional resources. At the lowest level, we can speak of attention—the filter—as allocated to elements within the environment as well as to internal cognition. Thus, in the vehicle example, the single task of navigation (and higher-level attention directed to the goal of successful navigation) may need to be accomplished by dividing or allocating visual attention (the filter) between a map and the search for landmarks and road signs outside; or between reading a navigation display, recalling the correct option, and placing the fingers on the correct key for menu choice; or between searching for the road signs and rehearsing the route number to be searched for. In our discussion below, we consider both levels of attention.

A Brief History: Single-Channel Theory and Automaticity

There are two concepts, single-channel processing and automaticity, that are fundamental to most findings and theories in attention, and indeed define endpoints on a sort of continuum from attentional failure to attentional success. Both are deeply rooted in the history of the study of attention (James, 1890; Titchener, 1908).

Single-channel theory (Craik, 1947; Welford, 1967; Pashler, 1998; Broadbent, 1958), the more pessimistic view of human attention, underlies the notion that attention can be focused on only one task at a time, as if performing one task so totally occupies the “single channel” of human cognition and information processing that any other task (usually one arriving later or deemed of lesser importance) must wait, unstarted, until the higher-priority task is completed. Its proponents have cited data in which people must perform two tasks of very high demand at once (like reacting in emergency to an unexpected roadway hazard while dialing a cell phone) or perform two tasks that compete for incompatible resources (like reading a paper document and reading a computer screen).

In stark contrast, the more optimistic view, automaticity (James, 1890; Schneider & Shiffrin, 1977; Fitts & Posner, 1963) defines circumstances when a task requires essentially no attention at all; if it has no attention demands, then ample attention (reserve resources) can be allocated to performing other tasks concurrently without decrement. Classic examples here include walking and talking, or driving (lane keeping) and listening to the radio. In both pairs, the first-mentioned task is so “easy” or automated that it requires little attention.

Figure 2.1 Three examples of the performance-resource function.

Single-channel behavior and the perfect time sharing invoked by automaticity of course represent two endpoints on a continuum that can be best defined by the degree of attentional resources necessary to obtain a given level of performance. Such a relation between resources and performance is described by the performance-resource function (PRF; Norman & Bobrow, 1975), three examples of which are shown in Figure 2.1. The graph line at the bottom (A) suggests a task that would invoke single-channel behavior, since full resources must be allocated to obtain perfect performance (or indeed any performance at all). The curve at the top (C) represents an automated task: perfect performance can be obtained with little or no attention. The graph in the middle (B) highlights the continuum between single-channel behavior and automaticity; performance improves up to a point as more resources are allocated to it, but it eventually reaches a level where “trying harder” will not improve performance.
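
To make the continuum concrete, here is a minimal sketch (in Python) of three hypothetical PRFs; the linear-to-ceiling form and the specific gain values are illustrative assumptions, not the functions of Norman and Bobrow (1975):

```python
import numpy as np

def prf(resources, gain):
    """Toy performance-resource function: performance grows with the
    fraction of resources allocated (0-1) at a task-specific gain,
    capped at a ceiling of perfect performance (1.0)."""
    return np.minimum(1.0, gain * np.asarray(resources))

r = np.linspace(0, 1, 11)
curve_a = prf(r, 1.0)   # A: resource-limited; perfect performance only with full resources
curve_b = prf(r, 2.0)   # B: improves with effort, then plateaus (a "data-limited" region)
curve_c = prf(r, 10.0)  # C: automaticity; near-perfect performance with ~10% of resources
```

Under this sketch, the residual capacity (one minus the resources a task needs to reach its ceiling) is what remains available for a concurrent task, which is why task C time-shares well and task A does not.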

Importantly, the transition from A → B → C can reflect either an intrinsic change in the objective difficulty (complexity or demand value) of the task, or a change in the subjective difficulty of the task as rendered across three levels of skill development (e.g., novice, journeyman, expert). Important also is the observation that tasks A and C may be performed at equivalent levels in single task conditions; however, when a concurrent task is added, task A will suffer, but C will not.

In the following pages, we describe several general design issues relevant to attention (or attention issues that can be addressed by design)—the role of the filter in noticing, information access, and search; the role of both the filter and fuel in information integration; the role of the fuel in multitasking that is both parallel and serial; the role of the fuel in mental workload prediction and measurement; and the relationship between workload, situation awareness, and automation. Within each section, we address, where relevant, certain validated computational models that can serve the engineering design community.

Noticing and Alerting

Selective attention as the filter can be seen to “tune” toward certain physical events in the environment, while filtering out others. Designers can capitalize on this by assuring that such tuning is focused on important events. Thus a critical design implication of attention is rendered by the attention-capturing properties of alarms and alerts that will direct operators’ attention to events (and locations) that a designer (and sometimes automation) has deemed to be important. The fundamental basis of this approach lies in the fact that people are not very good monitors for infrequent or unexpected events if these are not highlighted in some way, a phenomenon recently described as change blindness (Carpenter, 2002; Rensink, 2002; Simons & Levin, 1997; St. John & Smallman, 2008; Wickens, Hooey, et al., 2009) or inattentional blindness (Mack & Rock, 1998). The latter is a form of change blindness that occurs when a change is not noticed even when the observer is looking directly at it.

Alert Salience

Research has identified a number of features of warning systems that will capture attention by making events salient (Boot, Kramer, & Becic, 2007; Itti & Koch, 2000). For example, appearances of new “objects” in the scene will capture attention, and onsets (increases in luminance) will be more effective in attention capture than will offsets (decreases in luminance or contrast, or disappearing objects; Yantis, 1993). Whether appearing or disappearing, the noticing or attention-capturing properties of these transients are much better when the visual contrast of the change is larger, when the signal/noise ratio is higher (less clutter around the change event location), when visual or cognitive load is lower, and when the events occur in or near foveal vision rather than in the periphery (McCarley et al., 2009; Wickens et al., 2009; Steelman-Allen, McCarley, & Wickens, 2011; McKee & Nakayama, 1984). This loss in sensitivity with increasing eccentricity is estimated to be approximately 0.8%/degree (McCarley et al., 2009; Wickens, Alexander, et al., 2003). An extreme example of such eccentric presentation is when the to-be-noticed event is not in the visual field at all when it occurs (e.g., the eye is closed in a blink, or the head is turned beyond about 60 degrees away from the changing element). In these instances, referred to as “completed changes” (Rensink, 2002), change is very hard to notice even when fixation is restored to the location of the change.
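
A back-of-envelope sketch of the cited eccentricity effect follows, assuming the ~0.8%/degree decline is linear all the way down (the functional form and the zero floor are simplifying assumptions):

```python
def relative_sensitivity(eccentricity_deg, loss_per_deg=0.008):
    """Sensitivity to a change event relative to foveal viewing,
    declining ~0.8% per degree of visual angle from fixation."""
    return max(0.0, 1.0 - loss_per_deg * eccentricity_deg)

# An alert 30 degrees from current fixation retains ~76% of foveal sensitivity:
print(relative_sensitivity(30.0))  # 0.76
```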

To some extent, the attention-capturing properties of the physical event (measurable, for example, by luminance contrast differences) are also modified by knowledge-driven or cognitive processes. One such process is expectancy. We will better notice events if they are expected (Wickens, Hooey, et al., 2009); for example, if the operator knows that a system is operating near its limits, he or she will more likely expect the warning that those limits have been exceeded and therefore notice the alert when it appears, even if it may not be in foveal vision. A second process is tuning, whereby people are able to “tune” their monitoring to certain event features, to enhance noticing when events contain those features (Most & Astur, 2007; Folk, Remington, & Johnston, 1992; Wolfe & Horowitz, 2004). An obvious case is when the tuned feature is location; people can tune their attention by simply directing their gaze toward the location where an alert is likely to be. But they can also tune attention to be receptive to certain features at a given location: For example, in most cockpit situations, attention is tuned to a red event (e.g., a red light onset) because of the high priority given to red as a warning.

The difference between the attention-capturing processes defined by physical elements in the environment (e.g., signal-noise ratio) and the attention-tuning processes defined by worker expectations illustrates the more general contrast between what are termed “bottom-up” and “top-down” influences on perception. A final, strong effect on attention capture or noticeability is the ongoing non-visual (auditory and cognitive) workload at the time an event occurs (Fougnie & Marois, 2007).

A computational model called N-SEEV (noticing: salience, effort, expectancy, value; Wickens, Hooey, et al., 2009; Steelman-Allen et al., 2011; Wickens, 2012) can be used to predict the likelihood of detecting an event as a combined function of its salience (Itti & Koch, 2000), expectancy, peripheral eccentricity (from foveal vision), and overall workload. However, in the workplace, as opposed to the laboratory, it is often challenging to determine what the eccentricity of a particular event at a given location may be, because the eyes may be scanning many different locations around the workplace. The SEEV model, the second component of the N-SEEV model, predicts the course of this workplace scanning as the context in which the event to be noticed (N) occurs. The SEEV model will be described in a later section.
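
The published N-SEEV equations are given in the cited papers; the sketch below only illustrates the direction of each influence, with an additive combination rule and weights that are hypothetical placeholders:

```python
def p_notice(salience, expectancy, eccentricity_deg, workload,
             w_sal=0.4, w_exp=0.4, k_ecc=0.008, k_load=0.3):
    """Illustrative N-SEEV-style detection estimate: the likelihood of
    noticing an event rises with its salience and expectancy (both 0-1)
    and falls with retinal eccentricity and concurrent workload (0-1).
    Weights are made up for illustration, not fitted parameters."""
    drive = w_sal * salience + w_exp * expectancy        # bottom-up + top-down
    cost = k_ecc * eccentricity_deg + k_load * workload  # eccentricity and load penalties
    return max(0.0, min(1.0, drive - cost))

# A salient, expected alert 10 degrees from fixation under moderate workload:
print(p_notice(salience=0.9, expectancy=0.8, eccentricity_deg=10, workload=0.5))
```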

Beyond the visual modality, there are of course differences in attention-capturing properties between modalities. Most critically, vision is hampered in noticing events because only about 4 square degrees (2 × 2) of a momentary visual field of around 60 × 60 degrees is occupied by foveal vision at any time (only around 0.1%), and noticing degrades rapidly outside of this region. In contrast, events in either the auditory or tactile modality are not much constrained by sensor orientation; they are said to be omni-directional, and so auditory (and more recently tactile) warnings have been validated as superior alerts. However, within these non-visual modalities, issues of bottom-up capture (signal-to-noise ratio) and tuning or expectancy play the same role that they do in vision. As an example, auditory warnings may not be effective in noisy or conversation-rich environments, nor may tactile alerts be in an environment with extensive physical activity (e.g., a soldier crashing through heavy timber).

Nevertheless, a meta-analysis of noticing events within a visual workplace indicates that the auditory and tactile modalities are approximately 15% more effective (faster, more accurate) in capturing attention than are visual interrupting events, even when the latter are adjacent (in the best case) to the location of the ongoing visual tasks (Wickens, Prinet, et al., 2011; Lu, Wickens, et al., 2011; Sarter, this handbook).

Alert Reliability

Most alert systems are imperfect in their reliability. They are designed with algorithms that integrate raw physical data to infer an important or “danger” state (e.g., a malfunction, a fire, or a predicted collision), and if this integrated product exceeds a threshold, the alert activates. However, the raw data are often noisy, and in the case of predictive alerts, circumstances in the environment may change after the alert is given to make the forecast event less likely. The longer this span of prediction, the lower the reliability. As the obvious consequence, as described by Meyer (2001, 2004) and Meyer and Lee (this handbook), alerts can make one of two types of decision errors: deciding there is not a problem when there is (a “miss”) and deciding that there is a problem when there is not (a “false alert”). When considering the consequences of these two types of errors, most designers quite reasonably assume that misses (or delayed alerts) are worse than false alerts, and so they choose to set the threshold lower so that false alerts are more prevalent. In this case, when the false alert (FA) rate increases, the system often produces the well-known “cry wolf” problem (Breznitz, 1983; Dixon, Wickens, & McCarley, 2007; Wickens, Rice, et al., 2009; Xiao et al., 2004), whereby operators may turn their attention away from the alerts when they occur and hence are more likely to respond late, or not at all, to true alerts.
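
This miss/false-alert trade-off can be made concrete with a standard Gaussian signal-detection sketch; the sensitivity (d′) and threshold values below are illustrative, not taken from any cited alerting system:

```python
from statistics import NormalDist

def alert_error_rates(threshold, d_prime=2.0):
    """Gaussian signal-detection model of an alerting algorithm:
    integrated evidence ~ N(0,1) on safe trials and N(d_prime,1) on
    dangerous trials; the alert fires when evidence exceeds threshold."""
    miss_rate = NormalDist(d_prime, 1).cdf(threshold)       # danger present, no alert
    false_alert_rate = 1 - NormalDist(0, 1).cdf(threshold)  # alert, no danger
    return miss_rate, false_alert_rate

# Lowering the threshold trades misses for false alerts:
print(alert_error_rates(1.5))  # ~(0.31 miss, 0.07 FA)
print(alert_error_rates(0.5))  # ~(0.07 miss, 0.31 FA)
```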

Alert Dependence: An Attentional Analysis in a Dual-Task Environment

The effect of alarm reliability can be placed within the broader context of the multitask environment in which alarms are most critical, and the consideration of two cognitive states and two aspects of attention with which those states are associated (Meyer, 2001, 2004; Meyer & Lee, this handbook; Dixon & Wickens, 2006; Maltz & Shinar, 2003). Thus, in most applications, a busy operator in a multitask environment (driving, flying, health care operations) is depending upon the automation to (1) alert him or her if there is a problem, but (2) be “silent” if all is well so that he or she can comfortably turn full attention to the concurrent tasks and away from the domain of the alerted event. As Meyer describes, an operator who responds rapidly to the alerts when they occur is demonstrating compliance with the alert system; one who retains full attention to the concurrent tasks when the alert is silent is demonstrating reliance on the alerts. Thus the psychological constructs of compliance and reliance represent two independent aspects of operator dependence upon the alert system (Meyer & Lee, this handbook).

With regard to attention, when the overall reliability of the alert system degrades, both types of automation errors (misses or late alarms and false alarms) may increase. However, a designer-imposed shift in the alert threshold can mitigate the rise in one at the expense of the other. In these cases, data suggest that a rise in false alert rate, with miss rate held constant, will cause a progressive loss in compliance. This “cry wolf” effect can be objectively measured by the response rate, by the response time (to address the alarm), and by a selective attention measure (the time it takes to look at, or switch attention to, the alerting domain; Wickens, Dixon, Goh, & Hammer, 2005). Conversely, an increase in miss rate, with FA rate more or less constant, will lead to a progressive loss in performance on the concurrent task, with lower reliance, as more attentional resources are reallocated continuously to monitoring the automated domain even when “all is well” (Wickens & Colcombe, 2007). This allocation is directly manifest as increased scanning to any visual display of “raw data” within the alerted domain (Wickens, Dixon, Goh, & Hammer, 2005). These human adjustments in response to failure event frequency may be described as optimal or “eutactic” (Moray & Inagaki, 2000), much as human signal detectors optimally adjust beta in response to signal frequency, as discussed in McCarley and Benjamin (this handbook).

The influences of false alert rate on compliance and miss rate on reliance are not entirely independent, in two respects. First, if the threshold of an alert system with constant reliability is varied by the designer, it is obvious that reliance and compliance measures will change in opposite directions. Second, there is some evidence that increasing FA rate not only degrades compliance but will also degrade reliance (Dixon, Wickens, & McCarley, 2007; Dixon & Wickens, 2006), as if false alarms, being more salient and noticeable than misses, lead to an overall reduction in trust in (and therefore dependence on) the system. So, from the perspective of the impact on human performance in the multitask environment, it appears that FA-prone systems are more problematic than miss-prone (or late-alert-prone) systems. But of course, a full analysis of the appropriate balance between misses and false alarms in alert system design must take into account the primary issue of the costs of overall system misses versus false alerts (i.e., the consequences should both the human and the alert system miss the dangerous event).

Amplifying and mitigating the alarm false alarm (AFA) problem. Three factors amplify the AFA problem. First, for any given threshold setting, the lower the base rate of events, the greater will be the false alert rate, at least as measured by the proportion of alerts that are false. In some circumstances this proportion can be as high as 0.90. Indeed, in one case (border monitoring for nuclear fuel), it reached 100% (Sanquist, Doctor, & Parasuraman, 2008).
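
A minimal Bayes-rule sketch shows how a low base rate drives the proportion of false alerts toward 0.90 even for a quite sensitive alerting system (the hit and false-alert rates here are illustrative assumptions):

```python
def p_alert_is_false(base_rate, hit_rate=0.99, fa_rate=0.01):
    """Proportion of all alerts that are false, by Bayes' rule."""
    p_true_alert = hit_rate * base_rate        # alert fires and the event is real
    p_false_alert = fa_rate * (1 - base_rate)  # alert fires with no event
    return p_false_alert / (p_true_alert + p_false_alert)

# With a 1-in-1,000 event base rate, ~91% of alerts are false:
print(round(p_alert_is_false(0.001), 2))  # 0.91
```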

Second, in environments with multiple independent alerts and low thresholds (e.g., the intensive care unit; Seagull et al., 2001), if the probability of a false alert in any given system is even modestly high, then the probability that a single alert within the total workspace will be false can be extraordinarily high. A recent study at a medical center revealed that the typical health care worker was exposed to approximately 850 alerts in a typical workday, many of them undoubtedly false, with nurses experiencing 841 nuisance alerts per day. Kestin, Miller, and Lockhart (1988) estimated that in the typical operating room an alarm was triggered every 4.5 minutes.

In such circumstances with multiple alarm systems, some of them more prone to false alarms than others, people tend to generalize across the population of all systems, distrusting the good as well as the bad (Keller & Rice, 2010).

Third, the problems with false alarms can obviously be amplified to the extent that the alerts themselves are annoying and intrusive. A visual alert that is false can be fairly effectively “filtered,” since, as we noted above, it is most salient only when it falls in the fovea. In contrast, the downside of the omni-directionality of auditory or tactile alerts is that the attentional filter cannot restrict their access. The increased annoyance accompanying such intrusive false alerts will increase the tendency of workers to deactivate them, or at least try to ignore them (Sorkin, 1989).

Finally, there is emerging evidence that people respond differently when false alerts are clearly “bad” (e.g., the user can obviously perceive that there is no danger) versus when they are “plausible” (e.g., a danger threshold was approached but not quite passed; Lees & Lee, 2007; Wickens, Rice, et al., 2009; see also Madhavan, Wiegmann, & Lacson, 2006). “Cry wolf” behavior is more likely in the former case than in the latter. However, in order for humans to determine that a false alarm is plausible, they must be able to monitor the “raw data” independently from, and in parallel with, the automated sensors.

The mitigating solutions for the AFA problem range from the highly intuitive to the less obvious, as we describe below.

  •  Increasing alerting system sensitivity in discriminating safe from dangerous conditions. Often algorithms can be improved, an approach taken over time in developing the airborne traffic alert system (TCAS) as designers responded to pilots’ complaints about the high false alarm rate (Rantanen, Wickens, Xu, & Thomas, 2004). An important question in this regard is how low such sensitivity (or reliability) can be before an alerting system becomes no longer effective. One review of alerting studies indicated that with reliabilities above about 0.80 (mean of FA and miss rates), for humans operating in a multitask environment (where attentional resources were at a premium), performance of a human supported by an imperfect alerting system would be better than that of the unaided human (Wickens & Dixon, 2007).
  •  Instructing users about the inevitable necessity of some false alarms in uncertain environments, and particularly when the event base rate is lower. Such instructions can render the false alerts as more “forgivable,” particularly if they are not bad false alerts, as described above.
  •  Implementing context-sensitive mechanisms that raise the threshold during circumstances when the base rate is known to be quite low, and lower it when the base rate is higher (e.g., fire alerts during fire season versus rainy season).
  •  Providing the user with rapid (and ideally continuously available) access to the raw data in parallel with the automation. Hence, to the extent that false alerts are in the “plausible,” not the “bad,” category described above, such access will diminish cry-wolf problems. Indeed, in such a system with raw data access, the activation of the alert may actually reinforce the human’s own raw data monitoring behavior (if the human detected the pending event before the alert sounded), as well as confirm to the human that the system is in fact well functioning (albeit a little too sensitive). These characteristics appear to have mitigated the “alarm false alarm” issue in some segments of air traffic control (Wickens, Rice, et al., 2009).
  •  Developing “likelihood alarms” in which the alert system itself can express its own degree of uncertainty when events occur that are close to the threshold (Sorkin, Kantowitz, & Kantowitz, 1988; St. John & Manes, 2002; Wickens & Colcombe, 2007). Such uncertain-class events can then be associated with a physical sign (e.g., an amber signal) that is less urgent than “sure events” (e.g., red flashing) but more urgent than the sign of “all clear” (e.g., green, or no sign at all). Some evidence suggests that likelihood alerts provide better overall sensitivity than simple two-state alerts (on-off).
  •  Informative alerts. Many complaints about alerts are associated with frustration that, while informing that something has gone wrong, they say little about what is wrong and what to do about it. Such concerns, addressed by making the alerts more informative (e.g., voice alerts), lead us beyond their attention-capturing properties to consideration of the further information properties associated with alerts and other displays, the issue we turn to in the next section.

Attention & Attention Travel in Information Processing

Display Layout

Attention, in both its filter and fuel capacities, is particularly challenged in a spatially distributed workspace such as that confronted by the pilot, driver, health care worker, or process controller, where multiple sources of information must be processed as a basis for action and not simply monitored. Such processing may consist of multitasking (as when the driver examines a map while endeavoring to maintain some attention to the roadway), or it may consist of information integration, as when the pilot compares the map with the visual view of landmarks outside the airplane to assure that he or she is on the right track. In such circumstances, attention must travel from place to place, an analog to physical travel, and such travel is not effortless, particularly in a widely distributed visual workspace.

In these circumstances, designers often have an opportunity to “lay out” some aspects of the workspace to minimize net travel time, according to seven specific principles (Wickens, Vincow, Schopper, & Lincoln, 1997), as we describe in the following. The first two of these principles depend upon defining a “normal line of sight” (NLOS); that is, in a seated workspace, a line about 20 degrees below the horizon extending from the eyes (Sanders & McCormick, 1993). With regard to the point where the line intersects the workspace surface:

  1. The most important displays should be closest to the NLOS. (This applies particularly to displays whose changes are critical to be noticed in a timely fashion.)
  2. The most frequently used displays should be closest to the NLOS.
  3. Pairs (or N-tuples) of displays used for a single task (i.e., that must be integrated or compared and are therefore typically used in sequence) should be close together. In some cases this may involve database overlay, as when terrain and weather are superimposed in a pilot’s navigational map so that a safe route through both hazards can be planned (Kroft & Wickens, 2003).
  4. Displays related to a single class of information should be close together, or grouped. This will aid in visual search, as we will see below.
  5. Displays should be positioned close to the controls that affect those displays (display-control compatibility; Proctor & Proctor, 2006).

We note in particular that principles 2 (frequency of use) and 3 (relatedness) are designed to minimize total attention travel time. If this optimization is not followed, performance may be slower (since attention travel takes time); in the worst case, when attention travel is very effortful, a relevant display may not be visited at all.

Given the role of attention travel in display layout optimization, it is important to realize that travel cost (or information access cost) is not a linear function of distance, but instead can be seen to have at least three components (see Wickens, 1993; Wickens & McCarley, 2008): (1) When displays are close together, so that the eye can scan from one to the other without head movement (within about 20 degrees), the cost is minimal and does not change with separation distance. (2) When the displays are separated by more than 20–30 degrees, head movements are required to move the eyes from one to the other, imposing not only a substantially increased cost, but one that grows with the distance (angle) of head movement. (3) Sometimes displays simply cannot be accessed by head movements alone, but instead require body rotation (checking the blind spot in a car) or, increasingly, key presses or mouse movements to access a particular “page” in a menu or multifunction display. In the latter case, the “distance” of attention travel can be calculated in part by the number of key presses and in part by the cognitive complexity of menu navigation (e.g., number of options; Seidler & Wickens, 1992; Wickens & Seidler, 1997). Greater information access cost can not only impose direct time costs but also inhibit information retrieval (Gray & Fu, 2004) and may alter the overall strategy and accuracy of task performance (Morgan, Patrick, et al., 2009).
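
The three-component cost structure lends itself to a piecewise sketch; the breakpoint follows the text (about 20 degrees for eye-only scanning), while the cost units and coefficients are hypothetical:

```python
def access_cost(separation_deg, n_keypresses=0):
    """Sketch of information access cost: flat within eye-movement range,
    growing with head movement beyond ~20 degrees, and dominated by
    interaction cost when access requires key presses or menu navigation."""
    if n_keypresses > 0:
        return 10.0 + 5.0 * n_keypresses          # component 3: interactive access
    if separation_deg <= 20:
        return 1.0                                # component 1: eye movement only
    return 1.0 + 0.2 * (separation_deg - 20)      # component 2: eye + head movement

# Adjacent display, widely separated display, display buried two menu levels deep:
print(access_cost(10), access_cost(45), access_cost(0, n_keypresses=2))
```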

An important question for designers to answer is what happens when principles “collide” or oppose each other. Suppose, for example, that frequency of use dictates that a particular display be close to the NLOS, but integration requires that the same display be close to another, which (for other reasons) has been positioned far from the NLOS. Which principle is more costly to violate? A study that addressed this question had pilots fly with eight different display layouts that either conformed to or violated each of three different principles: frequency of use, integration (sequence of use), and display-control compatibility (Andre & Wickens, 1992). The results revealed that the sequence-of-use principle (close positioning of displays to be integrated for the same task) dominated the frequency-of-use principle, as assessed by overall pilot performance. Both of these dominated display-control compatibility. The impact of these human performance weightings, coupled with others, has been represented in various display layout models summarized in Wickens, Vincow, et al. (1997), which have integrated the various elements that influence the efficiency of attention travel, as described above, to provide “figure of merit” estimates of display layout optimization (e.g., Fowler, Williams, Fowler, & Young, 1968).

There are two additional attention-guided principles that can be applied to display layout: A principle of (6) consistency dictates that displays should remain in the same consistent location so that they can always be found (selective attention directed there) with minimal interference. Adhering to this principle will not only lead to standardization of layouts across different systems (e.g., aircraft instrument panels always adhere to the basic “T” formation for locating four critical instruments), but will also provide a resistant force against flexible, reconfigurable display layouts, in which designers may choose to reposition displays as a function of work phase (e.g., phase of flight, or normal vs. abnormal operations), or workers may be given the option of moving displays according to their preference. While such flexibility provides some advantages, these may be offset by the lack of consistency (Andre & Wickens, 1992).

A principle of (7) clutter avoidance is one that resists the forces to either put too many displays in a workspace or, in adhering to frequency of use, place all displays tightly clustered or even overlapping. Close proximity achieved via minimizing spatial separation will create clutter—difficulty of focusing attention on individual elements—whenever the spatial separation is less than around 1 degree of visual angle (Broadbent, 1982), and particularly when the elements overlap or are overlaid, as in a head-up display (HUD), a map with text labels overlaying ground features, or an overlaid ATC map (Wickens, 2000b).

Head-up displays and head-mounted displays accomplish this by superimposing instruments over an important forward view. The benefit (of not having to move the eyes between the instruments and the forward view) is partially offset by the clutter costs of closely placed information (Wickens, Ververs, & Fadden, 2004). We note here that a special case of close spatial proximity for information to be integrated is represented by geographical database overlay; for example, a map of terrain and weather for an aircraft pilot. When the two databases must be integrated (e.g., to find a safe path avoiding both terrain and weather), the close proximity (0 distance) of an overlay provides better performance than a side-by-side presentation of each, despite the greater clutter of the overlay (Kroft & Wickens, 2003; Wickens, 2000b).

The Proximity Compatibility Principle

The theoretical basis for the particular advantage of close proximity displays for information that needs to be integrated (principle 3) lies in the multitasking required as the human must retain (often by rehearsal) information from a first-accessed source, while attention travels to the second source for it to be accessed and then compared or combined. At a minimum, the time for travel will degrade memory for the first source. However, if locating the second source requires some search through a cluttered field or (worse yet) accessing another screen via a key press or turning a page, then the mental effort of such access will compete with the retention. This principle, that information that must be integrated in the mind (close mental proximity) should also be close together on a display (close physical proximity), is referred to as the proximity compatibility principle (Wickens & Carswell, 1995; Wickens & McCarley, 2008) and will be addressed further below.

The SEEV Model of Visual Attention Travel

Attention travel across displays and visual workspaces requires eye movements. While in reading text these movements are relatively linear and systematic, in monitoring multi-element displays to supervise dynamic systems, like those of the anesthesiologist, pilot, driver, or process control supervisor, scan paths will be much less predictable. Assisting these predictions is the SEEV model, which was introduced in the previous section in the context of the noticing-SEEV (N-SEEV) model of event detection. SEEV predicts steady-state scanning around the workspace before the event to be noticed occurs. The integration of its four components—S = salience, E = effort, E = expectancy, and V = value—is based on the prior modeling of Senders (1964, 1980), Sheridan (1970), and Moray (1986), and these are combined additively to predict the distribution of fixation locations. Then, when the to-be-noticed event (TBNE) is scheduled to occur at a specific location in this workspace, SEEV will predict the distribution of eccentricities of that location from the fovea, which in turn predicts the likelihood of detection (diminishing with increasing eccentricity).

The SEEV model has been validated to predict the percentage of time looking at different areas of interest or displays with 80%–90% validity, in workspaces ranging from the live surgical operating table (Koh, Park, Wickens, Teng, & Chia, 2011) to simulations of vehicle driving (Horrey, Wickens, & Consalus, 2006) to both the conventional cockpit (Wickens, Goh, et al., 2003) and the more automated cockpit (Wickens, McCarley, et al., 2008; Steelman-Allen et al., 2011). As noted above, when N is added to SEEV, SEEV then provides the context for predicting the eccentricity of the TBNE. N-SEEV has been able to predict pilot detection of a variety of unexpected events both within and outside the cockpit with reasonably high accuracy (r = 0.75; Steelman-Allen et al., 2011; Wickens, 2012; Wickens, Hooey, et al., 2009).
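
A coefficient-free sketch of the steady-state SEEV prediction follows; the additive combination (salience minus effort plus expectancy plus value) follows the text, but the normalization into dwell shares and the example numbers are illustrative assumptions:

```python
def seev_dwell_shares(aois):
    """Predicted proportion of dwell time on each area of interest (AOI)
    from an additive SEEV score: salience - effort + expectancy + value."""
    scores = {name: max(0.0, s - ef + ex + v)
              for name, (s, ef, ex, v) in aois.items()}
    total = sum(scores.values())
    return {name: round(score / total, 2) for name, score in scores.items()}

# Hypothetical driving example; tuples are (salience, effort, expectancy, value):
aois = {"roadway": (0.5, 0.1, 0.9, 1.0),
        "mirrors": (0.2, 0.4, 0.4, 0.6),
        "in-vehicle display": (0.6, 0.5, 0.3, 0.2)}
print(seev_dwell_shares(aois))  # the roadway dominates, as it should
```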

The SEEV model predicts how attention is actually allocated across displays. Without the unwanted influence of salience and effort, how attention SHOULD be allocated across displays is defined purely by expectancy (frequency of use and frequency of sequential use) and value. These parameters have been combined in several computational models of display layout, as discussed above (see Wickens, Vincow, et al., 1997, for review of these).

Display Integration

Design Principles


Figure 2.2 Creating proximity in an air traffic display via linking and color.

As noted in the previous section, simply moving displays close together to reduce information access cost can create clutter. There are other means of creating closeness or “proximity” between two or more display elements and hence aid the movement of attention between them, techniques that can loosely be referred to as “display integration.” Many of these are incorporated within the proximity compatibility principle introduced above (see also Wickens & McCarley, 2008). Thus, when spatial proximity cannot be achieved for two elements that are to be integrated (as, for example, when comparing two elements on a map whose coordinates are fixed), the following two techniques can be employed:

  •  Linking, by constructing a physical line between the two elements, as with a line connecting two points on a graph. Attention can be said to “follow the line,” just as following a road between two geographical locations facilitates the travel from one to another (Jolicoeur & Ingleton, 1991).
  •  Common color, by rendering the to-be-integrated elements in a shared, distinct color, sometimes combined with linking. Consider the air traffic control display shown schematically in Figure 2.2, in which planes A and D are at the same altitude and on a collision course. Clearly the controller must mentally integrate the trajectories of the two to determine where and when this collision might take place. Having automation construct a graphic link between them and illuminate them in a distinct color (e.g., red) will facilitate this mental integration, computing the anticipated point, time, and separation of closest passage.

Besides spatial proximity, linkage, and color, a fourth technique of display integration involves moving two elements so close together that they essentially “fuse” into a single object, a technique known as object integration. For example, a single data point on a correlation plot represents two elements, an X and a Y value (Goettl, Wickens, & Kramer, 1991). The “artificial horizon” on a pilot’s attitude display represents pitch and roll by a single line that can rotate and translate. A single icon object on a weather map may contain several attributes of information. One advantage of object integration, supported by a great deal of research on attention (e.g., Treisman, 1986; Carswell & Wickens, 1996; Duncan, 1984; Scholl, 2001), arises because all attributes of a single object are processed more or less in parallel, whereas two separate objects are more likely to be processed in series; hence divided attention between two attributes of a single object display is more efficient than between two objects.

A fifth technique for display integration, and one that sometimes accompanies object integration, is the creation of emergent features (Pomerantz & Pristach, 1989; Bennett & Flach, 2011). This results when multiple elements of a given display “configure” to create a new feature that is not inherent in any of the objects themselves. For example, four bar graphs (e.g., representing engine temperature on four systems) that are all aligned to the same baseline will present an emergent feature of “equality,” which is the co-linearity of their tops, when all are at the same level. Such emergent features can greatly benefit performance to the extent that the feature itself “maps” directly to a critical integration quantity necessary for monitoring and control (Bennett & Flach, 1992; Bennett & Flach, this handbook; Peebles, 2008). If the features are perceptually salient (like the co-linearity above or the symmetrical appearance of certain geometric objects), then direct perception can allow the integration to be achieved without imposing extensive cognitive effort (Vicente, 2002).

Note that the association of object displays with emergent features results because the formation of an object by dimensions, like the length, height, and width of sides and tops of a rectangle, will almost always create emergent features (like the size and shape of the rectangle) that would not exist were the dimensions presented in isolation from each other (e.g., as separate bar graphs; Barnett & Wickens, 1988). However, we also note that if the emergent features of the object are not mapped to critical integration task parameters, such object integration may be of no benefit, and other means of configuring the individual variables may provide better emergent features.

Display Proximity and Clutter

As we have noted above, close proximity achieved via minimizing spatial separation will create clutter. This is one distinct advantage of object integration: two (or more) attributes of a single object are processed in parallel and hence are unlikely to interfere with each other’s processing, in contrast to two separate objects occupying the same space (e.g., overlay). Various computational models of clutter have been proposed (e.g., Rosenholtz, Li, & Nakano, 2007; Beck, Lohrenz, & Trafton, 2010).

Extensions of Proximity Compatibility and Object Integration

Two important design concepts related to proximity compatibility are those of visual momentum (Woods, 1984; Aretz, 1991; Wickens & McCarley, 2008; Bennett & Flach, 2012) and ecological interface displays (Vicente, 2002; Burns & Hajdukiewicz, 2004; Burns et al., 2008). Both have, at their core, the goal of fluently moving attention across complex multi-element workspaces in order to facilitate integration and comparison. Visual momentum is a technique designed to facilitate mental integration of two or more different “views” of a single spatial area or network. For example, one technique of visual momentum would involve presenting a global view of the full workspace, alongside a more localized zoom-in view, with the region of the local view highlighted in the global view (Aretz, 1991; Olmos, Liang, & Wickens, 1997; Tang, 2001). Such highlighting allows rapid movement of attention between the two views. A second technique is continuous “panning” rather than abrupt switching between two views of the same region, but from different orientations (Hollands et al., 2008). Visual momentum concepts are particularly valuable when visualizing complex information (Robertson, Czerwinski, et al., 2009; Wickens, Hollands, Banbury, & Parasuraman, 2012).

The concept of an ecological interface is more complex, and space here does not allow much coverage except to note that for very complex systems like power plants, process control industries, or human physiology, there are ways of presenting the multiple variables in such a manner that they directly signal certain critical constraints of the environment or “ecology” that they represent (Burns & Hajdukiewicz, 2004; Burns et al., 2008); not surprisingly, many of these “ways” capitalize on emergent features and configural displays to graphically represent constraints and boundary conditions in the system (e.g., the balance between mass and energy, or between inflow and outflow, which characterizes stability). Such ecological displays are often found to be most beneficial in fault management, a particular situation in which variables must be integrated in new and different ways to diagnose the source of a fault and project its implications for system safety and productivity (Burns, this handbook; Vicente, 2002; Burns & Hajdukiewicz, 2004).

Visual Search

Visual search is a selective attention function, similar to both noticing and supervisory sampling. However, unlike noticing, search is more goal directed toward finding a predetermined target. In doing so, attention (often coupled with the eyes) usually moves sequentially until the target is found or a decision is made that it is not present (Drury, 2006; Wickens & McCarley, 2008). Search is a key component in many industrial inspection tasks (Drury, 1990, 2006). Thus the primary cognitive demands associated with search precede locating the object, whereas the primary task in noticing typically follows the triggering event. That said, many variables affect both tasks in the same way: Both usually involve eye movements (when noticing involves a visual event), both are inhibited by a cluttered background and cognitive workload, and both are improved when the target (in search) or the TBNE (in noticing) is salient (flashing, high-contrast, moving, etc.). Importantly, for a given level of salience, a target will be more likely to be found in a search task than noticed in a noticing task. This difference reflects the added top-down influence of the goal direction of the search task; the search is “tuned” to certain target properties. Both tasks are also influenced by top-down expectancy in other ways. In search, there are two sources of expectancy: expectancy for target location influences where we look first, and expectancy of whether there is a target at all influences how long we continue a search when the target has not been found (Wolfe, Horowitz, & Kenner, 2005; Drury & Chi, 1995).

From a design perspective, long, tedious searches can have two detrimental influences. First, they can often sacrifice worker efficiency, as, for example, when a computer service worker must spend several seconds searching for a target on a screen, repeating the operation hundreds of times over a workday. In these circumstances, even milliseconds of added search delay can accumulate large costs (Gray & Boehm-Davis, 2000). Second, they can often inhibit safety, particularly in vehicle control, when long head-down searches (e.g., for a destination on an electronic map) can leave the driver exposed to roadway hazards (Wickens & Horrey, 2009). In another example, analysts computed that long search time on a railway traffic map spelled the difference between safety and a fatal railway crash: dispatchers spent 18 precious seconds attempting to locate the train that had triggered a flashing collision alert (Stanton & Baber, 2008), the tragic difference between commanding a braking action in time and too late.

Improving Search

In response to concerns such as those described above, a number of attention principles speak to ways that search can be improved. Some of these solutions include:

  •  Target enhancement. In some circumstances, simple solutions like improving workplace lighting can increase the discriminability between targets and non-targets, a definite advantage when the targets themselves are subtle (like cracks in the hull of an aircraft; Drury, Spencer, & Schurman, 1997).
  •  Signal-noise enhancement. Creative solutions can identify ways to differentially amplify the target over the non-targets. For example, if targets are identified by different depths in a three-dimensional display, then providing the user with the ability to change the viewpoint on that display will produce differential motion of targets vs. non-targets (Drury et al., 2001; Drury, 2006).
  •  Selective highlighting. To the extent that the searcher (or another agent) can define features possessed by the target, display technology can then artificially enhance all elements possessing those features—for example, by painting them a different color or increasing their intensity. Thus, for example, in air traffic control, all aircraft flying at a common altitude may be highlighted as particularly relevant because they are more likely to be on a collision course than those at different altitudes (Remington, Johnson, Ruthruff, Gold, & Romera, 2001). Of course, such attention-guidance automation imposes the danger that it could be less than fully reliable (Yeh & Wickens, 2001a; Yeh, Merlo, Wickens, & Brandenburg, 2003; Fisher & Tan, 1989; Metzger & Parasuraman, 2005). For example, highlighting could be imposed on an element that is not a target, or, more seriously, it could fail to highlight one that is. (These two classes of highlighting errors parallel the two classes of alerting errors discussed previously.) Studies of highlighting validity indicate that people naturally tend to search the highlighted items first (Fisher, Coury, Tengs, & Duffy, 1989), and if there is uncertainty as to whether a target is present or not, people may truncate the search if they fail to find it in the highlighted subset. This behavior would lead to a miss if the target was not highlighted.
  •  Search field organization. In many search fields (e.g., a computer screen), it is possible to impose an organization on the elements to be searched: a linear list or grid. Such organization aids search in two respects. It can help people keep track of examined and not-yet-examined items without excessive burden on memory. It also affords designers the opportunity to place the items most likely to be the target of search near the top (for example, the most frequently used items in a computer menu), given the tendency for people to search from top to bottom.
  •  Search instructions and target expectancy. As noted, the expectancy of whether a target is present or not can influence the amount of effort spent on continuing the search when a target is not yet found. Search shows a clear speed-accuracy trade-off, such that longer searches are more likely to turn up a target (Drury, 1994). On the one hand, instructions that emphasize the value of finding the target will produce greater success (but longer search times; Barclay, Vicari, Doughty, Johanson, & Greenlaw, 2006). On the other hand, a low target expectancy will more likely produce premature termination, leading to a miss (Wolfe et al., 2005). Furthermore, when there may be multiple targets (such as malignant nodules in an x-ray), instructions can counter the tendency to stop the search after a first target is found and encourage the search to be exhaustive (Barclay et al., 2006).

Modeling Search: The Serial Self-Terminating Model

The serial self-terminating search (SSTS) model proposed by Sternberg (1966), based on data from Neisser (1963), describes how attention searches a field of non-targets sometimes containing a target. The model predicts the time to locate the target or, if it is not present, to decide that it is absent. Accordingly, the model assumes that each non-target element is inspected in series, requiring a constant time (T) to decide that each is not the target, until the target is reached and a response is made. Thus the search is self-terminated. When the target is not present, all items must be inspected. When the target is present, on average half the items will be inspected. Hence the search time as a function of the size of the search field (N) is NT when the target is absent and, on average, NT/2 when it is present (slopes of T and T/2 per item, respectively). Various versions of search models have borrowed from the basic elements of the SSTS model (Drury, 1994; Drury et al., 2001; Teichner & Mocharnuk, 1979; Yeh & Wickens, 2001b; Fisher et al., 1989; Fisher & Tan, 1989; Beck et al., 2010; Nunes, Wickens, & Yin, 2006).
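
The model's predictions reduce to a few lines; this sketch follows the equations in the text directly (the set size and per-item time in the example are arbitrary):

```python
def expected_search_time(n_items, t_per_item, target_present):
    """Serial self-terminating search: each item is inspected for a
    constant time T; search stops at the target, or runs exhaustively
    (all N items) when no target is present."""
    if target_present:
        return n_items * t_per_item / 2  # on average, half the items inspected
    return n_items * t_per_item          # exhaustive inspection

# With N = 20 items at T = 50 ms each, target-absent search takes twice as long:
print(expected_search_time(20, 0.05, target_present=True))   # 0.5 s
print(expected_search_time(20, 0.05, target_present=False))  # 1.0 s
```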

Several modifications and elaborations of this model can be made. For example, if the target is more confusable with the non-targets, T will increase (hence increasing the slope; Geisler & Chou, 1995). If the target is defined by a single salient feature (e.g., red in a sea of green), the slope is essentially 0, describing a parallel search process (all items inspected at once). Wolfe (1994, 2007; Wolfe & Horowitz, 2004) has proposed a “guided search” model by which several non-targets in the search field can be immediately filtered out (i.e., in parallel), while search through the remainder is serial. This approach has been applied to modeling the benefits of highlighting certain key elements of the search field that are assumed to be most relevant, as discussed above (Fisher, Coury, et al., 1989; Beck et al., 2010; Nunes et al., 2006; Yeh & Wickens, 2001b; Wickens, Alexander, et al., 2004).

Attention to Tasks: Multiple Resources

When two tasks must be performed within a narrow window of time, there are two qualitatively different ways in which this can be managed: They can be time-shared, wherein the performance of each task is ongoing concurrently, as when listening to a cell phone while driving (Regan, Lee, & Young, 2011; Wickens, Hollands et al., 2012). This is divided attention between tasks. Alternatively, they can be performed in sequence, as when a driver stops the car before answering the cell phone call. Each situation has very different implications and different sorts of processing operations underlying the success and failure of multitasking, so we consider each in turn.

Concurrent Task Performance: Multiple Resources

According to one prominent theory of multitasking, multiple resource theory (Navon & Gopher, 1979; Wickens, 1980, 1984, 2002, 2005, 2008a), there are three fundamental elements dictating how well a given task will be performed concurrently with another. First, most intuitively, the difficulty or attentional resource demand of both tasks will influence time sharing. Easier tasks (those of lower mental workload, or greater automaticity) will be time shared more effectively (Kahneman, 1973).

Second, a greater degree of shared versus separate resources within the human’s information processing structure will increase interference. Wickens (2002) has developed a conception of what those separate resources might be in a way that is consistent with neurophysiological data (Just et al., 2001). For design purposes, these can be broken down in terms of four dichotomies, with “different resources” defined by the two levels of each dichotomy, as follows:

  •  processing stages—perceptual-cognitive (working memory) versus response selection and execution of action
  •  processing codes—spatial versus verbal/linguistic
  •  processing modalities (within perception)—visual versus auditory (and there is now emerging evidence that the tactile channel defines a third perceptual resource category; Lu, Sarter, & Wickens, 2011)
  •  visual channels (within the visual modality)—focal (object recognition) versus ambient (motion processing) vision (Previc, 1998, 2000)

Accordingly, as a design and analysis tool (Wickens, 2002, 2005; Wickens, Bagnall, Gosakan, & Walters, 2011), a given task may be defined by levels within one or more of the four dimensions. The interference between two tasks can then be partially predicted by the number of dimensions on which their demands share common levels. This prediction of dual-task interference is then augmented by summing the total resource demands of the two tasks (independent of their resource competition). A computational version of this model is described in more detail in Wickens (2005), Sarno and Wickens (1995), and Wickens, Bagnall, et al. (2011).
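
A simplified sketch in the spirit of that computational model follows; the equal weighting of the demand and conflict terms is an illustrative assumption, not the published coefficients:

```python
def dual_task_interference(task_a, task_b, demand_a, demand_b):
    """Toy multiple-resource computation: total interference = summed
    resource demand of the two tasks, plus one conflict point for each
    of the four dichotomies on which they occupy the same level."""
    dimensions = ("stage", "code", "modality", "visual_channel")
    conflicts = sum(1 for d in dimensions
                    if task_a.get(d) is not None and task_a.get(d) == task_b.get(d))
    return demand_a + demand_b + conflicts

# Driving (visual/spatial/ambient) vs. phone conversation (auditory/verbal):
driving = {"stage": "perception", "code": "spatial",
           "modality": "visual", "visual_channel": "ambient"}
phone = {"stage": "perception", "code": "verbal", "modality": "auditory"}
print(dual_task_interference(driving, phone, demand_a=2, demand_b=2))  # 5
```

The low conflict count (the two tasks share only a processing stage) is the multiple-resource account of why phone conversation and lane keeping can proceed concurrently at all, even though their summed demand still matters.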

The third element in predicting success or failure in divided attention between tasks is the allocation policy between them (Norman & Bobrow, 1975; Navon & Gopher, 1979). Intuitively, the more favored task of a pair (the primary task) will preserve its performance close to the single-task level, whereas the less favored (the secondary task) will show a greater decrement. This simple feature, allocation policy, describes why the automobile accident rate while using cell phones, while substantial, is not higher than it is: Most drivers still treat lane keeping and hazard monitoring as tasks of higher priority than the phone conversation.

There is one final factor, not accommodated by multiple resource theory, that can account for differences in the effectiveness of concurrent task performance: confusion, caused by the similarity of elements within the two tasks (Wickens & Hollands, 2000). The more similar those elements are, the more likely there will be crosstalk between the tasks, such that, for example, elements of one task show up in the response to the other. A classic example is the challenge of patting your head while rubbing your stomach. Another might be trying to tally or copy student test scores while listening to basketball scores. Note, however, that similarity-based confusion is most likely to occur when the tasks already share some demand for common resources (e.g., in the above two examples, both spatial manual tasks or both auditory/verbal tasks using digits).

Sequential Performance & Task Management

Even when an operator may try to perform two tasks in parallel (albeit with degraded performance on one or both), this may become impossible, either because one or both tasks impose high resource demand or because they compete for common, incompatible resources, as in speaking two different messages at once (the voice can utter only one at a time) or looking at two widely spaced sources of visual input. In these circumstances, once the limits of multiple resources have dictated that concurrence is impossible, the first two elements of multiple resource theory (demand and resource structure) no longer play a role in predicting interference. However, the third element—allocation policy—now occupies center stage as the most important factor in sequential task management: which task is performed, which is completely abandoned or neglected, and for how long.

Two general scenarios underlie the manifestation of sequential task management strategies, both involving a decision about which task to perform, and both partially embedded within the framework of queuing theory (Moray, Dessouky, Kijowski, & Adapathya, 1991). One is the study of task switching (e.g., Rogers & Monsell, 1995; Goodrich, this handbook), and the other is the study of interruption management (e.g., Trafton & Monk, 2007). In the former case, the operator is confronted with two tasks and must choose which to initiate first. In the latter case, the operator is already performing one (the “ongoing task,” or OT) when a second task (the “interrupting task,” or IT) arrives, and must decide whether (or for how long) to continue the OT before switching to the IT, and then when to return to the OT. Here researchers often focus on the quality of OT performance upon return (e.g., how fast it is resumed and whether it is resumed where it was “left off”; Trafton & Monk, 2007; Wickens, Hollands et al., 2012).

In both cases, queuing theory can sometimes be applied to determine optimal strategies of task (and interruption) management (Moray et al., 1991; Liao & Moray, 1993). Some of these strategies are quite intuitive: when two tasks differ in their importance (or in the penalty for delayed completion), the more important should be undertaken first. However, when a large number of task features vary between the two, such as their length, their expected duration, their difficulty, the decay of information within a task while it is neglected, or the uncertainty in priority, then assessing optimal solutions becomes very complex. Indeed, in these circumstances it can easily be argued that the mental workload (and time) cost of a human computing the optimal strategy will consume sufficient resources to offset the very goal of trying to make the optimal choice (Raby & Wickens, 1994; Laudeman & Palmer, 1995).
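One of the intuitive strategies can nonetheless be stated compactly. Under the simplifying assumptions of classical queuing theory (tasks of known duration that, once begun, run to completion), serving tasks in decreasing ratio of importance to expected duration, the "weighted shortest processing time" rule, minimizes total importance-weighted delay. The task list and weights below are invented for illustration.

```python
# Weighted shortest-processing-time (WSPT) ordering: a classic
# queuing-theory rule that minimizes total importance-weighted
# waiting time when task durations are known and tasks are not
# interrupted. The tasks below are hypothetical.

tasks = [
    {"name": "reply to ATC call", "importance": 9.0, "duration_s": 10},
    {"name": "reprogram route",   "importance": 4.0, "duration_s": 60},
    {"name": "log fuel check",    "importance": 1.0, "duration_s": 20},
]

# Serve tasks in decreasing importance-to-duration ratio.
schedule = sorted(tasks,
                  key=lambda t: t["importance"] / t["duration_s"],
                  reverse=True)

for t in schedule:
    print(t["name"])
```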

While there are many design-relevant research conclusions in this area, most rest on limited data, or on data collected in fairly simple laboratory environments. The following paragraphs describe some of the more important of these conclusions.

More optimal task switching can be achieved with a preview of upcoming tasks and their properties (e.g., their durations; Tulga & Sheridan, 1980).

Very slow task switching in multitask environments is suboptimal (Raby & Wickens, 1994), and optimal switching frequency can at least partially be dictated by optimal models (Moray, 1986; Wickens, McCarley, et al., 2008). Particularly in widely distributed visual workspaces, task switching can be partially captured by eye movements, using the SEEV model to prescribe optimal switching (Koh et al., 2011).
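As a rough illustration of how a SEEV-style computation prescribes scanning, the sketch below scores each area of interest (AOI) by salience, effort (which inhibits movement of attention), expectancy, and value, and then normalizes the scores into predicted dwell proportions. The coefficients and AOI parameters are hypothetical, not fitted values from the cited studies.

```python
# Illustrative SEEV-style allocation: attention to an area of interest
# (AOI) increases with Salience, Expectancy, and Value and decreases
# with the Effort of moving attention there. All numbers are invented.

COEFF = {"salience": 1.0, "effort": 1.0, "expectancy": 1.0, "value": 1.0}

aois = {
    "outside world": {"salience": 2, "effort": 1, "expectancy": 3, "value": 3},
    "instruments":   {"salience": 1, "effort": 1, "expectancy": 2, "value": 2},
    "nav display":   {"salience": 1, "effort": 2, "expectancy": 1, "value": 1},
}

def seev_score(p):
    # Effort enters negatively; the other three components attract.
    return (COEFF["salience"] * p["salience"]
            - COEFF["effort"] * p["effort"]
            + COEFF["expectancy"] * p["expectancy"]
            + COEFF["value"] * p["value"])

scores = {name: max(seev_score(p), 0.0) for name, p in aois.items()}
total = sum(scores.values())
dwell = {name: s / total for name, s in scores.items()}
print(dwell)  # predicted proportion of dwell time per AOI
```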

Very slow task switching characterizes what is sometimes referred to as “attentional tunneling” or “attentional narrowing,” where critical areas of interest (and tasks served by those areas) are neglected for long periods of time, inviting failures to notice key events in those areas (Wickens & Alexander, 2009; Wickens & Horrey, 2009), particularly when those events are unexpected (Wickens, Hooey et al., 2009). In these instances, the “task” that is neglected is often considered the task of maintaining situation awareness (see below).

Three qualitatively different task features tend to induce attentional tunneling: extreme levels of interest (such as an engaging cell phone conversation; Horrey, Lesch, & Garabet, 2009), compelling realistic displays (e.g., a 3-D navigational display; Wickens & Alexander, 2009), and fault management (Moray & Rotenberg, 1989).

Attentional tunneling can be mitigated by salient alarms for neglected tasks (see above), but to be most effective such alarms should be adaptive (see Kaber, this handbook), more likely to be activated if automation infers that neglect is taking place (e.g., following an assessment of prolonged head-down orientation in vehicle control).
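A minimal sketch of such an adaptive trigger, assuming that head-down orientation can be sensed: the alarm for the neglected task is escalated only when a continuous head-down run exceeds a threshold. The threshold value and the gaze samples are invented for illustration.

```python
# Hypothetical adaptive-alert trigger: escalate an alarm for a
# neglected task only when automation infers attentional tunneling,
# here operationalized as prolonged head-down time. The threshold
# and samples are illustrative assumptions, not validated values.

HEAD_DOWN_LIMIT_S = 4.0

def tunneling_inferred(gaze_samples, limit_s=HEAD_DOWN_LIMIT_S):
    """gaze_samples: list of (timestamp_s, is_head_down) pairs in
    time order. Returns True if any continuous head-down run lasts
    at least limit_s seconds."""
    run_start = None
    for t, head_down in gaze_samples:
        if head_down:
            if run_start is None:
                run_start = t
            if t - run_start >= limit_s:
                return True
        else:
            run_start = None
    return False

samples = [(0.0, True), (1.5, True), (3.0, True), (4.5, True), (5.0, False)]
if tunneling_inferred(samples):
    print("Escalate alarm salience for the neglected task")
```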

In interruption management, several variables influence the fluency of task resumption (Dismukes, 2010; Trafton & Monk, 2007; Monk, Trafton, & Boehm-Davis, 2008; Grundgeiger et al., 2010; Smallman & St. John, 2008; Wickens & McCarley, 2008; Morgan, Patrick et al., 2009; Wickens, Hollands et al., 2012). Particularly important are the choice of when to leave the ongoing task (ideally after a subgoal has been completed) and whether a “placeholder” is imposed when the ongoing task is left (e.g., a mark on the page where reading stopped) in order to increase the fluency of return to the OT.

Voice communication tasks tend to be particularly intrusive in interruptions, leading to premature abandonment of ongoing tasks of higher priority (McFarlane & Latorella, 2002; Damos, 1997).

Many aspects of interruption management flow from the study of prospective memory (Dismukes, 2010; Loukopoulos, Dismukes, & Barshi, 2009), which is remembering to perform a task in the future. In this particular case, the “future task” is re-engaging the ongoing task following the interruption.

Design-oriented solutions are beginning to be developed that can (a) use automation to monitor the progress of certain types of manual work to assess more appropriate times to interrupt (Bailey & Konstan, 2006; Dorneich et al., 2012); (b) provide advance notification of the importance of the interruption so that the operator can decide whether to fully abandon the ongoing task or postpone a switch to the interrupting task (Ho, Nikolic, Waters, & Sarter, 2004); (c) provide visual placeholders, like a flashing cursor, that will support rapid reacquisition of the ongoing task after the switch (Trafton, Altmann, & Brock, 2005); and (d) provide support tools such as that described by Smallman and St. John (2008).

Hybrid Models

There is a set of models describing multitasking that are neither strictly parallel (like multiple resources; see above) nor strictly serial (like queuing theory models of sequential performance), but instead involve scheduling multiple cognitive processes in the service of two tasks, processes that may sometimes run in series and sometimes in parallel (Meyer & Kieras, 1997; Liu, 1996). One particularly important approach along this line is that of threaded cognition (Salvucci & Taatgen, 2008, 2011; Salvucci, this handbook). In particular, its authors have proposed a series of guidelines for the design of multitasking environments.

Conclusion

In conclusion, a great deal of research is still required to better understand how people handle sequential tasks under time pressure. One of the more intriguing aspects of this issue involves defining the boundary condition of increasing demands at which the multitasker abandons hope of concurrent processing and “regresses” to a sequential mode, ceasing the performance of one task altogether. This “point” is sometimes referred to as a “red line” along a scale of increasing mental workload imposed by tasks (or sets of tasks), and it brings us to the next section on mental workload.

Mental Workload

Mental Workload Assessment


Figure 2.3 The supply-demand curve of resource allocation, illustrating the concept of the “red line”. Wickens, Christopher; Hollands, Justin G.; Engineering Psychology and Human Performance, 3rd Edition, (c) 2000. Reprinted by permission of Pearson Education, Inc., Upper Saddle River, NJ.

Mental workload may be roughly described as the relation between the attentional resource demands (fuel requirements) imposed by tasks and the resources supplied by the operator in performing those tasks (fuel available “in the tank”). In the former case, resource requirements can be specified by critical task characteristics that impose greater demands, such as the working memory demands of a task, the number of mental operations, the signal-to-noise ratio of its displayed elements, the compatibility of the mapping from display to control, the precision of required control, the time pressure, or simply the number of tasks imposed at one time. Because a given task environment may be characterized by several of these dimensions at once, each expressed in very different units, the issue of how to combine them into a single metric of “mental workload imposed” is quite challenging, to say the least. It is complicated further because the demands of a task configuration will decrease with the skill and practice of the performer.

In the case of resources supplied, there is some evidence that measures of “effort investment” may be more quantifiable, in terms of either physiological measures (Tsang & Vidulich, 2006; Kramer & Parasuraman, 2007) such as heart rate variability or pupil diameter, or in terms of subjective measures (Hart & Staveland, 1988; Hill et al., 1992; Tsang & Vidulich, 2006).

Both measures of resources required and resources supplied (invested) are joined in the “supply-demand” function shown in Figure 2.3, in which increasing task demands (x-axis) are met with increasing resources supplied (solid line), up to the point at which available resources are “maxed out.” Performance on the task(s) in question (the dashed line) is perfect up to this point, but further increases in demand cannot be met, and performance then declines. In the parlance of the previous section, this inflection point on both curves is often referred to as the “red line” of workload: designers should strive to keep task demands always slightly to the left of this point. The desire to stay to the left of the inflection is driven by the design goal of maintaining a margin of “reserve capacity” with which to deal with unexpected emergencies should something go wrong.
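The relation in Figure 2.3 can be restated as a simple piecewise function: resources supplied track demand up to a fixed capacity (the red line), and performance degrades only with the shortfall beyond that capacity. The sketch below uses arbitrary units, a hypothetical capacity, and an assumed linear penalty.

```python
# Piecewise sketch of Figure 2.3: supply saturates at capacity (the
# "red line"); performance declines only once demand exceeds the
# resources that can be supplied. All units are arbitrary.

CAPACITY = 10.0  # hypothetical maximum resources available

def resources_supplied(demand):
    # Solid line: supply matches demand until capacity is reached.
    return min(demand, CAPACITY)

def performance(demand, perfect=1.0, penalty_per_unit=0.1):
    # Dashed line: perfect until demand exceeds capacity, then the
    # shortfall degrades performance linearly (assumed form).
    shortfall = max(demand - CAPACITY, 0.0)
    return max(perfect - penalty_per_unit * shortfall, 0.0)

for d in (4, 8, 10, 12, 16):
    print(d, resources_supplied(d), performance(d))
```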

In addressing issues of workload, designers are confronted with two top-level questions. First, how can we predict or measure where along the x-axis of Figure 2.3 a particular task requirement falls in relation to the “red line”? Given the challenges of assessing either resources required or resources supplied, this can be a difficult enterprise, although progress is being made via the development of workload assessment measures and computational models of task demand (Laughery, Lebiere, & Archer, 2006). Second, if workload is either predicted or assessed to be above the red line, what can be done to reduce it? Solutions often can be categorized into those that:

  •  redesign the task (e.g., by changing an interface to use separate resources; by reducing incompatible mappings, by reducing working memory requirements, by facilitating information integration, etc.)
  •  “redesign” the operator by training
  •  impose automation

The third solution, using automation to eliminate or reduce human task demands, leads us to a final section relating automation to attention but also invoking a critical third variable, situation awareness.

Attention, Situation Awareness, Workload, and Automation

At a fundamental level, as suggested above, automation and attention demands (workload) are negatively related: the higher the level of automation invoked, the lower the operator’s workload. The pilot of a modern aircraft with an automated flight management system can fly a complex route with far less hands-on flying than the pilot of a general aviation airplane, in which stick, rudder, and throttle may need to be continuously adjusted. But such a simple relationship is complicated in many ways, particularly given the all-important influence of situation awareness (Endsley, 1995; Endsley, this handbook; Durso, Rawson, & Girotto, 2007; Banbury & Tremblay, 2004; Parasuraman, Sheridan, & Wickens, 2008; Wickens, 2008b). Thus it is now well established that higher levels of automation will degrade SA in two attention-related respects: monitoring/complacency and working memory.

With regard to monitoring, as automation assumes more tasks that would otherwise require human perception and supervision, the need to monitor what automation is doing decreases. In terms of the alerting systems discussed earlier, this was described as increasing reliance upon automation (Meyer, 2004; Meyer & Lee, this handbook), reflected in decreased scanning. Such decreases can be justified as, in some sense, optimal (Moray, 2003; Moray & Inagaki, 2000), given the low likelihood of automation failure. But if the human supervisor is not looking at automation (or the raw data it is processing), he or she will be slower to notice those very rare failures in the automated task domain. This is what Endsley has described as a reduction in level 1 situation awareness.

With regard to understanding, the relevant phenomenon in cognitive psychology is the generation effect (Slamecka & Graf, 1978). People are more likely to remember, even briefly, the status of a dynamic system if they have actively responded to change the system than if they have passively witnessed another agent (here automation) making those changes. You remember well the actions you have just taken: the resources invested in making those actions serve you well for future retention. In contrast, decreased memory for (or awareness of) the changed state of a highly automated system will leave the monitor of such a system less aware of its precise condition if a manual takeover is required in the case of a failure. This describes a degradation of Endsley’s level 2 SA (understanding); since in many dynamic systems the current state is predictive of future states, it also translates to a degradation of level 3 SA (prediction).

We note then that, as mediated by changes in automation level, there is a direct relationship between SA and workload, a finding that is partially (although imperfectly) documented by empirical research (e.g., Kaber & Endsley, 2004; see Wickens, 2008; Wickens, Li, Santamaria, Sebok, & Sarter, 2010, for a summary). System designers should therefore seek a compromise in adopting a level of automation, between keeping workload manageable and maintaining SA at a sufficiently high level so that the operator can effectively notice and enter the loop should things go wrong.

It is important to realize, however, that the automation-mediated trade-off (between workload and loss of situation awareness) is not inevitable (Tsang & Vidulich, 2006; Wickens, Li, et al., 2010). For example, on the one hand, it may be possible to increase the level of automation to some degree such that workload will decrease but SA will not; this will happen if the curves relating the decline of SA and of workload to increasing automation level are non-linear (Wickens, 2008). On the other hand, there are certainly design interventions that will simultaneously reduce workload while improving SA. Certainly training is one: the skilled operator will have less workload and greater SA than the novice. But importantly for this chapter, many aspects of display integration can also accomplish the combined goals: a well-designed, integrated, and intuitive display can provide a rapid, easy-to-process picture of a dynamic system (supporting situation awareness) and, in so doing, reduce the cognitive demands of information access, integration, and working memory, simultaneously lowering workload.

Conclusion

In conclusion, we have seen how both the fuel and the filter metaphors provide a useful way of representing many aspects of attention. Derived from basic theory, these two metaphors also carry important implications for system design and cognitive engineering. Yet despite the fact that theoretical concepts of attention have been prominent for over a century (James, 1890; Titchener, 1908) and have been applied to system design for over half that time (e.g., Craik, 1947), much remains to be done. For example, the two metaphors need to be better linked to understand the relationship between scanning, selection, and multitasking. In particular, computational models of how attention operates in the complex world beyond the laboratory must be formulated and subjected to rigorous empirical validation, with complex and heterogeneous tasks, to assess the strategies adopted by workers: when to perform tasks concurrently and when, once the red line is exceeded, to abandon concurrence and initiate serial multitasking. This is the invitation to the next generation of researchers.

References

Andre, A. D., & Wickens, C. D. (1992). Layout analysis for cockpit display systems. SID International Symposium Digest of Technical Papers. Paper presented at the Annual Symposium of the Society for Information Display, Seattle, WA.

Aretz, A. J. (1991). The design of electronic map displays. Human Factors, 33, 85–101.

Bailey, B. P., & Konstan, J. A. (2006). On the need for attention-aware systems: Measuring effects of interruption on task performance, error rate, and affective state. Computers in Human Behavior, 23, 685–708.

Banbury, S., & Tremblay, S. (Eds.). (2004). A cognitive approach to situation awareness: Theory and application. Aldershot, England: Ashgate.

Barclay, R. L., Vicari, J. J., Doughty, A. S., Johanson, J. F., & Greenlaw, R. L. (2006). Colonoscopic withdrawal times and adenoma detection during screening colonoscopy. New England Journal of Medicine, 355, 2533–2541.

Barnett, B. J., & Wickens, C. D. (1988). Display proximity in multicue information integration: The benefit of boxes. Human Factors, 30, 15–24.

Beck, R., Lohrenz, M., & Trafton, G. (2010). Measuring search efficiency in complex search tasks. Journal of Experimental Psychology: Applied, 16, 238–250.

Bennett, K. B., & Flach, J. M. (2011). Display and interface design: Subtle science, exact art. Boca Raton, FL: CRC Press.

Bennett, K. B., & Flach, J. (2012). Visual momentum redux. International Journal of Human-Computer Studies, 70, 399–414.

Bennett, K. B., & Flach, J. M. (1992). Graphical displays: Implications for divided attention, focused attention, and problem solving. Human Factors, 34, 513–533.

Boot, W., Kramer, A., & Becic, E. (2007). Capturing attention in the laboratory and the real world. In A. Kramer, D. Wiegmann, & A. Kirlik (Eds.), Attention: From theory to practice (pp. 27–45). Oxford, England: Oxford University Press.

Breznitz, S. (1983). Cry-wolf: The psychology of false alarms. Hillsdale, NJ: Erlbaum.

Broadbent, D. (1958). Perception and communication. New York, NY: Pergamon.

Burian, B. (2007). Perturbing the system: Emergency and off-nominal situations under NextGen. International Journal of Applied Aviation Studies, 8, 114–127.

Burns, C. M., & Hajdukiewicz, J. R. (2004). Ecological interface design. Boca Raton, FL: CRC Press.

Burns, C. M., Skraaning, G., Jamieson, G. A., Lau, N., Kwok, J., Welch, R., & Andresen, G. (2008). Evaluation of ecological interface design for nuclear process control: Situation awareness effects. Human Factors, 50, 663–679.

Carpenter, S. (2002). Sights unseen. APA Monitor, 32, 54–57.

Carswell, C. M., & Wickens, C. D. (1996). Mixing and matching lower-level codes for object displays: Evidence for two sources of proximity compatibility. Human Factors, 38, 1–22.

Craik, K. W. J. (1947). Theory of the human operator in control systems I: The operator as an engineering system. British Journal of Psychology, 38, 56–61.

Damos, D. L. (1997). Using interruptions to identify task prioritization in Part 121 air carrier operations. In R. Jensen (Ed.), Proceedings of the 9th International Symposium on Aviation Psychology. Columbus, OH: Ohio State University.

Dismukes, R. K. (2010). Remembrance of things future. In D. Harris (Ed.), Reviews of human factors & ergonomics (Vol. 6). Santa Monica, CA: Human Factors & Ergonomics Society.

Dixon, S. R., & Wickens, C. D. (2006). Automation reliability in unmanned aerial vehicle control: A reliance-compliance model of automation dependence. Human Factors, 48, 474–486.

Dixon, S. R., Wickens, C. D., & McCarley, J. (2007). On the independence of reliance and compliance: Are automation false alarms worse than misses? Human Factors, 49, 564–573.

Dorneich, M. C., Ververs, P. M., Mathan, S., Whitlow, S., & Hayes, C. C. (2012). Considering etiquette in the design of an adaptive system. Journal of Cognitive Engineering and Decision Making, 6(2), 243–265.

Drury, C. G. (1990). Visual search in industrial inspection. In D. Brogan (Ed.), Visual search (pp. 263–276). London, England: Taylor & Francis.

Drury, C. G. (1994). The speed accuracy tradeoff in industry. Ergonomics, 37, 747–763.

Drury, C. G. (2006). Inspection. In W. Karwowski (Ed.), International encyclopedia of ergonomics and human factors (Vol. 2). Boca Raton, FL: Taylor & Francis.

Drury, C. G., & Chi, C. F. (1995). A test of economic models of stopping policy in visual search. IIE Transactions, 27, 382–393.

Drury, C., Spencer, F., & Schurman, D. (1997). Measuring human detection performance in aircraft inspection. In Proceedings of the 41st Annual Meeting of the Human Factors Society. Santa Monica, CA: Human Factors & Ergonomics Society.

Drury, C. G., Maheswar, G., Das, A., & Helander, M. G. (2001). Improving visual inspection using binocular rivalry. International Journal of Production Research, 39, 2143–2153.

Duncan, J. (1984). Selective attention and the organization of visual information. Journal of Experimental Psychology: General, 113, 501–517.

Durso, F., Rawson, K., & Girotto, S. (2007). Comprehension and situation awareness. In F. Durso (Ed.), Handbook of applied cognition (pp. 163–194). Chichester, England: John Wiley.

Endsley, M. R. (1995). Toward a theory of situation awareness in dynamic systems. Human Factors, 37, 32–64.

Fisher, D. L., Coury, B. G., Tengs, T. O., & Duffy, S. A. (1989). Minimizing the time to search visual displays: The role of highlighting. Human Factors, 31(2), 167–182.

Fisher, D. L., & Tan, K. C. (1989). Visual displays: The highlighting paradox. Human Factors, 31, 17–31.

Fitts, P., & Posner, M. I. (1967). Human performance. Belmont, CA: Brooks/Cole.

Folk, C. L., Remington, R. W., & Johnston, J. C. (1992). Involuntary covert orienting is contingent on attentional control settings. Journal of Experimental Psychology: Human Perception and Performance, 18, 1030–1044.

Fougnie, D., & Marois, R. (2007). Executive working memory load induces inattentional blindness. Psychonomic Bulletin & Review, 14, 142–147.

Fowler, R., Williams, W., Fowler, M., & Young, D. (1968). An investigation of the relationship between operator performance and operator panel layout for continuous tasks (Technical Report No. 68-170). Wright-Patterson AFB, OH: US Air Force Flight Dynamics Lab.

Geisler, W. S., & Chou, K. (1995). Separation of low-level and high-level factors in complex tasks: Visual search. Psychological Review, 102, 356–378.

Goettl, B. P., Wickens, C. D., & Kramer, A. F. (1991). Integrated displays and the perception of graphical data. Ergonomics, 34, 1047–1063.

Gray, W., & Boehm-Davis, D. (2000). Milliseconds matter. Journal of Experimental Psychology: Applied, 6, 322–335.

Gray, W. D., & Fu, W. T. (2004). Soft constraints in interactive behavior: The case of ignoring perfect knowledge in-the-world for imperfect knowledge in-the-head. Cognitive Science, 28, 359–382.

Grundgeiger, T., Sanderson, P., MacDougall, H., & Balasubramanian, V. (2010). Interruption management in the intensive care unit. Journal of Experimental Psychology: Applied, 16, 317–334.

Hart, S. G., & Staveland, L. E. (1988). Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In P. A. Hancock & N. Meshkati (Eds.), Human mental workload (pp. 139–183). Amsterdam, The Netherlands: North Holland.

Hart, S. G., & Wickens, C. D. (2010). Cognitive workload. In NASA human systems integration handbook (Chapter 6). Washington, DC: National Aeronautics and Space Administration.

Hill, S. G., Iavecchia, H., Byers, J., Bittner, A., Zaklad, A., & Christ, R. (1992). Comparison of four subjective workload rating scales. Human Factors, 34, 429–440.

Ho, C. Y., Nikolic, M. I., Waters, M., & Sarter, N. B. (2004). Not now! Supporting interruption management by indicating the modality and urgency of pending tasks. Human Factors, 46, 399–410.

Hollands, J. G., Pavlovic, N. J., Enomoto, Y., & Jiang, H. (2008). Smooth rotation of 2-D and 3-D representations of terrain: An investigation into the utility of visual momentum. Human Factors, 50, 62–76.

Horrey, W. J., Lesch, M. F., & Garabet, A. (2009). Dissociation between driving performance and drivers’ subjective estimates of performance and workload in dual task conditions. Journal of Safety Research, 40, 7–12.

Horrey, W. J., Wickens, C. D., & Consalus, K. P. (2006). Modeling drivers’ visual attention allocation while interacting with in-vehicle technologies. Journal of Experimental Psychology: Applied, 12(2), 67–86.

Itti, L., & Koch, C. (2000). A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40, 1489–1506.

James, W. (1890). Principles of psychology. New York, NY: Holt. (Reprinted in 1950 by Dover Press.)

Jolicoeur, P., & Ingleton, M. (1991). Size invariance in curve tracing. Memory & Cognition, 19(1), 21–36.

Just, M. A., Carpenter, P. A., Keller, T. A., Emery, L., Zajac, H., & Thulborn, K. R. (2001). Interdependence of nonoverlapping cortical systems in dual cognitive tasks. Neuroimage, 14, 417–426.

Kaber, D. B., & Endsley, M. (2004). The effects of level of automation and adaptive automation on human performance, situation awareness and workload in a dynamic control task. Theoretical Issues in Ergonomics Science, 5, 113–153.

Kahneman, D. (1973). Attention and effort. Englewood Cliffs, NJ: Prentice Hall.

Kestin, I., Miller, B., & Lockhart, C. (1988). Auditory alarms during anesthesia monitoring. Anesthesiology, 69, 106–109.

Keller, M. D., & Rice, S. (2010). System-wide versus component-specific trust using multiple aids. The Journal of General Psychology, 137, 114–128.

Kirwan, B., & Ainsworth, L. (1992). A guide to task analysis. London, England: Taylor & Francis.

Koh, R., Park, T., Wickens, C., Teng, O., & Chia, N. (2011). Differences in attentional strategies by novice and experienced operating theatre scrub nurses. Journal of Experimental Psychology: Applied, 17, 233–246.

Kramer, A. F., & Parasuraman, R. (2007). Neuroergonomics—application of neuroscience to human factors. In J. Cacioppo, L. Tassinary, & G. Berntson (Eds.), Handbook of psychophysiology (2nd ed.). New York, NY: Cambridge University Press.

Kramer, A., Wiegmann, D., & Kirlik, A. (Eds.). (2007). Attention: From theory to practice. Oxford, England: Oxford University Press.

Kroft, P. D., & Wickens, C. D. (2003). Displaying multi-domain graphical database information: An evaluation of scanning, clutter, display size, and user interactivity. Information Design Journal, 11(1), 44–52.

Laudeman, I. V., & Palmer, E. A. (1995). Quantitative measurement of observed workload in the analysis of aircrew performance. International Journal of Aviation Psychology, 5(2), 187–198.

Laughery, K. R., Lebiere, C., & Archer, S. (2006). Modeling human performance in complex systems. In G. Salvendy (Ed.), Handbook of human factors & ergonomics (pp. 967–996). Hoboken, NJ: John Wiley & Sons.

Lees, M. N., & Lee, J. D. (2007). The influence of distraction and driving context on driver response to imperfect collision warning systems. Ergonomics, 50, 1264–1286.

Liao, J., & Moray, N. (1993). A simulation study of human performance deterioration and mental workload. Le Travail Humain, 56(4), 321–344.

Liu, Y. (1996). Queueing network modeling of elementary mental processes. Psychological Review, 103, 116–136.

Loukopoulos, L., Dismukes, R. K., & Barshi, I. (2009). The multitasking myth: Handling complexity in real-world operations. Burlington, VT: Ashgate.

Lu, S., Wickens, C. D., Sarter, N., & Sebok, A. (2011). Informing the design of multimodal displays: A meta-analysis of empirical studies comparing auditory and tactile interruptions. In Proceedings of the 2011 Meeting of the Human Factors & Ergonomics Society. Santa Monica, CA: Human Factors & Ergonomics Society.

Mack, A., & Rock, I. (1998). Inattentional blindness. Cambridge, MA: MIT Press.

Madhavan, P., Wiegmann, D., & Lacson, F. (2006). Automation failures on tasks easily performed by operators undermine trust in automated aids. Human Factors, 48, 241–256.

Maltz, M., & Shinar, D. (2003). New alternative methods in analyzing human behavior in cued target acquisition. Human Factors, 45, 281–295.

McCarley, J., Wickens, C., Sebok, A., Steelman-Allen, K., Bzostek, J., & Koenecke, C. (2009). Control of attention: Modeling the effects of stimulus characteristics, task demands, and individual differences (NASA NRA NNX07AV97A). Urbana, IL: University of Illinois Human Factors Division.

McFarlane, D. C., & Latorella, K. A. (2002). The scope and importance of human interruption in human-computer interaction design. Human-Computer Interaction, 17, 1–61.

McKee, S. P., & Nakayama, K. (1984). The detection of motion in the peripheral visual field. Vision Research, 24, 25–32.

Metzger, U., & Parasuraman, R. (2005). Automation in future air traffic management: Effects of decision aid reliability on controller performance and mental workload. Human Factors, 47, 33–49.

Meyer, D. E., & Kieras, D. E. (1997). A computational theory of executive cognitive processes and multiple-task performance: Part 1. Basic mechanisms. Psychological Review, 104, 3–65.

Meyer, J. (2001). Effects of warning validity and proximity on responses to warnings. Human Factors, 43(4), 563–572.

Meyer, J. (2004). Conceptual issues in the study of dynamic hazard warnings. Human Factors, 46(2), 196–204.

Monk, C., Trafton, G., & Boehm-Davis, D. (2008). The effect of interruption duration and demand on resuming suspended goals. Journal of Experimental Psychology: Applied, 14, 299–315.

Moray, N. (1986). Monitoring behavior and supervisory control. In K. R. Boff, L. Kaufman, & J. P. Thomas (Eds.), Handbook of perception and human performance (Vol. 2, pp. 40-1–40-51). New York, NY: Wiley & Sons.

Moray, N. (2003). Monitoring, complacency, scepticism and eutectic behaviour. International Journal of Industrial Ergonomics, 31(3), 175–178.

Moray, N., Dessouky, M. I., Kijowski, B. A., & Adapathya, R. (1991). Strategic behavior, workload and performance in task scheduling. Human Factors, 33, 607–632.

Moray, N., & Inagaki, T. (2000). Attention and complacency. Theoretical Issues in Ergonomics Science, 1, 354–365.

Moray, N., & Rotenberg, I. (1989). Fault management in process control: Eye movements and action. Ergonomics, 32(11), 1319–1342.

Morgan, P., Patrick, J., Waldron, S., King, S., & Patrick, T. (2009). Improving memory after interruption: Exploiting soft constraints and manipulating information access cost. Journal of Experimental Psychology: Applied, 15, 291–306.

Most, S. B., & Astur, R. S. (2007). Feature based attentional set as a cause of traffic accidents. Visual Cognition, 15(2), 125–132.

Navon, D., & Gopher, D. (1979). On the economy of the human processing system. Psychological Review, 86, 214–255.

Neisser, U. (1963). Decision time without reaction time: Experiments on visual search. American Journal of Psychology, 76, 376–395.

Norman, D. A., & Bobrow, D. G. (1975). On data-limited and resource-limited processes. Cognitive Psychology, 7, 44–64.

Nunes, A., Wickens, C. D., & Yin, S. (2006). Examining the viability of the Neisser search model in the flight domain and the benefits of highlighting in visual search. In Proceedings of the 50th Annual Meeting of the Human Factors & Ergonomics Society (pp. 35–39). Santa Monica, CA: Human Factors and Ergonomics Society.

Olmos, O., Liang, C.-C., & Wickens, C. D. (1997). Electronic map evaluation in simulated visual meteorological conditions. International Journal of Aviation Psychology, 7, 37–66.

Parasuraman, R., Sheridan, T., & Wickens, C. D. (2008). Situation awareness, mental workload, and trust in automation: Viable, empirically supported cognitive engineering constructs. Journal of Cognitive Engineering and Decision Making, 2, 141–161.

Pashler, H. E. (1998). The psychology of attention. Cambridge, MA: MIT Press.

Peebles, D. (2008). The effect of emergent features on judgments of quantity in configural and separable displays. Journal of Experimental Psychology: Applied, 14, 85–100.

Pomerantz, J. R., & Pristach, E. A. (1989). Emergent features, attention, and perceptual glue in visual form perception. Journal of Experimental Psychology: Human Perception and Performance, 15, 635–649.

Previc, F. H. (1998). The neuropsychology of 3-D space. Psychological Bulletin, 124, 123–164.

Previc, F. H. (2000). Neuropsychological guidelines for aircraft control stations. IEEE Engineering in Medicine and Biology, March/April, 81–88.

Proctor, R., & Proctor, J. (2006). Selection and control of action. In G. Salvendy (Ed.), Handbook of human factors and ergonomics (3rd ed., pp. 89–110). New York, NY: John Wiley.

Raby, M., & Wickens, C. D. (1994). Strategic workload management and decision biases in aviation. International Journal of Aviation Psychology, 4(3), 211–240.

Rantanen, E. M., Wickens, C. D., Xu, X., & Thomas, L. C. (2004). Developing and validating human factors certification criteria for cockpit displays of traffic information avionics (Technical Report AHFD-04-1/FAA-04-1). Savoy, IL: University of Illinois, Aviation Human Factors Division.

Regan, M., Lee, J., & Young, K. (2009). Driver distraction. Boca Raton, FL: CRC Press.

Remington, R. W., Johnston, J. C., Ruthruff, E., Gold, M., & Romera, M. (2001). Visual search in complex displays: Factors affecting conflict detection by air traffic controllers. Human Factors, 42, 349–366.

Rensink, R. A. (2002). Change detection. Annual Review of Psychology, 53, 245–277.

Robertson, G., Czerwinski, M., Fisher, D., & Lee, B. (2009). Human factors of information visualization. In F. Durso (Ed.), Reviews of human factors and ergonomics (Vol. 5). Santa Monica, CA: Human Factors and Ergonomics Society.

Rogers, R. D., & Monsell, S. (1995). Costs of a predictable switch between simple cognitive tasks. Journal of Experimental Psychology: General, 124, 207–231.

Rosenholtz, R., Li, Y., & Nakano, L. (2007). Measuring visual clutter. Journal of Vision, 7, 1–22.

Salvucci, D., & Taatgen, N. A. (2008). Threaded cognition. Psychological Review, 115, 101–130.

Salvucci, D., & Taatgen, N. A. (2011). The multitasking mind. Oxford, England: Oxford University Press.

Sanders, M., & McCormick, E. (1993). Human factors in engineering and design. New York, NY: Wiley.

Sanquist, T. F., Doctor, P., & Parasuraman, R. (2008). A threat display concept for radiation detection in homeland security cargo screening. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications, 38, 856–860.

Sarno, K. J., & Wickens, C. D. (1995). Role of multiple resources in predicting time-sharing efficiency: Evaluation of three workload models in a multiple-task setting. International Journal of Aviation Psychology, 5(1), 107–130.

Schneider, W., & Shiffrin, R. (1977). Controlled and automatic human information processing I: Detection, search and attention. Psychological Review, 84, 1–66.

Scholl, B. J. (2001). Objects and attention: The state of the art. Cognition, 80, 1–46.

Seagull, F. J., & Sanderson, P. M. (2001). Anesthesiology alarms in context: An observational study. Human Factors, 43, 66–78.

Seidler, K. S., & Wickens, C. D. (1992). Distance and organization in multifunction displays. Human Factors, 34, 555–569.

Senders, J. (1964). The human operator as a monitor and controller of multidegree of freedom systems. IEEE Transactions on Human Factors in Electronics, HFE-5, 2–6.

Senders, J. (1980). Visual scanning processes (Unpublished doctoral dissertation). University of Tilburg, The Netherlands.

Sheridan, T. (1970). On how often the supervisor should sample. IEEE Transactions on Systems Science and Cybernetics, SSC-6(2), 140–145.

Simons, D. J., & Levin, D. T. (1997). Change blindness. Trends in Cognitive Science, 1(7), 261–267.

Slamecka, N. J., & Graf, P. (1978). The generation effect: Delineation of a phenomenon. Journal of Experimental Psychology: Human Learning and Memory, 4, 592–604.

Sorkin, R. D. (1989). Why are people turning off our alarms? Human Factors Society Bulletin, 32(4), 3–4.

Sorkin, R. D., Kantowitz, B. H., & Kantowitz, S. C. (1988). Likelihood alarm displays. Human Factors, 30(4), 445–459.

Stanton, N. A., & Baber, C. (2008). Modelling of human alarm handling response times: A case study of the Ladbroke Grove rail accident in the UK. Ergonomics, 51, 423–440.

Steelman-Allen, K. S., McCarley, J. S., & Wickens, C. D. (2011). Modeling the control of attention in complex visual displays. Human Factors, 53, 143–153.

St. John, M., & Manes, D. I. (2002). Making unreliable automation useful. In Proceedings of the 46th Annual Meeting of the Human Factors & Ergonomics Society. Santa Monica, CA: Human Factors & Ergonomics Society.

St. John, M., & Smallman, H. (2008). Four design principles for supporting situation awareness. Journal of Cognitive Engineering and Decision Making, 2, 118–139.

Sternberg, S. (1966). High speed scanning in human memory. Science, 153, 652–654.

Teichner, W. H., & Mocharnuk, J. B. (1979). Visual search for complex targets. Human Factors, 21, 259–275.

Titchener, E. B. (1908). Lectures on the elementary psychology of feeling and attention. New York, NY: Macmillan.

Trafton, J. G., Altmann, E. M., & Brock, D. P. (2005). Huh? What was I doing? How people use environmental cues after an interruption. In Proceedings of the Human Factors and Ergonomics Society 49th Annual Meeting (pp. 468–472). Santa Monica, CA: Human Factors and Ergonomics Society.

Trafton, J. G., & Monk, C. (2007). Dealing with interruptions. In Reviews of human factors & ergonomics (Vol. 3, Chapter 4). Santa Monica, CA: Human Factors & Ergonomics Society.

Treisman, A. (1986). Properties, parts, and objects. In K. R. Boff, L. Kaufman, & J. P. Thomas (Eds.), Handbook of perception and human performance (Vol. 2, pp. 35-1–35-70). New York, NY: Wiley and Sons.

Tsang, P., & Vidulich, M. (2006). Mental workload and situation awareness. In G. Salvendy (Ed.), Handbook of human factors & ergonomics (pp. 243–268). New York, NY: John Wiley.

Tulga, M. K., & Sheridan, T. B. (1980). Dynamic decisions and workload in multitask supervisory control. IEEE Transactions on Systems, Man and Cybernetics, SMC-10, 217–232.

Vicente, K. J. (2002). Ecological interface design: Progress and challenges. Human Factors, 44, 62–78.

Welford, A. T. (1967). Single channel operation in the brain. Acta Psychologica, 27, 5–21.

Wickens, C. D. (1980). The structure of attentional resources. In R. Nickerson (Ed.), Attention and performance (Vol. 8, pp. 239–257). Hillsdale, NJ: Erlbaum.

Wickens, C. D. (1984). Processing resources in attention. In R. Parasuraman & R. Davies (Eds.), Varieties of attention (pp. 63–101). New York, NY: Academic Press.

Wickens, C. D. (1993). Cognitive factors in display design. Journal of the Washington Academy of Sciences, 83(4), 179–201.

Wickens, C. D. (2000a). The tradeoff of design for routine and unexpected performance: Implications of situation awareness. In D. J. Garland & M. R. Endsley (Eds.), Situation awareness analysis and measurement. Mahwah, NJ: Lawrence Erlbaum.

Wickens, C. D. (2000b). Human factors in vector map design: The importance of task-display dependence. Journal of Navigation, 53(1), 54–67.

Wickens, C. D. (2002). Multiple resources and performance prediction. Theoretical Issues in Ergonomics Science, 3(2), 159–177.

Wickens, C. D. (2005). Multiple resource time sharing models. In N. Stanton et al. (Eds.), Handbook of human factors and ergonomics methods (pp. 40-1–40-7). Boca Raton, FL: CRC Press.

Wickens, C. D. (2008a). Multiple resources and mental workload. Human Factors, 50 (Golden Anniversary Special Issue), 449–455.

Wickens, C. D. (2008b). Situation awareness: Review of Mica Endsley’s articles on situation awareness. Human Factors, 50 (Golden Anniversary Special Issue), 397–403.

Wickens, C. D. (2012). Noticing events in the visual workplace: The SEEV and NSEEV models. In R. Hoffman & R. Parasuraman (Eds.), Handbook of applied cognitive engineering. Cambridge, UK: Cambridge University Press.

Wickens, C. D., & Alexander, A. L. (2009). Attentional tunneling and task management in synthetic vision displays. International Journal of Aviation Psychology, 19, 1–17.

Wickens, C. D., Alexander, A. L., Ambinder, M. S., & Martens, M. (2004). The role of highlighting in visual search through maps. Spatial Vision, 17, 373–388.

Wickens, C. D., Bagnall, T., Gosakan, M., & Walters, B. (2011). Modeling single pilot control of multiple UAVs. In M. Vidulich & P. Tsang (Eds.), Proceedings of the 16th International Symposium on Aviation Psychology. Dayton, OH: Wright State University.

Wickens, C. D., & Carswell, C. M. (1995). The proximity compatibility principle: Its psychological foundation and relevance to display design. Human Factors, 37(3), 473–494.

Wickens, C. D., & Colcombe, A. (2007). Performance consequences of imperfect alerting automation associated with a cockpit display of traffic information. Human Factors, 49, 564–572.

Wickens, C. D., & Dixon, S. R. (2007). The benefits of imperfect automation: A synthesis of the literature. Theoretical Issues in Ergonomics Science, 8(3), 201–212.

Wickens, C. D., Dixon, S., Goh, J., & Hammer, B. (2005). Pilot dependence on imperfect diagnostic automation in simulated UAV flights: An attentional visual scanning analysis. In M. Vidulich & P. Tsang (Eds.), Proceedings of the 13th International Symposium on Aviation Psychology. Wright-Patterson AFB, Dayton, OH.

Wickens, C. D., Goh, J., Helleberg, J., Horrey, W., & Talleur, D. A. (2003). Attentional models of multi-task pilot performance using advanced display technology. Human Factors, 45(3), 360–380.

Wickens, C. D., & Hollands, J. (2000). Engineering psychology and human performance (3rd ed.). Upper Saddle River, NJ: Prentice Hall.

Wickens, C. D., Hollands, J., Banbury, S., & Parasuraman, R. (2012). Engineering psychology and human performance (4th ed.). Upper Saddle River, NJ: Pearson.

Wickens, C. D., Hooey, B. L., Gore, B. F., Sebok, A., & Koenicke, C. S. (2009). Identifying black swans in NextGen: Predicting human performance in off-nominal conditions. Human Factors, 51, 638–651.

Wickens, C. D., & Horrey, W. (2009). Models of attention, distraction and highway hazard avoidance. In M. Regan, J. D. Lee, & K. L. Young (Eds.), Driver distraction: Theory, effects and mitigation (pp. 57–72). Boca Raton, FL: CRC Press.

Wickens, C. D., & McCarley, J. S. (2008). Applied attention theory. Boca Raton, FL: CRC Press.

Wickens, C. D., McCarley, J. S., Alexander, A. L., Thomas, L. C., Ambinder, M., & Zheng, S. (2008). Attention-situation awareness (A-SA) model of pilot error. In D. Foyle & B. Hooey (Eds.), Human performance models in aviation. Boca Raton, FL: Taylor & Francis.

Wickens, C., Prinet, J., Hutchins, S., Sarter, N., & Sebok, A. (2011). Auditory-visual redundancy in vehicle control interruptions: Two meta-analyses. In Proceedings of the 2011 Annual Meeting of the Human Factors & Ergonomics Society. Santa Monica, CA: Human Factors & Ergonomics Society.

Wickens, C., Rice, S., Keller, M. D., Hutchins, S., Hughes, J., & Clayton, K. (2009). False alerts in air traffic control conflict alerting system: Is there a cry wolf effect? Human Factors, 51, 446–462.

Wickens, C. D., Li, H., Santamaria, A., Sebok, A., & Sarter, N. (2010). Stages and levels of automation: An integrated meta-analysis. In Proceedings of the 2010 Conference of the Human Factors & Ergonomics Society. Santa Monica, CA: Human Factors and Ergonomics Society.

Wickens, C. D., & Seidler, K. (1997). Information access in a dual task context. Journal of Experimental Psychology: Applied, 3, 196–215.

Wickens, C. D., Ververs, P., & Fadden, S. (2004). Head-up display design. In D. Harris (Ed.), Human factors for civil flight deck design (pp. 103–140). Aldershot, England: Ashgate.

Wickens, C. D., Vincow, M., Schopper, A., & Lincoln, J. (1997). Human performance models for display design. Wright-Patterson AFB, OH: Crew System Ergonomics Information Analysis Center (SOAR).

Wolfe, J. M. (1994). Guided search 2.0: A revised model of visual search. Psychonomic Bulletin and Review, 1, 202–238.

Wolfe, J. M. (2007). Guided search 4.0: Current progress with a model of visual search. In W. D. Gray (Ed.), Integrated models of cognitive systems (pp. 99–119). New York, NY: Oxford University Press.

Wolfe, J. M., & Horowitz, T. S. (2004). What attributes guide the deployment of visual attention and how do they do it? Nature Reviews Neuroscience, 5, 1–7.

Wolfe, J. M., Horowitz, T. S., & Kenner, N. M. (2005). Rare items often missed in visual searches. Nature, 435, 439–440.

Woods, D. D. (1984). Visual momentum: A concept to improve the coupling of person and computer. International Journal of Man-Machine Studies, 21, 229–244.

Xiao, Y., Seagull, F. J., Nieves-Khouw, F., Barczak, N., & Perkins, S. (2004). Organizational-historical analysis of the “failure to respond to alarm” problems. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 34, 772–778.

Yantis, S. (1993). Stimulus-driven attentional capture. Current Directions in Psychological Science, 2, 156–161.

Yeh, M., & Wickens, C. D. (2001a). Display signaling in augmented reality: The effects of cue reliability and image realism on attention allocation and trust calibration. Human Factors, 43(3), 355–365.

Yeh, M., & Wickens, C. D. (2001b). Attentional filtering in the design of electronic map displays: A comparison of color-coding, intensity coding, and decluttering techniques. Human Factors, 43(4), 543–562.

Yeh, M., Merlo, J. L., Wickens, C. D., & Brandenburg, D. L. (2003). Head up versus head down: The costs of imprecision, unreliability, and visual clutter on cue effectiveness for display signaling. Human Factors, 45(3), 390–407.

Christopher D. Wickens

Christopher Wickens, Human Factors, University of Illinois at Urbana-Champaign, Urbana, IL
