In decision neuroscience experimentation, there often exists a disconnect. The questions that neuroscience researchers ask are at the behavioral level. That is, researchers want to predict behavior; they want to know how subjects will choose, react, and decide. However, the data collected in a given decision neuroscience experiment are not at the behavioral level; they are at the neural level. This disconnect should be of some concern to neuroscience researchers, and indeed is of concern to Brock Kirwan of Brigham Young University. He and his colleagues set out to develop a complementary pairing of neuroscience and behavioral studies focusing on computer security that demonstrated the necessity of behavioral experimentation to the field of decision neuroscience. Kirwan cited four chief goals of the study:
1. Observe whether neural processes result in behavioral change.
2. Select a behavioral study to enhance ecological and internal validity.
3. Extend neuroimaging experiments with behavioral experiments grounded in theory, rather than simply replicating them.
4. Use the neuroimaging and behavioral studies in tandem to inform IT artifact design.
With regard to the fourth goal, Kirwan sought to address the biological constraint of habituation in computer security. In the habituation process, as exposure to a stimulus increases, response to that stimulus decreases, which is bad news for users who are constantly bombarded with messages, warnings, and notifications on their devices. Through a series of four studies, both behavioral and neural, Kirwan worked to understand the process behind security warning habituation and how this information could be used to reduce habituation to these warnings.
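To make the dynamics concrete, habituation is often described as a response that decays toward a floor with repeated exposure. The sketch below is a toy illustration of that idea, not a model from Kirwan's studies; the decay rate and floor values are arbitrary assumptions chosen for demonstration.

```python
# Toy habituation model: the response to a repeated stimulus decays
# geometrically toward a floor. Illustrative only; the parameters
# below are made up, not estimates from Kirwan's data.

def habituated_response(exposure: int, initial: float = 1.0,
                        floor: float = 0.1, decay: float = 0.7) -> float:
    """Response strength after `exposure` prior presentations."""
    return floor + (initial - floor) * decay ** exposure

for n in range(6):
    print(f"exposure {n}: response = {habituated_response(n):.3f}")
```

Under these made-up parameters, the response falls from 1.0 on first exposure to roughly 0.25 by the fifth, which is the pattern the warning-design question is trying to counteract.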
In Study 1, subjects were exposed to computer security warnings that contained the same content but different presentations of that content. Kirwan compared habituation to static warnings with habituation to polymorphic warnings, which moved, jiggled, or contained jarring colors or symbols. He found that static warnings led to habituation, while the polymorphic variations led subjects to pay more attention and attenuated habituation.
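One way to operationalize this manipulation is to keep the warning content fixed while drawing a fresh visual treatment on every presentation in the polymorphic condition. The sketch below is a hypothetical illustration of that design; the treatment names are invented, not taken from the study.

```python
import random

# Hypothetical visual treatments for a polymorphic warning; the warning's
# content stays the same across exposures, only its presentation changes.
TREATMENTS = ["jiggle", "zoom", "color_swap", "border_flash", "highlight"]

def present_warning(content: str, condition: str, rng: random.Random) -> dict:
    """Return one warning presentation for the given condition.

    In the static condition the presentation never changes; in the
    polymorphic condition a new treatment is drawn on every exposure.
    """
    treatment = "none" if condition == "static" else rng.choice(TREATMENTS)
    return {"content": content, "treatment": treatment}

rng = random.Random(0)
for _ in range(3):
    print(present_warning("This extension can read your browsing data.",
                          "polymorphic", rng))
```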
In Study 2, subjects were asked to use Google Chrome to install various extensions whose permissions were manipulated to range from mild to deeply concerning. Again, Kirwan tested static vs. polymorphic variations of the permissions window. Additionally, mouse tracking was used to observe subjects’ behavior in response to the different permissions windows. Attention was gauged by how circuitous a route the user’s mouse took to the “Agree” button on the window. Presumably, those whose cursors took a straight-line path to the button were paying less attention than those whose cursors moved up and around the page first (a proxy for reading the permissions). Again, those who saw polymorphic variations paid more attention than those who saw static windows.
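A natural way to compute this circuitousness proxy is the ratio of the cursor's total traveled distance to the straight-line distance from its starting point to the click: a ratio near 1 suggests a beeline, while larger values suggest the cursor wandered over the page. Below is a minimal sketch of such a metric, assuming cursor positions are logged as (x, y) pixel coordinates; it illustrates the idea, not the study's actual analysis code.

```python
import math

def circuitousness(path: list[tuple[float, float]]) -> float:
    """Ratio of traveled distance to straight-line distance.

    `path` is a sequence of (x, y) cursor positions from the moment the
    window appears to the click on the "Agree" button. Values near 1.0
    indicate a beeline (low attention under the proxy); larger values
    indicate a more circuitous route.
    """
    traveled = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    direct = math.dist(path[0], path[-1])
    return traveled / direct if direct > 0 else float("inf")

beeline = [(0, 0), (50, 40), (100, 80)]                 # straight to the button
wandering = [(0, 0), (200, 10), (180, 300), (100, 80)]  # roamed the page first
print(circuitousness(beeline))    # ~1.0
print(circuitousness(wandering))  # well above 1.0
```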
In Study 3, a longitudinal fMRI and eye-tracking study, subjects were exposed to static and polymorphic warnings over a five-day period. Repetition suppression, a reduction in neural activity when stimuli are repeated, occurred for the static, but not the polymorphic, variations.
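Repetition suppression can be quantified, for example, as the proportional drop in a region's response between the first and last presentations of a stimulus. The sketch below computes such an index over per-exposure activity estimates; it is a generic illustration with made-up numbers, not the analysis pipeline from Study 3.

```python
def repetition_suppression_index(responses: list[float]) -> float:
    """Proportional drop in response from first to last exposure.

    `responses` holds a region's mean activity estimate (e.g., a BOLD
    beta) for each successive presentation of the same stimulus.
    Positive values indicate suppression; values near 0 indicate none.
    """
    first, last = responses[0], responses[-1]
    return (first - last) / first

static = [1.00, 0.70, 0.55, 0.45, 0.40]        # activity falls with repetition
polymorphic = [1.00, 0.97, 0.95, 0.96, 0.94]   # activity roughly maintained
print(repetition_suppression_index(static))       # 0.60 (strong suppression)
print(repetition_suppression_index(polymorphic))  # 0.06 (little suppression)
```

The pattern the hypothetical numbers depict mirrors the study's finding: activity declines across repeated static warnings but holds roughly steady across polymorphic ones.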
In Study 4, a mobile field experiment, subjects were asked to download a series of Android apps with manipulated permission warnings, much like in Study 2. Again, there was less habituation with the polymorphic variations than with the static permission warnings.
Kirwan’s studies culminated in two important conclusions. First, while habituation isn’t necessarily bad (indeed, there is no reason for our brains to dedicate maximum resources to everything), it can have serious security implications and needs to be taken into account when designing IT artifacts. Second, neuroscience needs behavior. Decision neuroscience research and behavioral research are not substitutes but complements, and both should be considered by researchers hoping to understand and predict human behavior.