Attention & awareness
A curious puzzle about visual experience is that while we feel like we see everything around us in rich detail, we can also fail to report obvious changes that happen right in front of our eyes. Perhaps the best-known demonstration of this apparent failure of awareness is inattentional blindness (IB). In IB experiments, an unexpected visual stimulus (e.g., a person in a gorilla suit) appears while subjects are engaged in an attentionally demanding task (e.g., counting the passes of a basketball). For decades, it has been widely assumed that subjects who say they were unaware of the unexpected object in IB tasks truly had no awareness of it. In a forthcoming paper with Drs. Ian Phillips, Howard Egeth, and Chaz Firestone, using data from more than 25,000 subjects, we show that the “inattentionally blind” see more than they report noticing.
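To make the logic of that claim concrete, here is a minimal sketch of one way residual sensitivity can be detected in subjects who deny noticing: test whether their forced-choice answers about the unexpected object (e.g., was it on the left or the right?) beat chance. The numbers and analysis below are invented for illustration and are not the paper’s actual data or method.

```python
# Hypothetical sketch: do "non-noticers" still answer a two-alternative
# forced-choice (2AFC) question about the unexpected object above chance?
# All numbers are invented for illustration.
from math import comb

n_non_noticers = 200   # subjects who reported no awareness of the object
n_correct = 124        # of those, how many answered the 2AFC question correctly
chance = 0.5           # expected accuracy if they truly saw nothing

# One-sided exact binomial test: P(X >= n_correct) under pure guessing.
p_value = sum(
    comb(n_non_noticers, k) * chance**k * (1 - chance) ** (n_non_noticers - k)
    for k in range(n_correct, n_non_noticers + 1)
)
print(f"accuracy = {n_correct / n_non_noticers:.1%}, one-sided p = {p_value:.3g}")
```

Above-chance accuracy in a test like this would indicate sensitivity to the object even among subjects who report no awareness of it.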
Visual recognition
Humans (and other animals) have a remarkable capacity for recognizing and categorizing objects by their visual attributes, a capacity that until very recently was exclusive to biological systems. With the rapid improvement of artificially intelligent visual systems, questions arise about whether these seeing machines process the visual world in the same ways that seeing animals do. I have several projects examining how humans recognize which objects belong to which categories, and I have shown that people can successfully predict when machine visual systems will succeed or fail at recognizing objects.
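As a toy illustration of what predicting machine success and failure means operationally, one could score per-image human forecasts against a classifier’s actual outcomes. Everything below (data and scoring) is invented for illustration, not the actual studies.

```python
# Hypothetical sketch: how well do human forecasts anticipate a machine
# classifier's per-image successes and failures? Data are invented.
machine_correct = [True, False, True, True, False, True, False, True]
human_forecast  = [True, False, True, False, False, True, True, True]  # "will the machine succeed?"

n_images = len(machine_correct)
agreement = sum(m == h for m, h in zip(machine_correct, human_forecast))
forecast_accuracy = agreement / n_images

# Baseline: always predict the machine's more common outcome.
n_successes = sum(machine_correct)
base_rate = max(n_successes, n_images - n_successes) / n_images

print(f"forecast accuracy = {forecast_accuracy:.2f} (baseline = {base_rate:.2f})")
```

Forecast accuracy above that baseline is what it means, in this toy setup, for people to anticipate where the machine will fail.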
Human-AI collaboration
Artificially intelligent (AI) systems can assist humans in making critically important decisions, including detecting threats in airport luggage, finding and identifying cancer in medical images, and supporting semi-autonomous driving. Interestingly (and problematically), these human-machine interactions often work better in the lab than in the field. Dr. Jeremy Wolfe and I have developed a flexible and easy-to-implement Human-AI Collaboration Testing (HAICT) framework, built on principles of signal detection theory, to permit testing of human-AI teams before they are deployed in our airports, doctors’ offices, and vehicles.
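To illustrate the signal-detection backbone of this approach, here is a minimal sketch of the standard SDT measures (sensitivity d′ and criterion c) applied to a human alone, an AI alone, and the team on the same target-present/absent trials. The hit and false-alarm rates are invented, and this is a generic SDT illustration rather than the HAICT framework itself.

```python
# Minimal sketch of standard signal detection theory (SDT) measures for
# comparing a human, an AI, and a human-AI team. Rates are invented.
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse standard-normal CDF

def sdt_measures(hit_rate, false_alarm_rate):
    """Sensitivity d' and response criterion c."""
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

observers = {            # (hit rate, false-alarm rate)
    "human alone": (0.80, 0.10),
    "AI alone":    (0.85, 0.15),
    "human + AI":  (0.90, 0.12),
}

for name, (h, fa) in observers.items():
    d, c = sdt_measures(h, fa)
    print(f"{name:11s}  d' = {d:.2f}  c = {c:+.2f}")
```

In this framing, a healthy collaboration should yield a team d′ at least as high as the better member alone; a team that falls short of that benchmark is losing sensitivity to the interaction itself, which is exactly the kind of failure one would want to catch before deployment.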