Makaela Nartker

A photo of Makaela

I'm a 4th-year PhD student and Science of Learning Fellow in the Psychological & Brain Sciences Department at Johns Hopkins University. I use classical computational tools like machine learning and signal detection theory in new ways to better understand visual awareness and object recognition in human and artificially intelligent (AI) visual systems. My doctoral research has focused on questions like: Why do we sometimes fail to notice things that are right in front of our eyes? Why do artificially intelligent visual systems make the mistakes that they do? And how should human-machine teams interact to produce the best overall outcomes in important tasks like medical image perception and semi-autonomous driving?

[Curriculum vitae]

Email: makaela@jhu.edu

A blurry black & white photograph of a person in a gorilla suit being ignored by a passing man, created by DALL·E 2 and affectionately titled 'Why do I always go unnoticed?'

Attention & Awareness

A curious puzzle about visual experience is that while we feel like we see everything around us in rich detail, we can also fail to report obvious changes that happen right in front of our eyes. Perhaps the best-known demonstration of this apparent failure of awareness is inattentional blindness (IB). In IB experiments, an unexpected stimulus (e.g., a person in a gorilla suit) appears while subjects are engaged in an attentionally demanding task (e.g., counting the passes of a basketball). For decades it has been widely thought that subjects who say they were unaware of the unexpected object in IB tasks truly had no awareness of it. In a forthcoming paper with Drs. Ian Phillips, Howard Egeth, & Chaz Firestone, using data from more than 25,000 subjects, we show that the “inattentionally blind” see more than they report noticing.
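To give a flavor of the signal detection approach behind this framing (a minimal sketch of my own, not the paper's actual analysis pipeline), sensitivity and response bias can be estimated separately by comparing "yes, I noticed something" reports between subjects shown an unexpected stimulus and control subjects shown nothing. All counts below are hypothetical.

```python
# Minimal signal detection sketch for an inattentional blindness design.
# "Hits" are "yes, I noticed" reports when an unexpected stimulus was shown;
# "false alarms" are "yes" reports from control subjects shown nothing.
# All counts are hypothetical, for illustration only.
from scipy.stats import norm

def rate(yes, no):
    # Log-linear correction so extreme proportions (0 or 1) stay finite.
    return (yes + 0.5) / (yes + no + 1)

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity: how well reports track the stimulus, regardless of bias."""
    return norm.ppf(rate(hits, misses)) - norm.ppf(rate(false_alarms, correct_rejections))

def criterion(hits, misses, false_alarms, correct_rejections):
    """Response bias: positive c means subjects are conservative about saying 'yes'."""
    return -0.5 * (norm.ppf(rate(hits, misses)) + norm.ppf(rate(false_alarms, correct_rejections)))

print(d_prime(420, 580, 80, 920))    # ~1.2: above-zero sensitivity...
print(criterion(420, 580, 80, 920))  # ~0.8: ...paired with a conservative criterion
```

On this toy pattern, subjects' reports carry real information about the stimulus (positive d') even though a conservative criterion makes "no" the default answer, which is exactly why yes/no reports alone can understate what people saw.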


A gif of two images of bubbles, one of which was misclassified as a 'salt shaker' by state-of-the-art AI

Visual recognition

Humans (and other animals) have a remarkable capacity for recognizing and categorizing objects by their visual attributes, a capacity that until very recently was exclusive to biological systems. With the rapid improvement of artificially intelligent visual systems, new questions arise about whether these new seeing machines process the visual world in the same ways as seeing animals do. I have several projects focusing on how humans recognize which objects belong in which categories, and I have shown that people can successfully predict when machine visual systems will succeed or fail at recognizing objects.


An example trial from the HAICT experiment

Human-AI collaboration

Artificially intelligent (AI) systems can assist humans in making critically important decisions, including detecting threats in airport luggage, finding and identifying cancer in medical images, and supporting semi-autonomous driving. Interestingly (and problematically), these human-machine interactions often work better in the lab than in the field. Dr. Jeremy Wolfe and I have developed a flexible and easy-to-implement Human-AI Collaboration Testing (HAICT) framework built on principles of signal detection theory, permitting human-AI teams to be tested before they are deployed in our airports, doctors' offices, and vehicles.
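As one illustration of the kind of baseline such a framework can test against (a toy sketch of my own, not the HAICT implementation), signal detection theory predicts that if the human and the AI judge from statistically independent evidence, the best achievable team sensitivity is the root-sum-of-squares of their individual sensitivities; comparing a measured team d' to this benchmark quantifies how much of the potential collaborative benefit is actually realized.

```python
# Toy benchmark for human-AI team sensitivity, assuming each partner's
# decisions rest on independent, equal-variance Gaussian evidence.
# Under optimal integration, team d' = sqrt(d'_human^2 + d'_ai^2).
# All numbers below are hypothetical.
import math

def ideal_team_dprime(d_human: float, d_ai: float) -> float:
    """Optimal-integration prediction for two independent observers."""
    return math.hypot(d_human, d_ai)

def collaboration_efficiency(d_team_observed: float, d_human: float, d_ai: float) -> float:
    """Fraction of the ideal benchmark an observed team actually achieves."""
    return d_team_observed / ideal_team_dprime(d_human, d_ai)

# Hypothetical radiologist (d' = 2.0) paired with an AI reader (d' = 1.5):
print(ideal_team_dprime(2.0, 1.5))              # 2.5
print(collaboration_efficiency(2.1, 2.0, 1.5))  # 0.84: benefit only partly realized
```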

Publications


Recorded Talks

    Vision Sciences Society meeting (2020)


    Vision Sciences Society meeting (2022)


    Object Perception, Attention, & Memory meeting (2022)