Saccade sampling and objects

Our world is full of objects that we gather visual information about using saccades, and that we may eventually want to interact with. How do the properties of these objects influence how we plan saccades, and what information we gather from them?

In this paper, we used a novel "search and choice" paradigm to answer these questions:

Stewart, E.E.M., Ludwig, C.J.H., & Schütz, A.C. (2022). Humans represent the precision and utility of information acquired across fixations. Scientific Reports, 12, 2411.

Overview
We showed people arrays of real-world objects like the ones below:

As you can see, objects like the duck look very different from different viewpoints, while objects like the ramekin on the right look almost identical from every viewpoint. 

We quantified this as the "rotational discriminability" of objects by asking people to rotate 577 object images like these to show the front, side, and back of each object. Here you can see some example response distributions.

We then measured the "mean resultant length" of the responses for each side to quantify agreement between participants. Objects where there was greater agreement were taken to be more distinct from one viewpoint to the next. 
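As a rough illustration, here is how the mean resultant length can be computed from a set of angular responses. This is a minimal sketch using standard circular statistics in NumPy, not the analysis code from the paper, and the example response values are made up.

```python
import numpy as np

def mean_resultant_length(angles_deg):
    """Mean resultant length R of a set of angles (in degrees).

    R is 1 when all responses point in exactly the same direction
    and approaches 0 when responses are spread uniformly around the circle.
    """
    angles = np.deg2rad(np.asarray(angles_deg, dtype=float))
    # Represent each response as a unit vector on the circle and average them;
    # the length of the average vector is the mean resultant length.
    return np.abs(np.mean(np.exp(1j * angles)))

# Hypothetical "show me the front" settings (in degrees) for two objects.
duck_front = [2, 355, 10, 5, 358]        # tight agreement
ramekin_front = [10, 95, 200, 310, 170]  # little agreement

print(mean_resultant_length(duck_front))     # ~0.996 -> high rotational discriminability
print(mean_resultant_length(ramekin_front))  # ~0.09  -> low rotational discriminability
```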

Participants could then free-view these objects and choose two that they would prefer to report on, via a match-to-sample task in which they had to rotate objects to match what they had seen. Highly rotationally discriminable objects like the duck are much easier to report in this task than objects like the ramekin, so they were considered to have higher task utility.

We hypothesised that participants would retain a representation, or metaknowledge, of the precision of the information gathered across fixations. If participants know how much information they have about each object, they should choose to report the objects that they have fixated, have more information about, and can report more precisely; report choice can therefore be used as a measure of perceptual confidence. We thus predicted links between fixation behaviour, information uptake, perceptual confidence (report choice) and perceptual error. The properties of the objects themselves could also influence both report choice and fixation behaviour: participants should choose objects with greater task utility for the match-to-sample task, and should therefore fixate these easier objects in order to gather more information from them.
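To make that prediction concrete, here is a purely illustrative sketch (not the model from the paper) of how report choice could depend on whether an object was fixated and on its task utility. The function name and weights are hypothetical.

```python
import numpy as np

def p_choose(fixated, utility, w_fix=1.5, w_util=2.0, bias=-1.0):
    """Toy logistic model: choice probability rises with fixation and task utility.

    `fixated` is a boolean, `utility` a value in [0, 1] (e.g. derived from
    rotational discriminability); the weights are arbitrary illustrative values.
    """
    z = bias + w_fix * float(fixated) + w_util * float(utility)
    return 1.0 / (1.0 + np.exp(-z))

# A fixated, highly discriminable object (duck-like) vs. an unfixated,
# hard-to-discriminate one (ramekin-like).
print(p_choose(fixated=True, utility=0.9))   # ~0.91 -> likely to be chosen for report
print(p_choose(fixated=False, utility=0.1))  # ~0.31 -> unlikely to be chosen
```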

Unlike a classic "linear" search task, our "circular" search and choice paradigm allowed us to measure 1) how much information people gathered from objects via saccades, 2) their metaknowledge about the information they gathered, and 3) how inherent object properties (i.e. rotational discriminability) affected fixations and choices.

Results
You can read the paper for the full results, but in brief, we found the following:

1. Participants were more likely to choose an object they had fixated, showing that they had more confidence in the items they fixated.
2. We built an "information uptake" model that links the information gathered (i.e. the uncertainty reduced) across fixations to report choices (a minimal sketch of the idea is shown after this list).
3. We also showed that people were more likely to fixate and to choose the higher-utility objects, and that they accounted for the relative utility of objects within an array. This is really cool, because it shows that they could infer the rotational properties of the objects (i.e. that a duck will be easier than a ramekin) just from a single viewpoint of the object. Even more interesting, they seemed to do this using their peripheral representation of the objects, and used it to guide saccades to high-utility candidate objects. This provides new insight into the representations that might influence saccade planning and sampling strategies.
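For point 2, here is a minimal sketch of the intuition behind an information uptake model: uncertainty about an object's orientation shrinks as fixation time on it accumulates, and the amount of uncertainty reduced is the information taken up. The exponential form and the rate parameter are assumptions for illustration, not the equations from the paper.

```python
import numpy as np

def remaining_uncertainty(fixation_time_s, initial_uncertainty=1.0, rate=2.0):
    """Uncertainty left about an object after fixating it for `fixation_time_s` seconds.

    Assumes (for illustration only) an exponential decay of uncertainty with
    accumulated fixation time.
    """
    return initial_uncertainty * np.exp(-rate * fixation_time_s)

def information_uptake(fixation_time_s, initial_uncertainty=1.0, rate=2.0):
    """Uncertainty reduced, i.e. the information gained from fixating the object."""
    return initial_uncertainty - remaining_uncertainty(
        fixation_time_s, initial_uncertainty, rate)

# Longer (or repeated) fixations reduce more uncertainty; objects with greater
# uptake should be reported more precisely and chosen more often.
for t in (0.0, 0.25, 0.5, 1.0):
    print(f"{t:.2f} s fixation -> uptake {information_uptake(t):.2f}")
```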