Abstract

Eye scanning patterns while viewing pictures have provided valuable information in many domains of visual cognition. Such patterns are determined by the type of image viewed (e.g., faces, scenes) as well as the task individuals are asked to perform (e.g., visual search, memory). Here we show that another key factor that significantly influences eye scanning patterns but has been mostly overlooked is the individual observer. During face viewing, we found that individuals showed diverse scanning patterns that, in many cases, were inconsistent with the typical triangular pattern that is commonly observed when eye scanning patterns are averaged across individuals. These idiosyncratic eye scanning patterns were not random but highly stable, even when examined 18 months later. Interestingly, these eye-tracking patterns were not predictive of behavioral performance. Such stable and unique scanning patterns may represent a specific behavioral trait/signature and be formed early in development, reflecting idiosyncratic strategies for performing visual recognition tasks.

Subjects performed an old/new face recognition task. Twelve faces were presented during a study phase, during which subjects were asked to memorize the faces. Each face was preceded by a 1000-ms fixation dot. Then the face was presented for 750 ms, followed by a 530-ms interstimulus interval. The test phase was initiated by the experimenter after the completion of the study phase. Subjects were presented with instructions on the computer screen that indicated which key they needed to press for old or new faces. After reading the instructions, the subjects pressed a key, which initiated the presentation of 24 faces: 12 that had been presented in the study phase and 12 new faces. Each trial started with a 1000-ms fixation point followed by a face presented for 1250 ms. Subjects were asked to press one key for old faces and another key for new faces. Eye-tracking data collection was terminated if a response was made before 1250 ms. The next trial started after the subject made the key press. To avoid starting-location biases (Arizpe et al., 2012), fixation points were presented either to the left or to the right prior to the centrally presented face.

Stimulus images were presented in the same place on the monitor but were not perfectly aligned. To allow for accurate processing of the eye-tracking data, recorded samples were retroactively aligned according to the actual stimulus presented. Each stimulus image was manually labeled with nine landmark points: the inner and outer eye corners, the centers of the pupils, the tip of the nose, and the corners of the mouth. An average location for each of the nine points was calculated across all stimuli; then, for each stimulus, a best-fit transform was found. The transform was allowed four degrees of freedom: horizontal and vertical translation and horizontal and vertical scaling. The transform minimizes the sum of squared distances between each of the stimulus's nine landmark points and their corresponding average locations. This transform was applied to the recorded data samples according to the stimulus being viewed. This resulted in data being scaled by 3.7% and 3.8% (up or down) on average on the x- and y-axis, respectively, and translated by 20 and 21 pixels on average on the x- and y-axis (see Supplemental material, Figure 3S, for an analysis of unaligned faces).
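Because the allowed transform has independent horizontal and vertical scale and translation, the least-squares fit separates into two one-dimensional regressions, one per axis. The sketch below illustrates this alignment step under that assumption, using NumPy; the function names, array shapes, and the per-axis regression formulation are illustrative, not taken from the authors' code.

```python
import numpy as np

def fit_axis(coords, target_coords):
    # Least-squares scale and translation mapping one stimulus's landmark
    # coordinates onto the average landmark coordinates along a single axis.
    A = np.column_stack([coords, np.ones_like(coords, dtype=float)])
    (scale, shift), *_ = np.linalg.lstsq(A, target_coords, rcond=None)
    return scale, shift

def align_gaze_samples(landmarks, mean_landmarks, gaze_xy):
    """Fit the 4-parameter transform (horizontal/vertical scale and translation)
    that minimizes the summed squared distance between this stimulus's nine
    landmarks and their average locations, then apply it to the gaze samples
    recorded while that stimulus was viewed.

    landmarks, mean_landmarks: (9, 2) arrays of x, y pixel coordinates.
    gaze_xy: (n, 2) array of recorded gaze samples for this stimulus.
    """
    sx, tx = fit_axis(landmarks[:, 0], mean_landmarks[:, 0])
    sy, ty = fit_axis(landmarks[:, 1], mean_landmarks[:, 1])
    aligned = np.empty_like(gaze_xy, dtype=float)
    aligned[:, 0] = sx * gaze_xy[:, 0] + tx
    aligned[:, 1] = sy * gaze_xy[:, 1] + ty
    return aligned
```

In this formulation, minimizing the summed squared distance over both coordinates jointly is equivalent to solving the two per-axis regressions, since the x and y error terms do not interact.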
The similarity measure is based on the normalized cross-correlation between the heat maps of two sets of trials. First, a heat map was generated for each set (see Figure 2). Each cell in the heat map matrix represents the cumulative number of data points present at the cell's coordinates. Each heat map was scaled down by a factor of two in each dimension to increase computation speed and then smoothed with a Gaussian filter (kernel standard deviation of 7 pixels, or 0.6° of visual angle). Normalized cross-correlation was used to attain a similarity value between the two heat maps. Each map is treated as a single vector, and the desired value is Pearson's r, the correlation between these two vectors. Therefore, similarity ranges between one, for two blocks with heat maps identical in pattern, and zero, for two blocks with uncorrelated heat maps. Negative values are theoretically possible but rarely emerge in practice. It is important to note that this method for comparing gaze patterns ignores the order of the samples across time; only the location of the data samples is taken into account.
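A minimal sketch of this similarity computation, assuming NumPy and SciPy; the screen dimensions, the interpolation order used for downscaling, and the resolution at which the 7-pixel smoothing is applied are assumptions for illustration, not details from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def heat_map(gaze_xy, screen_hw=(768, 1024), sigma_px=7.0):
    # Cumulative count of gaze samples falling in each pixel of the screen.
    counts, _, _ = np.histogram2d(
        gaze_xy[:, 1], gaze_xy[:, 0],
        bins=screen_hw,
        range=[[0, screen_hw[0]], [0, screen_hw[1]]],
    )
    # Downscale by a factor of two in each dimension, then smooth with a
    # Gaussian kernel (standard deviation of 7 pixels in the paper).
    small = zoom(counts, 0.5, order=1)
    return gaussian_filter(small, sigma=sigma_px)

def heat_map_similarity(map_a, map_b):
    # Normalized cross-correlation at zero lag: each map is flattened into a
    # vector and the similarity is Pearson's r between the two vectors.
    return np.corrcoef(map_a.ravel(), map_b.ravel())[0, 1]
```

The similarity between two blocks of trials would then be heat_map_similarity(heat_map(samples_a), heat_map(samples_b)), i.e., Pearson's r between the two smoothed maps treated as single vectors.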