University of Central Florida Undergraduate Research Journal - Perception of Facial Expressions in Social Anxiety and Gaze Anxiety


Descriptive Statistics

Descriptive statistics and bivariate correlations for the total sample (n=392) were calculated from scores on the SPAI-23, DASS-21, and GARS. Scores on the SPAI-23 (M=19.28, SD=11.76) were significantly correlated with scores on the GARS (M=28.71, SD=18.31; r(392)=.60, p<.001). Scores on the GARS were significantly correlated with scores on the DASS-21 (M=34.83, SD=26.74; r(392)=.52, p<.001). Finally, scores on the SPAI-23 were significantly correlated with scores on the DASS-21 (r(392)=.38, p<.001). Examining the subscales of the DASS-21 and GARS separately, participants scored higher on the DASS-21 Stress subscale (M=14.27, SD=10.29) than on the Anxiety subscale (M=10.30, SD=9.46) and the Depression subscale (M=10.26, SD=9.92). Participants also reported more fear of eye contact (M=15.72, SD=9.51) than avoidance of eye contact (M=13.00, SD=9.52) as measured by the GARS. Neither of these subscale differences was statistically significant.

On the facial recognition tasks, the average accuracy at identifying emotions was 71.10% (SD=9.90) when the eye region was presented individually and 79.94% (SD=8.36) when the entire face was presented. Images of the partitioned eye regions received an average intensity rating of 3.14 (SD=0.50), and images of the entire face received an average intensity rating of 3.29 (SD=0.47).

Regression Analysis

The suggested cutoff score for high social anxiety on the SPAI-23 is 28 (Schry et al., 2012). Of the 392 valid participants, 89 scored above this cutoff, and the regression and correlation analyses reported below include only these 89 participants. Four linear regressions were conducted with GARS total scores entered into the models as the predictor variable. The four outcome variables were (1) accuracy at identifying emotion in the eyes, (2) accuracy at identifying emotion in the entire face, (3) the average perceived intensity of emotion in the eyes, and (4) the average perceived intensity of emotion in the entire face. Given the number of hypotheses tested, a Bonferroni correction was applied, and the significance threshold was set at p=.01.
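A Bonferroni correction simply divides the familywise alpha by the number of tests. A minimal sketch of the arithmetic, assuming the conventional familywise alpha of .05 (the exact familywise level used in the study is not stated; note that .05/4 gives .0125, slightly more lenient than the .01 threshold adopted here):

```python
# Bonferroni correction: divide the familywise alpha by the number of tests.
# Assumes a conventional familywise alpha of .05 (not stated in the text).
familywise_alpha = 0.05
n_tests = 4  # the four regressions described above

adjusted_alpha = familywise_alpha / n_tests
print(adjusted_alpha)  # 0.0125
```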

GARS total scores did not significantly predict identification accuracy for the eye region (p=.48) or for the entire face (p=.54), nor did they significantly predict intensity ratings for the partitioned eye region (p=.078).

The only significant regression model was the one with GARS total scores as a predictor of the average perceived intensity of emotion in the face (see Table 1). This model produced a significant R2 of .13 [F(1, 88)=12.63, p=.001]. GARS total scores were positively related to perceived intensity (B=.01, β=.36, t=3.55, p<.001). When these results were examined by subscale, another significant linear model emerged with gaze avoidance as the predictor and perceived facial intensity as the outcome [R2=.16, F(1, 88)=16.47, p<.001]. Higher levels of gaze avoidance were related to higher levels of perceived emotional intensity in faces (B=.02, β=.40, t=4.06, p<.001; see Table 2).

To investigate whether the relationship between gaze anxiety and the perception of emotional intensity was indeed specific to those with high social anxiety, rather than a linear trend across the entire population, I conducted an additional multiple regression analysis using the total sample (n=392). In this model, centered SPAI-23 scores, centered GARS total scores, and a SPAI-23 × GARS interaction term were entered as predictors of average perceived intensity [R2=.05, F(3, 389)=6.25, p<.001]. There was a significant interaction between gaze anxiety and social anxiety (B=.001, SEB=.001, β=.145, p=.005).
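Mean-centering each predictor before forming the product term is the standard way to build an interaction model like the one above, because it reduces collinearity between the main effects and the interaction term. A minimal sketch of that preprocessing step, using made-up scores rather than the study data:

```python
from statistics import mean

# Hypothetical SPAI-23 and GARS scores for illustration only (not the study data).
spai = [12.0, 30.0, 45.0, 22.0, 8.0]
gars = [20.0, 41.0, 55.0, 33.0, 15.0]

# Center each predictor at its sample mean, then form the product (interaction) term.
spai_c = [x - mean(spai) for x in spai]
gars_c = [x - mean(gars) for x in gars]
interaction = [a * b for a, b in zip(spai_c, gars_c)]

# Centered predictors have mean ~0, so each main effect is interpreted
# at the average level of the other predictor.
print(round(mean(spai_c), 10), round(mean(gars_c), 10))
```

The three columns `spai_c`, `gars_c`, and `interaction` would then be entered together as predictors in an ordinary least-squares regression.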

Correlation between GARS and type of emotion

Next, I conducted bivariate correlations among GARS total scores, GARS subscale scores (fear and avoidance), and the average accuracy of identifying each of the five expressions presented in the recognition task (anger, fear, happiness, neutral, and sadness). Participants were more accurate at identifying angry expressions when they reported higher GARS total scores [r(89)=.21, p=.045] and higher GARS fear scores [r(89)=.24, p=.027]. Participants were less accurate at identifying neutral faces when they had higher GARS total scores [r(89)=-.22, p=.04] and higher GARS avoidance scores [r(89)=-.23, p=.033].
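The bivariate correlations reported here are Pearson product-moment correlations. A self-contained sketch of the computation, with made-up scores rather than the study data:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Made-up example: higher GARS totals paired with higher anger-identification accuracy.
gars_total = [10, 25, 40, 55, 70]
anger_accuracy = [0.60, 0.62, 0.70, 0.68, 0.75]
print(round(pearson_r(gars_total, anger_accuracy), 3))
```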

Comparison of average performance on facial recognition task

The entire sample (n=392) was used when comparing average performance on the facial recognition tasks. A series of paired-samples t-tests compared recognition performance when participants were shown only the eye regions to performance when they were shown the entire face. Accuracy was significantly lower when participants were presented only the eye region (M=71.12, SD=9.90) than when they were presented the entire face (M=79.89, SD=8.41; t(391)=-14.94, p<.001). Average intensity ratings were also significantly lower for eyes (M=3.14, SD=0.50) than for faces (M=3.29, SD=0.47; t(391)=-7.85, p<.001), although the absolute difference was small. Regarding the gender of the image presented, participants were significantly more accurate at identifying female faces (M=86.89, SD=9.97) than male faces (M=72.88, SD=11.79; t(391)=-20.01, p<.001).
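A paired-samples t statistic is the mean of the within-participant differences divided by its standard error. A minimal sketch of that computation, using illustrative accuracy scores rather than the study data:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(x, y):
    """Paired-samples t: mean difference over its standard error (df = n - 1)."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))

# Illustrative accuracy proportions for eyes-only vs. whole-face trials
# (five hypothetical participants, not the study data).
eyes_only = [0.70, 0.65, 0.74, 0.68, 0.71]
whole_face = [0.79, 0.75, 0.82, 0.77, 0.80]
print(round(paired_t(eyes_only, whole_face), 3))
```

A large negative t, as in the study's eyes-versus-face comparisons, indicates the first condition scored consistently lower than the second.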

To assess whether performance differed depending on the gender of the participant, I conducted a MANOVA with gender as the fixed factor and average perceived intensity and accuracy as the dependent variables. There was a significant multivariate effect of participant gender [Wilks' λ=.95, F(4, 202)=2.85, p=.02]. Tests of between-subjects effects indicated that the difference between males and females was driven by accuracy (F=6.50, p=.01); females were about 2.84% more accurate at identifying emotion than males (p<.05).
