Perception of Facial Expressions in Social Anxiety and Gaze Anxiety

Hypothesis

I predict that the amount of gaze anxiety reported by socially anxious participants will be negatively related to performance on a facial recognition task. Specifically, I expect anxiety toward the eye region to correlate with lower accuracy in identifying the emotional states of faces. In addition, participants with higher levels of gaze anxiety are predicted to perceive facial expressions as more intense than they actually are.

Methods

Participants

After excluding outliers and incomplete responses, a convenience sample of 392 University of Central Florida psychology students was obtained: 104 males and 288 females with an average age of 21.07 years (SD = 5.12). Recruitment took place through the UCF online SONA research participation system, in which psychology students can receive class credit for participating in research studies. The sample included 89 participants (16 male, 73 female) who reported a high level of social anxiety traits, with an average age of 20.38 years (SD = 3.46). Based on a power analysis, a sample of 105 participants high in social anxiety traits was required to achieve 95% power at alpha = 0.05 in my regression analysis, assuming a moderate effect size of 0.3.
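The software and parametrization behind this power analysis are not reported, so the sketch below is a hypothetical reconstruction using the Fisher z approximation for a correlation-sized effect in a simple regression; the exact sample size it returns depends on the tails and effect-size convention assumed.

```python
# Hypothetical reconstruction of the a priori power analysis; the original
# software and settings are not reported, so treat this as illustrative.
import math
from scipy.stats import norm

def required_n(r=0.30, alpha=0.05, power=0.95, two_tailed=True):
    """Sample size needed to detect correlation r via the Fisher z approximation."""
    z_alpha = norm.ppf(1 - alpha / (2 if two_tailed else 1))
    z_beta = norm.ppf(power)
    fisher_z = 0.5 * math.log((1 + r) / (1 - r))  # Fisher r-to-z transform
    return math.ceil(((z_alpha + z_beta) / fisher_z) ** 2 + 3)

print(required_n())  # required n under these particular assumptions
```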

Measures

Depression Anxiety Stress Scale-21 (DASS-21; Lovibond & Lovibond, 1995). Because depression and emotional state are related to facial recognition performance and social anxiety, the DASS-21 was administered (Hills & Lewis, 2011; Joormann & Gotlib, 2006; APA, 2013). The scale provides a comprehensive look at current emotional state along three subscales: depression, anxiety, and stress. The DASS-21 has demonstrated strong psychometric qualities, and scores on it correlate with scores on the Beck Depression Inventory (BDI) and the Beck Anxiety Inventory (BAI; Lovibond & Lovibond, 1995; Antony, Bieling, Cox, Enns, & Swinson, 1998). Each of the three subscales contains seven items asking participants how much each statement applied to them on a scale of 0 ("Did not apply to me at all") to 3 ("Applied to me very much or most of the time"). The subscale subtotals are then doubled and summed for an aggregate score, giving a maximum of 42 per subscale and 126 overall. Lovibond & Lovibond (1995) recommend a score of 28 or higher on the depression subscale as indicative of severe depressive traits. The brief form was used to reduce fatigue effects.
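As a concrete illustration of this scoring rule, the sketch below assumes the 21 responses arrive already grouped by subscale; on the published form the items are interleaved, so a real implementation would first map item numbers to subscales.

```python
# Minimal sketch of DASS-21 scoring; assumes responses are pre-grouped by
# subscale (depression, anxiety, stress), which is a simplification.
def score_dass21(responses):
    assert len(responses) == 21 and all(0 <= r <= 3 for r in responses)
    scores = {
        "depression": 2 * sum(responses[0:7]),    # doubled subtotal, max 42
        "anxiety":    2 * sum(responses[7:14]),   # max 42
        "stress":     2 * sum(responses[14:21]),  # max 42
    }
    scores["total"] = sum(scores.values())        # max 126
    return scores
```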

Social Phobia and Anxiety Inventory-23 (SPAI-23; Roberson-Nay, Strong, Nay, Beidel, & Turner, 2007). Participants were assessed for social anxiety using the SPAI-23, a brief version of the Social Phobia and Anxiety Inventory. Scores on this measure, along with subscale scores on the DASS-21, were used to account for anxiety severity. The SPAI-23 has shown good convergent validity with similar self-report measures and correlates highly with scores on the full-length SPAI (Schry & Roberson-Nay, 2012). Additionally, it can be completed in about two minutes (Roberson-Nay et al., 2007). The SPAI-23 asks participants to report traits related to both social anxiety and agoraphobia. Because agoraphobia and social anxiety arise in similar situations, the difference score isolates distress specific to social anxiety: the total score on the SPAI-23 is calculated by subtracting the agoraphobia subscale total from the social phobia subscale total (Roberson-Nay et al., 2007; Schry & Roberson-Nay, 2012). Under these scoring guidelines, the maximum obtainable difference score is 48.
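The difference scoring reduces to a single subtraction once the two subscale totals are computed; the helper below is a hypothetical illustration, with the cutoff of 28 taken from the grouping used later in the analyses.

```python
# Sketch of SPAI-23 difference scoring; subscale totals are assumed to be
# computed already from the item responses.
def spai23_difference(social_phobia_total, agoraphobia_total):
    return social_phobia_total - agoraphobia_total  # max obtainable: 48

# Example: the >= 28 cutoff later defines the high-social-anxiety group.
is_high_social_anxiety = spai23_difference(38, 6) >= 28  # True
```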

Gaze Anxiety Rating Scale (GARS; Schneier et al., 2011). Gaze avoidance and fear were assessed with the GARS, a relatively new instrument that measures self-reported anxiety about, and avoidance of, eye contact in various social situations. Although many social anxiety self-report measures include items about gaze anxiety, few treat gaze anxiety as an independent construct. The GARS assessed both anxiety and avoidance of gaze, which allowed me to explore the relationship between gaze anxiety and facial perception. Initial investigations have provided evidence for the GARS's reliability and convergent validity within an undergraduate sample (Langer, Rodebaugh, Menatti, Weeks, & Schneier, 2014; Schneier et al., 2011). The questionnaire contains two subscales measuring fear of mutual gaze and avoidance of mutual gaze. For each of 17 social situations, individuals rate their level of fear and their level of avoidance on a scale of 0 ("No anxiety"/"No avoidance") to 3 ("A lot of anxiety"/"Avoid a lot"). The maximum obtainable total score is 102, with 51 on each subscale.
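The GARS scoring is likewise a pair of simple sums; the sketch below is illustrative.

```python
# Sketch of GARS scoring: 17 situations, each rated 0-3 for fear of mutual
# gaze and 0-3 for avoidance of mutual gaze.
def score_gars(fear_ratings, avoidance_ratings):
    assert len(fear_ratings) == len(avoidance_ratings) == 17
    assert all(0 <= r <= 3 for r in fear_ratings + avoidance_ratings)
    fear, avoidance = sum(fear_ratings), sum(avoidance_ratings)  # max 51 each
    return {"fear": fear, "avoidance": avoidance, "total": fear + avoidance}  # max 102
```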

Facial Recognition Questionnaire. To measure participants' ability to identify emotion in faces, a facial recognition task was created. In this task, participants were asked to identify the emotional expressiveness of two different groups of facial stimuli. The first group consisted of pictures of the eye region shown independently of the rest of the face; this region was sectioned off to include the entire eyebrow and the upper portion of the cheek, excluding the forehead and everything below the nostrils. The second group consisted of pictures of the entire face. This design allowed me to examine whether the information processed from the eye region differed from that of the entire face.

Each group of images contained male and female faces expressing anger, fear, happiness, neutrality, or sadness, with a mild and an extreme version of each emotion. There were 20 female images in total (four per emotion, with two pictures at each intensity level) and 20 male images divided in the same fashion. These stimuli were retrieved with permission from a previous study on facial recognition in Asperger's Disorder and Social Phobia (Wong, 2010). Participants identified which emotion was expressed in each picture using a multiple-choice format and also rated the relative intensity of the expressed emotion on a five-point scale ranging from 1 (mild) to 5 (extreme).
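For clarity, the stimulus design can be enumerated as a fully crossed grid; the sketch below simply verifies that the counts described above multiply out to 40 images.

```python
# Illustrative enumeration of the stimulus grid: 2 genders x 5 emotions x
# 2 intensity levels x 2 exemplars = 40 images.
from itertools import product

emotions = ["anger", "fear", "happiness", "neutrality", "sadness"]
stimuli = [
    {"gender": g, "emotion": e, "intensity": i, "exemplar": x}
    for g, e, i, x in product(
        ["male", "female"], emotions, ["mild", "extreme"], [1, 2]
    )
]
assert len(stimuli) == 40  # 20 male + 20 female, as described
```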

Procedure

Participants were screened for age after signing up for the study through the SONA online system. Qualifying participants were redirected to complete the DASS-21, the SPAI-23, the GARS, and the facial recognition tasks on Qualtrics (Provo, UT). The order of the forms was randomized for each participant. Participants were instructed to complete each form in a timely manner and not to dwell on any one image for a prolonged period of time.

Data Analysis

After excluding participants for non-completion of the survey, blatant response error, and high DASS-21 depression scores, a final sample of 392 participants remained. A total of 24 respondents did not complete the survey, and the decision to exclude a portion of the completed data was made on the basis of response times and the nature of the outliers. Six participants who completed the online survey in an unreasonably short time (under seven minutes) were excluded from data analysis to control for response bias. This cutoff was determined from preliminary survey completion rates: given the length of the survey (four separate self-reports totaling 158 items), answering every question truthfully and accurately in seven minutes or less was unlikely. Median completion time was 13 minutes. In addition, extreme outliers were excluded using a two-step procedure. Outliers were initially flagged using the boxplot outlier labeling rule for normal distributions on scores from the facial recognition tasks (Banerjee & Iglewicz, 2007). Flagged data points were then examined on a case-by-case basis to judge their validity and influence. The 10 excluded outliers showed obvious response patterns, such as answering all 0s or all 1s on more than two self-reports. Finally, to control for the potential confounding effects of depression, 14 participants who scored 28 or higher on the depression subscale of the DASS-21 were excluded. In total, 30 completed cases were excluded under these criteria.
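A condensed sketch of this exclusion pipeline is given below. The data frame and its column names (duration_min, accuracy, dass_depression) are hypothetical, and the 1.5 × IQR fence is the conventional choice for the boxplot labeling rule; the original analysis may have parametrized the rule differently.

```python
# Hedged sketch of the exclusion pipeline; column names are hypothetical
# and the 1.5 x IQR fence is the conventional boxplot labeling rule.
import pandas as pd

def apply_exclusions(df: pd.DataFrame) -> pd.DataFrame:
    df = df[df["completed"]]          # drop non-completers
    df = df[df["duration_min"] >= 7]  # completion-time cutoff
    # Flag extreme recognition-task scores; flagged cases were reviewed by
    # hand in the actual procedure, not dropped automatically.
    q1, q3 = df["accuracy"].quantile([0.25, 0.75])
    fence = 1.5 * (q3 - q1)
    df = df.assign(
        flagged=(df["accuracy"] < q1 - fence) | (df["accuracy"] > q3 + fence)
    )
    # Depression confound: exclude severe DASS-21 depression scores.
    df = df[df["dass_depression"] < 28]
    return df
```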

Performance on the recognition task was measured as recognition accuracy and as average perceived intensity of emotion on the 1-5 Likert scale. After the initial analysis, scores on these variables could be further divided by the type of emotion presented in the image (anger, fear, happiness, neutrality, and sadness) or by the gender of the stimulus face. Average accuracy is reported as a percentage, and average perceived intensity is reported out of a maximum score of five.
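These two outcome variables reduce to a percent-correct score and a mean rating; the helper below is a hypothetical illustration of that computation for one participant.

```python
# Hypothetical scoring of one participant's recognition task: accuracy as
# percent correct, perceived intensity as the mean of the 1-5 ratings.
def score_recognition(emotion_choices, answer_key, intensity_ratings):
    correct = sum(c == k for c, k in zip(emotion_choices, answer_key))
    accuracy_pct = 100.0 * correct / len(answer_key)
    mean_intensity = sum(intensity_ratings) / len(intensity_ratings)
    return accuracy_pct, mean_intensity
```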

I conducted a series of regression analyses entering GARS scores into the model as a predictor of recognition accuracy and of average perceived intensity. For these regressions, I split the data into two groups: participants high in social anxiety (≥28 on the SPAI-23) and participants in the normal range (<28 on the SPAI-23; Roberson-Nay et al., 2007; Schry & Roberson-Nay, 2012). Dividing the data this way let me explore gaze anxiety's relationship to facial perception within socially anxious individuals only (n=89). Significant models were further explored by entering the two GARS subscales as separate predictors. I also conducted an exploratory correlation analysis to examine the relationship between gaze anxiety and facial recognition accuracy for each of the five emotions presented.
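One way these regressions could be specified is sketched below with statsmodels; the data frame df and its column names continue the hypothetical ones from the exclusion sketch.

```python
# Sketch of the primary regressions (hypothetical column names): GARS total
# predicting accuracy and mean intensity within the high-social-anxiety group.
import statsmodels.formula.api as smf

high_sa = df[df["spai23_diff"] >= 28]  # n = 89 in the actual sample
accuracy_model = smf.ols("accuracy ~ gars_total", data=high_sa).fit()
intensity_model = smf.ols("mean_intensity ~ gars_total", data=high_sa).fit()

# Significant models re-fit with the two GARS subscales as separate predictors.
subscale_model = smf.ols("accuracy ~ gars_fear + gars_avoidance",
                         data=high_sa).fit()
print(accuracy_model.summary())
```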

Next, I analyzed the difference between full-face perception and eye-region perception by conducting paired t-tests and a MANOVA. Paired t-tests compared facial recognition against eye-region recognition and male against female stimulus imagery. To test for effects of participant gender, the MANOVA used gender as the fixed factor and recognition accuracy and perceived intensity as the dependent variables. Because this test is sensitive to unequal group sizes, a simple random sample of 104 female data points was drawn to match the male participant group for this analysis.
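The sketch below outlines these comparisons with scipy and statsmodels, again with hypothetical column names and the data frame df assumed from the earlier sketches.

```python
# Sketch of the comparison analyses (hypothetical column names).
import pandas as pd
from scipy.stats import ttest_rel
from statsmodels.multivariate.manova import MANOVA

# Paired t-tests: full face vs. eye region, and male vs. female stimuli.
t_region = ttest_rel(df["face_accuracy"], df["eye_accuracy"])
t_stim_gender = ttest_rel(df["male_stim_accuracy"], df["female_stim_accuracy"])

# MANOVA with participant gender as the fixed factor; the female group is
# first downsampled to n = 104 to match the male group.
females = df[df["gender"] == "female"].sample(n=104, random_state=0)
balanced = pd.concat([df[df["gender"] == "male"], females])
manova = MANOVA.from_formula("accuracy + mean_intensity ~ gender", data=balanced)
print(manova.mv_test())
```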
