  • Publication
    Restricted
    How Traders' Appearances and Moral Descriptions Influence Receivers' Choices in the Ultimatum Game
    This work reports on a series of experiments involving 960 participants (aged 20-30 years and balanced by gender) asked to play the receiver role in a modified version of the Ultimatum Game, in which, together with information on the offer's fairness (e.g. 40 (fair) vs. 10 (unfair) out of 100 euros), a photo depicted the trader's appearance (trustworthy vs. untrustworthy) and a text provided his moral description (honest vs. dishonest). Receivers were asked to justify their decision with reference to the trader's appearance, the moral judgment, and the fairness of the offer, and to report how these variables affected their emotional feelings. Data analysis shows that in all conditions containing a fair offer, the trader's appearance plays a significant role in the receivers' acceptance rate, whereas the moral description plays a significant role only in conditions containing an unfair offer. However, when asked to justify their choices, subjects do not report being influenced by the trader's appearance; rather, they provide roughly equal numbers of justifications referring to the offer amount and the moral judgment. As for the emotions driving their decisions, non-converging feelings are observed at both the intra- and inter-group level. © 2017 IEEE.
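The abstract above describes a 2×2×2 factorial design (offer fairness × trader appearance × moral description). As a minimal sketch, the eight conditions can be enumerated as follows; all variable names and the euro amounts' pairing are illustrative assumptions, not the paper's actual materials.

```python
from itertools import product

# Hypothetical sketch of the 2x2x2 factorial design:
# offer fairness x trader appearance x moral description.
FAIRNESS = [("fair", 40), ("unfair", 10)]      # share of 100 euros offered
APPEARANCE = ["trustworthy", "untrustworthy"]  # trader's photo
MORAL = ["honest", "dishonest"]                # textual moral description

def build_conditions():
    """Return the eight experimental conditions as dictionaries."""
    conditions = []
    for (fairness, offer), appearance, moral in product(FAIRNESS, APPEARANCE, MORAL):
        conditions.append({
            "fairness": fairness,
            "offer_eur": offer,
            "appearance": appearance,
            "moral_description": moral,
        })
    return conditions

if __name__ == "__main__":
    for condition in build_conditions():
        print(condition)
```

Crossing the three binary factors yields the eight receiver conditions whose acceptance rates the study compares.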
  • Publication
    Restricted
    Effects of Emotional Visual Scenes on the Ability to Decode Emotional Melodies
    An effective change in Human Computer Interaction requires accounting for how communication practices are transformed in different contexts, how users perceive the interaction with a machine, and how sensitively a machine interprets users' communicative signals and activities. To this aim, the present paper investigates whether and how positive and negative visual scenes may alter listeners' ability to decode emotional melodies. Emotional tunes were played alone and together with either positive, negative, or neutral emotional scenes. Afterwards, subjects (8 groups of 38 subjects each, balanced by gender) were asked to decode the emotional feeling aroused by the melodies, ascribing to them either emotional valences (positive, negative, I don't know) or emotional labels (happy, sad, fear, anger, another emotion, I don't know). It was found that dimensional emotional features, rather than emotional labels, strongly affect cognitive judgements of emotional melodies, and that musical emotional information is most effectively retained when the task is to assign labels rather than valence values to melodies. In addition, significant misperception effects are observed when happy or positively judged melodies are played concurrently with negative scenes.
  • Publication
    Restricted
    On the Amount of Semantic Information Conveyed by Gestures
    This paper investigates whether and how semantic information conveyed by gestures supports communication effectiveness. The research hypothesis was operationalized as a word retrieval task. To this aim, 140 subjects (73 males, 67 females) aged between 18 and 35 years were recruited at the University of Salerno (Italy). They underwent a memory task after being asked to watch video clips explicating 15 everyday words through 4 different presentation modes: a) audio only, b) gestures only, c) audio and articulatory (complex auditory) information, and d) audio, gestures, and articulatory (multimodal) information. It was found that semantic information is most effectively retained when conveyed through the multimodal mode, and that the gestures-only mode outperforms both the audio-only and the complex auditory modes. It was also found that females have significantly higher recall ability than males, regardless of the experimental condition.
  • Publication
    Open Access
    Impairments in decoding facial and vocal emotional expressions in high functioning autistic adults and adolescents
    The present investigation shows that the gender of the stimuli, age, and emotional category affect the ability of adults and adolescents with Autistic Spectrum Conditions (ASC) to decode facial and vocal emotional expressions. A total of 60 subjects participated in the research: 15 ASC and 15 control adolescents aged 10-14 years, and 15 ASC and 15 control young adults aged 20-24 years. Their task consisted of decoding: a) 24 adult and 24 child facial emotional expressions of happiness, sadness, anger, fear, surprise, and disgust; and b) 20 adult vocal emotional expressions of the same emotions (except disgust). Significant differences were observed between ASC participants and their typically developed peers. The data suggest that the gender and type (voices or faces) of the stimuli and the participants' age affect the emotion recognition process, making it difficult to define a common, shared pattern of emotional expression recognition across autistic and control groups. These results suggest that efficient and effective e-health technologies need to be able to learn and adapt to users' individual traits and subjective needs in order to offer personalized assistance and support.