Publication

Existing Questionnaires: IVA Proceedings 2013-2018

In this study, we collected 189 constructs from the IVA Proceedings 2013-2018 and developed an Evaluation Instrument Model for human interaction with artificial social agents (ASAs). The model contains 7 main categories: (1) Agent's Basic Properties, (2) Agent's Social Traits, (3) Agent's Impression Left by Interaction, (4) Agent's Role Performance, (5) (Human-Agent) Interaction Quality, (6) Human's Impression Left by Interaction, and (7) Human's Attributes to Support Interaction.

The result of the study is reported in:

Resulting data:

 


Study 1: Defining Categories

In this study, we asked the participants to place the 189 constructs into the categories of the Evaluation Instrument Model, and to add more categories if necessary. Using a 50% agreement cut-off value, we found that 89 constructs (47%) could be placed into one particular category, 99 constructs (52%) could be placed into two particular categories, and only 1 construct could not be placed into any category. Among the 188 placed constructs, 11 were placed outside the 7 main categories (i.e., under External Variables, Process Variables, Outcome Variables, or Other).
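The 50% agreement cut-off can be sketched as follows. This is an illustrative reconstruction, not the study's actual analysis code; the function name and the category labels in the example are invented.

```python
# Hypothetical sketch of the 50% agreement rule used in Study 1:
# a construct is assigned to every category that at least half of
# the participants placed it in.
from collections import Counter

def assign_categories(placements, cutoff=0.5):
    """placements: one category label per participant for one construct."""
    counts = Counter(placements)
    n = len(placements)
    # keep every category that reaches the agreement cut-off
    return [cat for cat, c in counts.items() if c / n >= cutoff]

# 6 of 10 participants agree on one category -> a single placement
print(assign_categories(
    ["Agent's Social Traits"] * 6 + ["Interaction Quality"] * 4))
```

With a 5-5 split, both categories reach the cut-off, which corresponds to the constructs that could be placed into two particular categories; when no category reaches 50%, the construct remains unplaced.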

The result of the study is reported in:

Resulting data:


Study 2: Defining Constructs

In this study, we asked the participants to organize 177 constructs into groups using 7 card-sorting tasks corresponding to the 7 main categories of the Evaluation Instrument Model. Using a 50% agreement cut-off value, we found that 25 cards (14%) could not be included in any card-sorting group. Based on these groups, we identified a final set of 19 constructs, some of which have dimensions.

The result of the study is reported in:

Resulting data:


Study 3: Collecting Questionnaire Items

In this study, we collected 431 relevant questionnaire items for the 19 constructs and their dimensions.

The result of the study is reported in:

Resulting data:


Study 4: Defining Questionnaire Items

In this study, we asked the participants to validate the questionnaire items collected in the previous study. This resulted in 207 content-validated items for the 19 constructs and their dimensions.

The result of the study is reported in:

Resulting data:


Study 5: Collecting Prototypical ASAs

In this study, we asked members of the workgroup to join efforts in collecting existing artificial social agents, including a video link and description for each (if available). Three sets of videos were then selected as stimuli for the subsequent studies.

The result of the study is reported in:

Resulting data:


Study 6: Reliability Analysis of the Questionnaire Items

This is the first study into the validation of the questionnaire instrument for evaluating human interaction with an artificial social agent. It involved crowdworkers registered on an online crowdsourcing platform, who were asked to use the questionnaire instrument to rate an interaction between an agent and a human user displayed in a 30-second video clip (resulting from Study 5: Collecting Prototypical ASAs). The results of this study were used to analyze the internal consistency between the items of the questionnaire's measurement constructs. The analysis resulted in 131 reliability-analyzed questionnaire items for the 19 constructs and their dimensions.
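Internal consistency between the items of a construct is commonly summarized by Cronbach's alpha. The sketch below shows that computation on invented ratings; only the formula reflects the method, not the study's actual data or code.

```python
# Hedged sketch of an internal-consistency check as in Study 6:
# Cronbach's alpha over the items of one measurement construct.
import numpy as np

def cronbach_alpha(item_scores):
    """item_scores: 2-D array, rows = respondents, columns = items."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]                         # number of items
    item_var = x.var(axis=0, ddof=1)       # sample variance of each item
    total_var = x.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_var.sum() / total_var)

# invented ratings: 4 respondents x 3 items of one construct
ratings = np.array([[4, 5, 4], [2, 2, 3], [5, 4, 5], [3, 3, 3]])
print(round(cronbach_alpha(ratings), 2))  # high alpha = consistent items
```

Items whose removal raises alpha are typical candidates for exclusion in a reliability analysis of this kind.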

The result of the study is reported in:

Resulting data:


Study 7: Construct Validity: Convergent and Discriminant Validity analysis

Study 7 is the second study into the validation of the questionnaire instrument for evaluating human interaction with an artificial social agent. It involved crowdworkers on an online crowdsourcing platform, who were asked to use the questionnaire instrument to rate an interaction between an agent and a human user displayed in a 30-second video clip. Each participant was randomly assigned to rate one of 14 different ASA prototypes. The gathered data were analyzed to examine the association of the questionnaire items with the latent constructs, i.e., construct validity. The analysis included several factor analysis models and resulted in the selection of 90 items for inclusion in the long version of the ASA questionnaire. In addition, a representative item for each construct or dimension was selected to create a 24-item short version of the ASA questionnaire. Whereas the long version is suitable for a comprehensive evaluation of human-ASA interaction, the short version allows a quick analysis and description of the interaction with the ASA. To support reporting ASA questionnaire results, we also put forward an ASA chart, which provides a quick overview of an agent's profile.
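The intuition behind convergent and discriminant validity can be illustrated with simulated data: items of the same construct should correlate more strongly with each other than with items of other constructs. This is a toy sketch, not the factor analysis pipeline the study actually used.

```python
# Illustrative sketch (not the study's analysis): convergent validity
# expects high within-construct item correlations; discriminant
# validity expects low between-construct correlations.
import numpy as np

rng = np.random.default_rng(0)
latent_a = rng.normal(size=200)   # simulated scores on construct A
latent_b = rng.normal(size=200)   # simulated scores on construct B
# two items per construct, each a noisy reflection of its latent score
items = np.column_stack([
    latent_a + rng.normal(scale=0.5, size=200),
    latent_a + rng.normal(scale=0.5, size=200),
    latent_b + rng.normal(scale=0.5, size=200),
    latent_b + rng.normal(scale=0.5, size=200),
])
r = np.corrcoef(items, rowvar=False)
within = (r[0, 1] + r[2, 3]) / 2    # same-construct correlations
between = abs(r[:2, 2:]).mean()     # cross-construct correlations
print(f"within={within:.2f} between={between:.2f}")
```

In a factor-analytic model the same pattern appears as items loading strongly on their own factor and weakly on the others.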

The result of the study is reported in:

Resulting data:

  • Siska Fitrianie, Merijn Bruijnes, Fengxiang Li, Amal Abdulrahman, and Willem-Paul Brinkman. 2022. The artificial-social-agent questionnaire: establishing the long and short questionnaire versions. In Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents (IVA '22). Association for Computing Machinery, New York, NY, USA, Article 18, 1–8. https://doi.org/10.1145/3514197.3549612
  • Siska Fitrianie, Merijn Bruijnes, Fengxiang Li, Amal Abdulrahman, and Willem-Paul Brinkman. 2022. Artificial Social Agent Questionnaire Instrument. (2022). https://doi.org/10.4121/19650846 4TU.ResearchData.
  • Siska Fitrianie, Merijn Bruijnes, Fengxiang Li, Amal Abdulrahman, and Willem-Paul Brinkman. 2022. Data and analysis underlying the research into the Artificial-Social-Agent Questionnaire: Establishing the long and short questionnaire versions. (2022). https://doi.org/10.4121/19758436 4TU.ResearchData.
  • Crowdworkers' answers to the 131 questionnaire items (.csv)

Study 8: Cross Validation Final Questionnaire Set

In Study 8, we determine the generalization performance of the long and short questionnaire versions resulting from Study 7 (i.e., cross-validation: fitting the model on a data set from a new set of ASAs). The study involves crowdworkers on an online crowdsourcing platform. Participants are asked to use the questionnaire instrument to rate an interaction between an agent and a human user displayed in a 30-second video clip. Each participant is randomly assigned to rate one of 15 different ASA prototypes.


Study 9: Concurrent Analysis and A Normative Dataset Development

In Study 9, we compare the newly developed Artificial-Social-Agent (ASA) questionnaire with other existing relevant questionnaires (i.e., concurrent validity) and, at the same time, develop a normative dataset for the ASA questionnaire based on widely used and accessible ASAs. Data are gathered by asking participants (crowdworkers) to rate their interaction with their most familiar ASA on both the ASA questionnaire and the other (existing) questionnaires. We analyze the correlation between the results from the ASA questionnaire and the selected existing questionnaires to examine whether the ASA questionnaire correlates well with them. In parallel, per selected ASA, we collect the participants' ASA questionnaire results to develop a normative dataset.
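A concurrent-validity check of this kind boils down to correlating scores from the two instruments for the same participants. The sketch below uses invented scores purely to illustrate the computation.

```python
# Hypothetical illustration of a concurrent-validity check:
# correlate overall scores from the ASA questionnaire with scores
# from an existing questionnaire completed by the same participants.
import numpy as np

asa_scores = np.array([3.1, 4.2, 2.5, 3.8, 4.6, 2.9])       # invented
existing_scores = np.array([3.0, 4.5, 2.2, 3.5, 4.8, 3.1])  # invented
r = np.corrcoef(asa_scores, existing_scores)[0, 1]
print(f"Pearson r = {r:.2f}")
```

A high positive correlation suggests the two instruments measure related things; in practice this is examined per construct against the matching scale of each existing questionnaire.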


Translation of the ASAQ

Translating the ASAQ into languages other than English allows for the inclusion of other populations in studies and the ability to compare them. That is why we have carried out several projects translating the ASAQ into other languages. Each translation consists of three construction cycles that include forward and backward translations and involve bilingual crowdworkers from an online crowdsourcing platform. Because this is a starting point, we will continue to encourage more translation projects in the future. We also hope that the translation projects and the translated versions of the ASAQ will motivate researchers to study human-ASA interaction among different populations around the world and to study cultural similarities and differences in this area.

  • Chinese:

    Fengxiang Li, Siska Fitrianie, Merijn Bruijnes, Amal Abdulrahman, Fu Guo, and Willem-Paul Brinkman. 2023. Mandarin Chinese translation of the Artificial-Social-Agent questionnaire instrument for evaluating human-agent interaction. Frontiers in Computer Science, Sec. Human-Media Interaction, Volume 5. https://doi.org/10.3389/fcomp.2023.1149305